Synchronizing Chaotic Systems
Summer 2006
Student: Karl J. Dutson Advisor: Dr. Robert Indik
An appropriate definition of Chaos Theory (and one of many) is “the quali-
tative study of unstable, aperiodic behavior in deterministic, non-linear, dynam-
ical systems” [1]. Chaos implies that a system exhibits sensitive dependence on
initial conditions. For the purposes of this research project, we are interested
in two types of dynamical systems which can be chaotic.
The first is a map, of the general form:
xn+1 = F(xn), where F : R^m → R^m.
A second system of equal interest is an autonomous system of ordinary
differential equations (ODE's), of the form:
dy/dt = ẏ = F(y), where F : R^m → R^m.
The specific examples we will use include the logistic map:
xn+1 = axn(1 − xn) , (1)
and the well known Lorenz system [2]:
ẋ = −σx + σy
ẏ = −xz + rx − y
ż = xy − bz ,
where a, σ, r, and b are positive parameters.
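As a concrete reference, both example systems can be written down in a few lines of Python. This is a minimal sketch; the Lorenz parameter values shown are the classic choices, assumed here only for illustration, not values taken from this report.

```python
import numpy as np

# The logistic map (Equation 1); a is its single positive parameter.
def logistic(x, a):
    return a * x * (1.0 - x)

# Right-hand side of the Lorenz system [2]. The defaults sigma = 10,
# r = 28, b = 8/3 are the classic parameter choices (an assumption here).
def lorenz_rhs(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return np.array([-sigma * x + sigma * y,
                     -x * z + r * x - y,
                     x * y - b * z])

print(logistic(0.5, a=1.0))                    # 0.25
print(lorenz_rhs(np.array([0.0, 0.0, 0.0])))   # the origin is an equilibrium
```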
Synchronization is a phenomenon that can occur when two or more copies
of some dynamical system couple together. If two systems are coupled, they
interact with each other. If they synchronize, their behavior is nearly identical,
even though it may still be unpredictable.
Questions that arise concerning two (or more) such systems include: what kind
of coupling leads to synchronization? Can we predict which values of the
system parameters will cause synchronization to occur?
It turns out that the answer is yes, as verified by previous research [3], and
this is possible through determining what are known as Lyapunov Exponents
(LE’s) of a system. Calculating the LE’s of a chaotic system provides a great
deal of information about its dynamics. In particular, the largest LE measures
the rate at which initially close solutions to the system diverge from each other.
Also, previous research [3] revealed that the critical coupling strength for which
two coupled dynamical systems will synchronize is given in terms of the first
LE. Although the LE is a crucial piece of information regarding a complex
system, in practice it can be difficult to calculate. Fortunately, the results of
synchronization offer an adequate method of measuring the largest LE of the
system. However, there are other LE’s of complex systems, which are also
of interest, but are much harder to compute. Our objective is to investigate
the coupling of more than two copies of a dynamical system, and see if the
synchronization of these systems can be described in terms of higher-order LE's.
LE's will be explained in further detail later.
The outline of this report is as follows: We will first examine the logistic
map. This will be done by generating a bifurcation diagram to show how its
behavior depends on its single parameter a. In addition, we will analytically
find some bifurcation points and expressions for fixed points. LE’s will then be
introduced within this context.
Next we will consider two coupled copies of the logistic map, which form a 2-
dimensional system. We will find what parameter value(s) for coupling strength
synchronize the system, for different values of a, and their dependence on the
LE. Following this we will look at higher-dimensional coupled systems, and we
hope to find relations between the synchronization of these systems and their
higher order LE’s.
Recall that a map, in general, is of the form F : R^m → R^m. For any initial
condition x0 ∈ R^m, the map defines a sequence recursively by
xn+1 = F(xn). (2)
Thus the recursive sequence produced by evaluating (or iterating) the map is a
function of the initial condition(s) and the number of iterations, n. For example,
if we set x0 = 0.5 and a = 1, and iterate the logistic map (Equation 1) once we
obtain x1 = 0.25. Then, to iterate the map again, we start with x1 = 0.25 as
our initial input, and the result is x2 = 0.1875, and so on. In this way, for any
integer n ≥ 0, xn is defined.
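This iteration scheme is easy to express in code; the sketch below reproduces the worked example above (x0 = 0.5, a = 1):

```python
# Iterate a one-dimensional map: given x0 and n, return [x_0, x_1, ..., x_n].
def iterate_map(f, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

a = 1.0
orbit = iterate_map(lambda x: a * x * (1.0 - x), 0.5, 2)
print(orbit)   # [0.5, 0.25, 0.1875], as in the text
```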
The iteration of a map can be thought of as the evolution of the state of
some system. A common example is how the population of a species grows
or decreases from one year to the next, based on an initial population. For
example, a caterpillar population changing from generation to generation would
be a suitable application of a map. This means that the number of iterations in
the sequence, n, must be a positive integer; it does not make sense to evaluate
something like x1.75.
Anytime we have a function such as F, we can call it a map and can define
iterates of that function (or map) by
xn = xn(x0). (3)
Also, the same function F can be used to define an autonomous ODE:
dx/dt = F(x), (4)
and for each initial condition x0 there exists a solution
x = x(t, x0). (5)
Note that while both solutions (Equations 3 and 5) depend on an initial con-
dition, the evolution of the map depends on n, the number of iterations, and
the ODE solution depends on time, t. Thus with a continuous flow, such as
Equation 4, it makes sense to evaluate non-integer points or intervals in time,
as time is continuous and non-discrete.
A fixed point of the iterated map satisfies x = F(x), or xn+1 = xn. For
the logistic map, a nice trivial example of a fixed point is x = 0, from which
xn+1 = 0 = xn.
We say that a list x0, x1, . . . , xP −1 is a period P orbit for the map F if
xP = F(xP −1) = x0. If the list x0, . . . , xP −1 is indeed a period P orbit, then
every xj in the list is a fixed point of the map G defined by iterating F P times:
G(x) = F(F(· · · (F(x)) · · · )), (6)
where the dots above mean F is composed with itself P times.
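This characterization can be checked numerically: once the orbit settles, a period 2 point of the logistic map should be fixed under G = f ∘ f but not under f itself. The sketch below uses a = 3.2, a value which (as shown later in this report) lies in the period 2 window; the iteration count is an arbitrary but adequate choice.

```python
# f composed with itself P times, as in Equation 6.
def compose(f, P):
    def G(x):
        for _ in range(P):
            x = f(x)
        return x
    return G

a = 3.2
f = lambda x: a * x * (1.0 - x)

x = 0.7
for _ in range(2000):      # let the orbit settle onto its period 2 attractor
    x = f(x)

G = compose(f, 2)
print(abs(G(x) - x))       # ~0: x is a fixed point of f(f(x))
print(abs(f(x) - x))       # clearly nonzero: x is not a fixed point of f
```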
A good way to tell if a system is stable is to determine whether its long term
behavior is highly dependent on initial conditions. If a system is stable, initial
conditions close to each other will eventually converge into the same orbit, or the
same fixed point. The system will not display the level of sensitivity to initial
conditions that is the trademark of chaos. Conversely, if the system is not stable,
initially close conditions will eventually diverge from each other exponentially,
and perhaps even indefinitely. So, from these two possible outcomes, we can
narrow down whether or not the system is stable by taking two initial conditions
separated by a very small difference, and observing how that difference changes
in the long run.
Suppose that x ∈ R^m is a fixed point of a map F, such that x = F(x). To
determine whether x is stable we slightly perturb x to x + δ0 (where |δ0| is
very small) and apply the map F to x0 = x + δ0. Then δ0 = x0 − x and the
fixed point is stable if δn = xn − x → 0.
Because we are assuming |δn| = |xn − x| is small, a linearization in the
neighborhood of x suffices to check stability:
F(x + δ0) ≈ F(x) + F'(x)δ0, (7)
by the Taylor Theorem. Since x = F(x) and xn+1 = F(x + δn), this gives
δn+1 ≈ F'(x)δn ≈ [F'(x)]^(n+1) δ0. (8)
Therefore
δn ≈ [F'(x)]^n δ0. (9)
Thus the fixed point is stable if [F'(x)]^n → 0. This is true if and only if the
eigenvalues λ of F'(x) all have the property |λ| < 1 [4]. Note that because F
maps R^m to R^m, F'(x) is an m × m matrix - namely, the Jacobian Matrix:
J(x1, x2, . . . , xm) =
[ ∂F1/∂x1   ∂F1/∂x2   . . .   ∂F1/∂xm ]
[ ∂F2/∂x1   ∂F2/∂x2   . . .   ∂F2/∂xm ]
[    .          .       .        .    ]
[ ∂Fm/∂x1   ∂Fm/∂x2   . . .   ∂Fm/∂xm ] .
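For a concrete illustration, the 2D map below (invented for this sketch; it does not appear in the report) has a fixed point at the origin. Estimating the Jacobian by finite differences and checking that every eigenvalue satisfies |λ| < 1 confirms stability:

```python
import numpy as np

# A made-up 2D map with a fixed point at the origin (illustration only).
def F(v):
    x, y = v
    return np.array([0.5 * x + 0.1 * y ** 2,
                     0.3 * y + 0.1 * x ** 2])

# Central-difference estimate of the m x m Jacobian matrix at v.
def jacobian(F, v, h=1e-6):
    m = len(v)
    J = np.zeros((m, m))
    for j in range(m):
        e = np.zeros(m)
        e[j] = h
        J[:, j] = (F(v + e) - F(v - e)) / (2.0 * h)
    return J

eigs = np.linalg.eigvals(jacobian(F, np.zeros(2)))
print(np.abs(eigs))   # all magnitudes < 1, so the fixed point is stable
```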
We will explore the fixed points and stability of the logistic map, which, as
stated above, is our first dynamical system of interest. It is written:
xn+1 = axn(1 − xn) ,
where 0 < a < 4. While this may seem like a very simple function, its behavior
can quickly become quite complex - chaotic even. The difference between chaos
and stability depends only on the value of the parameter “a”.
We started exploring the map numerically, by choosing an initial condition,
an appropriate number of iterations of the map, and varying the value of a. For
each calculation in the following table, the initial condition was x0 = 0.7, and
the number of iterations, n, was 5000. Initial conditions other than x0 = 0.7
(where 0 < x0 < 1) yielded results that varied only slightly from those listed.
The quantity denoted x in the table is the value that the iteration converged to
after the map was applied an appropriate number of times (this was different
depending on a, but was always < 5000), and is an approximate fixed point.
a      x        f'(x)
0.10   0        0.1000
0.25   0        0.2500
0.50   0        0.5000
0.75   0        0.7500
0.99¹  0.0001   0.9899
1.00   0.0002   0.9999
1.10   0.0909   0.9000
1.25   0.2000   0.7500
1.50   0.3333   0.5000
1.75   0.4286   0.2499
1.90   0.4737   0.0999
2.00   0.5000   0
2.10   0.5238   -0.1000
2.25   0.5560   -0.2520
2.50   0.6000   -0.5000
2.75   0.6364   -0.7502
2.90   0.6552   -0.9002

a     x1       x2       f'(x1)    f'(x2)
3.0¹  0.6700   0.6330   -1.0200   -0.9798
3.1   0.5580   0.7646   -0.3596   -1.6400
3.2   0.5130   0.7995   -0.0832   -1.9168
3.3   0.4794   0.8236    0.1360   -2.1358
3.4   0.4520   0.8422    0.3264   -2.3270

¹For these a values, the iteration seems to converge very slowly. The number of iterations
selected was sufficient for the other values of a, but for these, many more applications of the
map would be necessary to obtain the same accuracy. Theoretically, at a = 0.99, x should equal
0, and at a = 3, it should be true that x1 = x2.
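Rows of the first table are straightforward to reproduce: the sketch below settles the iteration from x0 = 0.7 and evaluates a(1 − 2x), the derivative of the map, at the settled value.

```python
# Reproduce rows of the table: iterate from x0 = 0.7, then report the
# settled value x and the derivative f'(x) = a(1 - 2x) there.
def settle(a, x0=0.7, n=5000):
    x = x0
    for _ in range(n):
        x = a * x * (1.0 - x)
    return x

for a in (1.5, 2.5):
    x = settle(a)
    print(a, round(x, 4), round(a * (1.0 - 2.0 * x), 4))
# 1.5 0.3333 0.5     (table row a = 1.50)
# 2.5 0.6 -0.5       (table row a = 2.50)
```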
This data suggests that there are about three major intervals containing
three different types of stable behavior for this map. When 0 < a < 1, the
fixed point x always seems to be 0. But for 1 ≤ a ≤ 3, the eventual fixed
point is different for each a, and is positive. Finally, when 3 < a < 3.45, there
seems to be a convergence to a period 2 solution, where those two values are x1
and x2 in the second data table above. The behavior is even more complicated
when a ≥ 3.45, because the map converges to a period 4 solution! Furthermore,
based on several extensive numerical measurements, it seems that the system’s
behavior becomes chaotic for any a > 3.544, as very slight changes in a cause
additional (even multiple) period doublings. It is for this reason that the data
for a ≥ 3.45 has not been included in the table. We will elaborate more on this
later, but for now, let us only concern ourselves with these first three intervals,
as anything beyond them gets much more complicated.
For these intervals, we want to analyze the behavior of the system, which
means finding fixed points and analyzing their stability. Let us begin with a
starting value of x0, and define
xn+1 = f(xn) = axn(1 − xn). (10)
Now, suppose we perturb this starting value by a very small amount δ0, where
|δ0| ≪ 1, and call it
y0 = x0 + δ0 ; (11)
Then
δ0 = y0 − x0, and δn = yn − xn. (12)
So the question then becomes "what happens to δn in the long run?" Does
δn → 0, such that xn ≈ yn and the system (or at least the interval) is stable?
Or does δn diverge, so that the system is unstable?
To proceed, let us also define yn by
f(yn) = yn+1 = ayn(1 − yn) . (13)
Thus
yn+1 = xn+1 + δn+1 = f(yn) , (14)
and since yn = xn + δn,
f(yn) = f(xn + δn) ≈ f(xn) + δnf'(xn), (15)
by the Taylor Theorem. So we have
xn+1 + δn+1 ≈ f(xn) + δnf'(xn), (16)
and therefore, by subtracting f(xn) = xn+1 from both sides,
δn+1 ≈ δnf'(xn). (17)
And from this, we obtain an important result:
δn ≈ δ0 ∏_{k=0}^{n−1} f'(xk). (18)
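Equation 18 can be sanity-checked numerically by comparing the true separation of two nearby orbits with the linearized prediction. The values a = 3.9 and the step count here are arbitrary choices for the sketch; n is kept small so the linearization remains valid.

```python
# Compare delta_n = y_n - x_n with delta_0 times the product of f'(x_k),
# as in Equation 18.
a = 3.9
f = lambda x: a * x * (1.0 - x)
fprime = lambda x: a * (1.0 - 2.0 * x)

delta0 = 1e-10
x, y = 0.7, 0.7 + delta0
prod = 1.0
for _ in range(10):
    prod *= fprime(x)
    x, y = f(x), f(y)

print(y - x)            # actual separation after 10 iterations
print(delta0 * prod)    # linearized prediction: the two agree closely
```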
Note that if xn = x is a fixed point, then this equation is a one-dimensional
case of the general result found in Equation 9. If
lim_{n→∞} ∏_{k=0}^{n−1} f'(xk) = 0, (19)
then δn → 0. A sufficient condition for this to be true is for |f'(xn)| ≤ α, for
some α < 1, since then
∏_{k=0}^{n−1} |f'(xk)| ≤ α^n → 0. (20)
For our first two intervals, 0 < a < 1 and 1 ≤ a ≤ 3, the long term behavior
seems to converge to a fixed point, x. Thus
∏_{k=0}^{n−1} |f'(xk)| = |f'(x)|^n, (21)
meaning that we can substitute and simplify things greatly:
lim_{n→∞} ∏_{k=0}^{n−1} |f'(xk)| = lim_{n→∞} |f'(x)|^n. (22)
Now f(x) = ax(1 − x), so
f'(x) = a(1 − x) + ax(−1) (23)
= a − ax − ax (24)
= a(1 − 2x). (25)
From the data it seems that for all 0 < a < 1, the system converges to x = 0,
so we will start by evaluating the stability of this point:
f'(0) = a(1 − 0) = a. (26)
Hence we find that
lim_{n→∞} |f'(x)|^n = lim_{n→∞} |a|^n. (27)
And if 0 < a < 1, then
lim_{n→∞} a^n = 0. (28)
So for the interval 0 < a < 1, we know that the fixed point x = 0 is stable,
and that δn → 0. Perturbed initial conditions do not diverge from each other,
so there is no sensitive dependence on initial conditions and the system is not
chaotic.
This same argument (Equation 27) also shows that for a > 1, x = 0 is an
unstable fixed point. This fixed point still exists, it is just not stable for any
interval other than 0 < a < 1. So what are the stable point(s) for a > 1? According
to our data, unlike for 0 < a < 1, the fate of x is not always the same
regardless of a, and x ≠ 0. Instead, x converges to some positive non-zero
value. This value, a fixed point, seems to depend on the magnitude of a.
We can check all this by analytically obtaining an
expression for a fixed point ≠ 0 as a function of a on the interval 1 < a < 3.
We begin with the definition of a fixed point, f(x) = x, which for the logistic
map becomes
ax(1 − x) = x. (29)
Solving for x, we obtain
x = 1 − 1/a . (30)
This equation agrees with the data, and we can see that the stable fixed point
which the system eventually settles onto does indeed increase with a, and is
always positive. Also, for a < 1 or a > 3, the fixed points given by Equation
30 are unstable, though they still exist. Using this result (Equation 30), we can
show that
lim_{n→∞} ∏_{k=0}^{n−1} |f'(xk)| = 0. (31)
This can be done by evaluating f'(x), and substituting in Equation 30, our
stable fixed point solution for the interval:
f'(x) = a(1 − 2x)
f'(1 − 1/a) = a[1 − 2(1 − 1/a)] (32)
= a − 2a + 2 = 2 − a.
So f'(x) = 2 − a, and
lim_{n→∞} ∏_{k=0}^{n−1} |f'(xk)| = lim_{n→∞} |2 − a|^n = 0, (33)
because 1 < a < 3, so |2 − a| < 1.
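Both Equation 30 and the derivative f'(x) = 2 − a can be checked numerically anywhere in the interval; a = 2.75 below is an arbitrary test value.

```python
# Check x = 1 - 1/a (Equation 30) and f'(x) = 2 - a on 1 < a < 3.
def settle(a, x0=0.7, n=5000):
    x = x0
    for _ in range(n):
        x = a * x * (1.0 - x)
    return x

a = 2.75
x = settle(a)
print(x, 1.0 - 1.0 / a)               # both ~0.636363...
print(a * (1.0 - 2.0 * x), 2.0 - a)   # both ~-0.75
```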
For a ≥ 3, things quickly become more complicated. Recall from the result
in Equation 6 that because we seem to have a period 2 orbit for a > 3, we have
two stable period 2 solutions. These are different from our two stable solutions
(x = 0 and x = 1 − 1/a) of the previous intervals. Thus we experience our first
period doubling at a = 3. This first period doubling also coincides with a loss
of fixed point stability. To find period 2 solutions, we look for fixed points of
f ◦ f:
f(f(x)) = x = f(ax(1 − x)) . (34)
After a fair helping of algebra, we deduce from this the following 4th degree
polynomial:
x(a^3x^3 − 2a^3x^2 + a^3x + a^2x − a^2 + 1) = 0 , (35)
whose roots will provide expressions for the four fixed points, which include
the period 2 solutions. Finding these roots might have been difficult, but we
already know two of them: x = 0 and x = 1 − 1/a, so we do not have to solve
for all four roots! These two were the stable fixed points for 0 < a < 1 and
1 < a < 3 respectively, so if we divide them out of our quartic polynomial, we
are left with a quadratic equation whose roots are the expressions for the period
2 fixed points.
We start by dividing out x, since it is clear that x = 0 satisfies the statement,
and obtain:
a^3x^3 − 2a^3x^2 + a^3x + a^2x − a^2 + 1 = 0 . (36)
Next, since x = 1 − 1/a was a solution (and has been double-checked as a
solution of Equation 35), we divide the remaining cubic function by the factor
x − (1 − 1/a) = x − 1 + 1/a = x + (1/a − 1) = 0 . (37)
The quotient of Equation 36 and Equation 37 is
ax^2 − ax − x + 1/a + 1 = ax^2 − (a + 1)x + (1/a + 1) = 0 , (38)
and thus our quartic equation is reduced to a more manageable quadratic, whose
roots were found to be
x = (a + 1)/(2a) ± √((a + 1)(a − 3))/(2a) . (39)
There are 2 fixed points here (the map oscillates between the 2 as long as
3 < a), so by letting xp1 represent the first and xp2 represent the second, we
may state
xp1 = (a + 1)/(2a) + √((a + 1)(a − 3))/(2a) , (40)
xp2 = (a + 1)/(2a) − √((a + 1)(a − 3))/(2a) . (41)
Now we have an expression for each of the 2 fixed points entirely as a function of
a. Note that these are only valid for a ≥ 3 (otherwise the results are imaginary),
and that for a = 3, xp1 = xp2, which gives the same result as x = 1 − 1/a.
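Equations 40 and 41 can be verified directly: at a value such as a = 3.2 (taken from the data table), the map should carry xp1 to xp2 and back.

```python
import math

# Period 2 points from Equations 40-41, checked at a = 3.2.
a = 3.2
f = lambda x: a * x * (1.0 - x)
root = math.sqrt((a + 1.0) * (a - 3.0))
xp1 = (a + 1.0) / (2.0 * a) + root / (2.0 * a)
xp2 = (a + 1.0) / (2.0 * a) - root / (2.0 * a)

print(round(xp1, 4), round(xp2, 4))            # 0.7995 0.513, as in the table
print(abs(f(xp1) - xp2), abs(f(xp2) - xp1))    # both ~0: a genuine 2-cycle
```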
Previously, for the intervals 0 < a < 1 and 1 < a ≤ 3, we obtained an
equation for the fixed point(s), and then for the derivative evaluated at this
point. The same was done here; but because we have 2 fixed points we have
two linearizations, given by
f'(xp1) = −1 − √((a + 1)(a − 3)) , (42)
f'(xp2) = −1 + √((a + 1)(a − 3)) . (43)
Our expressions for f'(xp1) and f'(xp2) offer a method of determining the
stability of the period 2 solution. Upon looking at the data table once again,
you will notice that within the interval 1 ≤ a < 3, f'(x) → −1 as a in-
creases. Indeed, the next period doubling occurs precisely when the product
f'(xp1)f'(xp2) = −1. Hence, by setting f'(xp1)f'(xp2) = −1, we can solve for a
to find the "maximum a value" for this interval, as well as locate exactly where
the next period doubling occurs.
f'(xp1)f'(xp2) = −1 (44)
1 − (a + 1)(a − 3) = −1 (45)
a^2 − 2a − 5 = 0 . (46)
The roots of this final quadratic turn out to be a = 1 ± √6. Since we are only
looking at 0 < a < 4, a = 1 + √6 ≈ 3.4495 is the correct upper limit of stability
for this period 2 interval. Also for this interval, it's still true that
lim_{n→∞} δn = lim_{n→∞} ∏_{k=0}^{n−1} |f'(xk)| δ0 = 0 , (47)
because
−1 < f'(xp1)f'(xp2) < 1. (48)
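The boundary value a = 1 + √6 can be confirmed by evaluating the orbit's stability multiplier 1 − (a + 1)(a − 3) from Equation 45:

```python
import math

# Stability multiplier of the period 2 orbit (Equations 44-45).
def multiplier(a):
    return 1.0 - (a + 1.0) * (a - 3.0)

a_star = 1.0 + math.sqrt(6.0)
print(a_star)              # ~3.4495, the upper limit of the period 2 window
print(multiplier(a_star))  # ~-1: exactly where the next doubling occurs
print(multiplier(3.2))     # ~0.16: safely inside the stable window
```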
At a ≈ 3.4495 there will be another period doubling. This 2nd period
doubling also coincides with an additional loss of fixed point stability, as with
the first. Here there will be a period 4 orbit for the map, and so it will oscillate
between four fixed points. Similar to the previous case, where the map would
return to a given fixed point every other iteration, the map will now only return
to a fixed point every four iterations.
If we wanted to find expressions for each of these four fixed points, the
procedure is the same, starting with
x = f(f(f(f(x)))) , (49)
which represents a fixed point in this situation. We could then substitute f(x) =
ax(1 − x) into this and solve for x, which would yield a degree 16 equation.
Its roots would provide expressions for each fixed point. This equation can
be factored, because we already have four fixed point equations (x = 0,
x = 1 − 1/a, xp1, and xp2), and thus we already have four of its roots. But
dividing out all four of these known roots still leaves us with a 12th degree
polynomial. Its roots would be much more unpleasant than those previously
procured, and would require some careful computing to collect. However, if we
were to do so, and found the roots, next we could linearize at each fixed point
expression. Finally, by evaluating
f'(xp1)f'(xp2)f'(xp3)f'(xp4) = −1 (50)
(where xp1, xp2, etc. denote the period 4 fixed points) and solving for a, we could
find where the next period doubling takes place, along with the upper boundary
of the period 4 fixed point interval.
We did not attempt this, but we were able, however, to determine that the
interval where these four fixed points exist was something near
1 + √6 < a < 3.544. Notice that these intervals between period doublings are
decreasing. This is no coincidence; the decrease in interval length with each
successive period doubling is geometric, as proven by Feigenbaum's Theorem [5].
Also, upon first analyzing the map, the value of a = 3.544 seemed to introduce
the onset of chaos.
The findings from all the data and analytical calculations are summarized
by the following bifurcation diagram:
[Bifurcation Diagram: Behavior of Logistic Map as a Function of Parameter "a";
fixed point x (vertical axis, 0 to 1) versus a (horizontal axis, 0 to 4).]
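The diagram can be regenerated with a short script: for each a, discard a transient and record the distinct values the orbit visits. The iteration counts and tolerance below are arbitrary but adequate choices for this sketch.

```python
# Sample the attractor of the logistic map for a given a.
def attractor(a, x0=0.7, transient=2000, keep=512, tol=1e-6):
    x = x0
    for _ in range(transient):
        x = a * x * (1.0 - x)
    points = []
    for _ in range(keep):
        x = a * x * (1.0 - x)
        if not any(abs(x - p) < tol for p in points):
            points.append(x)
    return sorted(points)

print(len(attractor(2.5)))   # 1: a single stable fixed point
print(len(attractor(3.2)))   # 2: the period 2 window
print(len(attractor(3.5)))   # 4: after the second period doubling
```

Plotting `attractor(a)` against a over a fine grid of a values reproduces the bifurcation diagram above.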
Hopefully you can see from the trend and from the diagram that following
this last interval another period doubling occurs, and then another and an-
other. It appears this will continue indefinitely as a increases. But the question
remains, “where exactly does the system go chaotic?” True, the behavior be-
comes seemingly more and more complicated with each period doubling, but
where (and how) do you draw the line between an enormous number of fixed
points and the chaotic regime? From the data it seems that this occurs at
around a = 3.544, but how do we know for sure?
This was where the Lyapunov Exponent came in. Recall that the Lyapunov
Exponent (LE) of a system is a measurement or gauge of sorts as to how chaotic
a system is. More accurately, the LE of a system measures the rate at which
solutions to the system slightly perturbed from each other will diverge. If we
refer back to Equation 18
δn ≈ δ0 ∏_{k=0}^{n−1} f'(xk) ,
we have a nice way of looking at this mathematically. Another way to view
this expression is that we are linearizing at each point in the trajectory (or time
series), getting a sample of the stability and dynamics near each point, and
then combining all of these pieces (or stability gauges) to get an overall idea
of the entire system dynamics.
Hence this can be done for the long term by evaluating
lim_{n→∞} |δn/δ0| = lim_{n→∞} ∏_{k=0}^{n−1} |f'(xk)| , (51)
so to observe the average level of stability, we look at the nth root of
Equation 51:
lim_{n→∞} |δn/δ0|^(1/n) = lim_{n→∞} ( ∏_{k=0}^{n−1} |f'(xk)| )^(1/n) . (52)
To get the Lyapunov Exponent we take the log of Equation 52:
LE = lim_{n→∞} (1/n) log ∏_{k=0}^{n−1} |f'(xk)| = lim_{n→∞} (1/n) Σ_{k=0}^{n−1} log |f'(xk)| . (53)
By determining this quantity (the LE) we have a means of measuring the average
system dynamics in the long run, and the amount of diverging or stretching that
takes place. Equation 53 was used to find the first LE for the logistic map for
3 ≤ a ≤ 4, and the results are in the following table:
Parameter a Lyapunov Exponent
3.00 -0.0001
3.10 -0.2638
3.20 -0.9163
3.25 -1.3863
3.30 -0.6189
3.40 -0.1372
3.50 -0.8725
3.56 -0.0771
3.57 0.0106
3.60 0.1818
3.63 -0.0191
3.70 0.3524
3.74 -0.1119
3.75 0.3639
3.80 0.4349
3.83 -0.3695
3.90 0.4968
4.00 0.6933
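In code, Equation 53 amounts to averaging log |f'(xk)| along a long orbit. At a = 4 the exact value is known to be log 2 ≈ 0.6931, which matches the table's measured 0.6933; the transient and orbit lengths below are arbitrary choices for the sketch.

```python
import math

# Lyapunov exponent of the logistic map via Equation 53.
def lyapunov(a, x0=0.7, transient=1000, n=200000):
    x = x0
    for _ in range(transient):
        x = a * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(a * (1.0 - 2.0 * x)))
        x = a * x * (1.0 - x)
    return total / n

print(lyapunov(4.0))   # ~0.693 = log 2: chaotic
print(lyapunov(3.2))   # ~-0.92: inside the stable period 2 window
```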
A negative LE corresponds to the system’s behavior being non-chaotic, while
a positive LE corresponds to when it is chaotic. The magnitude of the LE plays
a key role as well. If it is positive, then the larger the LE, the more chaotic the
system’s overall behavior. Likewise, if the LE is negative, the opposite is true.
Note that at values such as a = 3.63, 3.74, and 3.83 (and others) the behavior
briefly exits the chaotic regime, as seen in the figure below.
[Figure: Comparison of Lyapunov Exponents and Bifurcation Diagram, both as a
function of "a", over 3.5 ≤ a ≤ 4.]
In fact, an interesting discovery at this stage of the research was that of a
period 3 orbit and three fixed points (well after the 2nd period doubling), which
was unintentionally detected by arbitrarily setting a = 3.83. This is depicted in
the figure below:
[Figure: Period 3 Fixed Points, Found in Map at a = 3.83, x0 = 0.7; x versus n,
the number of iterations of the logistic map.]
Even though this is not a new result, the independent discovery of a period 3
orbit inside the chaotic regime was still a fascinating one.
The data and figures provided show that the LE is useful for more than just
determining when we crossed the line into chaos. Being able to calculate the
LE for various a’s and plot it alongside the bifurcation diagram showed that
the system does not become increasingly chaotic with increasing a, something
which was not obvious, and perhaps even counterintuitive.
Now that we have thoroughly investigated the behavior of the logistic map,
we wish to explore the concept of synchronization. A 2-dimensional coupled sys-
tem can be created from the logistic map using two copies of it and introducing
a new parameter ε, where ε < 1. The logistic map
f(xn) = xn+1 = axn(1 − xn) (54)
then becomes the coupled system
xn+1 = f((1 − ε)xn + εyn) , (55)
yn+1 = f((1 − ε)yn + εxn) , (56)
which is given by
xn+1 = a[(1 − ε)xn + εyn][1 − [(1 − ε)xn + εyn]] , (57)
yn+1 = a[(1 − ε)yn + εxn][1 − [(1 − ε)yn + εxn]] . (58)
Note that the coupled system is derived by replacing xn (or yn) with the
weighted average (1 − ε)xn + εyn (or (1 − ε)yn + εxn).
If ε = 0.5, then regardless of the initial values x0 and y0, xn will equal yn
for n ≥ 1. On the other hand, if ε = 0, the systems decouple. If the initial
conditions in this case are nearby but unequal they will diverge if the logistic
map is chaotic. But what are the ε values that synchronize the coupled system?
And how do they relate to the Lyapunov Exponent?
There are indeed precise ε values that synchronize the coupled system, and
they are in fact different for different values of a. Moreover, there is not just
one value of ε, but an entire interval where synchronization occurs. Thus, for a
given value of a, there exists a range of different ε values that synchronize the
system.
These facts were found numerically, and can be seen from the table below.
For all values in this table, the initial perturbation was δ0 = 10^−4, the initial
condition was x0 = 0.7, and the total number of iterations was n = 4000.

Parameter a    LE (Measured)    Synchronizing ε Value (Measured)
3.6            0.1818           0.086
3.7            0.3524           0.158
3.8            0.4349           0.180
3.9            0.4968           0.200
Note that for each different a, the interval of synchronization is [ε_synch, 0.5];
between the ε value which first synchronized the system and 0.5. This is il-
lustrated by the following pair of figures, which were obtained by first running
5000 iterates of xn, and then introducing and iterating the coupled system,
with yn = xn + δn. The first shows the system in an unsynchronized state, with
a = 3.9 and ε = 0.197904:
[Figure: Coupled System, a = 3.9, x0 = 0.7, δ = 0.0001, ε = 0.197904, n = 4000;
the plotted (x, y) points scatter over the unit square.]
By slightly increasing the value of ε to 0.197905, synchronization is achieved,
and the graph becomes:
[Figure: Coupled System, a = 3.9, x0 = 0.7, δ = 0.0001, ε = 0.197905, n = 4000;
the plotted (x, y) points now lie along the diagonal y = x.]
We can tell that the two systems are synchronized because their behavior is
virtually the same.
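This experiment is straightforward to reproduce in code: the sketch below iterates Equations 55-56 and reports the worst separation |xn − yn| near the end of the run. The test values ε = 0.25 and ε = 0.01 are arbitrary choices on either side of the measured threshold of 0.200 for a = 3.9.

```python
# Iterate the coupled maps (Equations 55-56) and measure |x_n - y_n|.
def run_coupled(a, eps, x0=0.7, delta0=1e-4, n=4000):
    f = lambda u: a * u * (1.0 - u)
    x, y = x0, x0 + delta0
    diffs = []
    for _ in range(n):
        x, y = f((1.0 - eps) * x + eps * y), f((1.0 - eps) * y + eps * x)
        diffs.append(abs(x - y))
    return max(diffs[-100:])     # worst separation over the last 100 steps

print(run_coupled(3.9, 0.25))    # ~0: the two copies synchronize
print(run_coupled(3.9, 0.01))    # order 1: they do not synchronize
```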
There were many other interesting phenomena surrounding the behavior of
the system and how it developed in approaching synchrony or moving out of it.
A few of these are depicted below. They demonstrate how the coupled systems’
dynamics changed drastically with minor variations of a (specifically changes of
+0.1). Here the value of ε was held constant (at 0.09), along with everything
else except a.
[Figures: Coupled System with x0 = 0.7, δ = 0.0001, ε = 0.09, n = 5000, for
a = 3.6, 3.7, 3.8, and 3.9; the four (x, y) plots show markedly different dynamics
as a increases in steps of 0.1.]
At this point it is necessary to introduce the Singular Value Decomposition
of a matrix. Let A ∈ M_{m×n} be any matrix, where A : R^n → R^m.
Then ∃ orthogonal matrices U ∈ M_{m×m} and V ∈ M_{n×n}, along with a diagonal
matrix Σ ∈ M_{m×n}, such that A may be factored as:
A = UΣV^T . (59)
The elements of Σ, denoted a11 = σ1, a22 = σ2, . . . , σ_{min(m,n)}, are
uniquely determined by A, and appear in descending order σ1 ≥ σ2 ≥ . . . ≥
σ_{min(m,n)} ≥ 0. These diagonal entries of Σ are known as the Singular Values
of A, and Equation 59 is the Singular Value Decomposition of A. Recall that
because U is orthogonal, its columns form an orthonormal set and therefore
U^T U = I = UU^T. The same is obviously true for V as well. Another way of
saying this is that there are orthonormal bases vj for R^n and uj for R^m such that
Avj = σjuj and σj ≥ 0. Here vj and uj are columns of V and U respectively.
The implications of the Singular Value Decomposition are far greater than
the fact that any matrix can be factored into A = UΣV^T. Let S represent the
unit sphere (which is the n-dimensional equivalent of the unit circle in R^2, and
as such, is centered at the origin), such that
S = { s ∈ R^n | s = Σ_j cj vj, Σ_j cj^2 = 1 },
since the columns vj of V form an orthonormal basis for R^n, and since ||s||^2 = 1.
Then the image of S under the application of any linear map A is always a hy-
perellipsoid (like the unit sphere, a hyperellipsoid is an n-dimensional equivalent
of an ellipse in R^2) whose principal axes are parallel to the columns of U:
AS = { w ∈ R^m | w = A Σ_{j=1}^{n} cj vj = Σ_{j=1}^{n} cj σj uj, Σ_{j=1}^{n} cj^2 = 1 }.
Thus a geometrical interpretation of the Singular Value Decomposition is that
any linear mapping can be written as a reflection or rotation by V^T, a stretching
(or shrinking) along the principal axis of each dimension by Σ, followed by a
reflection or rotation by U.
That Avj = σjuj may come as a surprise, but this fact and the relevance
thereof can easily be shown. We have A = UΣV^T, and because V is orthogonal,
we can write
AV = UΣ. (60)
Now V has columns v1, v2, . . . , vn, so the jth column of AV, which is Avj, is
equal to the jth column of UΣ. But Σ only has diagonal entries, so the jth
column of the m × n matrix UΣ is just σjuj. Therefore
Avj = σjuj. (61)
If each σj is distinct, then the vectors uj and vj are unique up to a sign.
Because σ1 is the First Singular Value of A, since σ1 ≥ σ2 ≥ . . . ≥ σmin(m,n) ≥
0, then σ1u1 is the First Singular Vector of A, and corresponds to the largest
principal axis of the resulting hyperellipsoid. Likewise σ2u2 is the Second Sin-
gular Vector and the second largest principal axis, and so on.
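These geometric facts are easy to confirm numerically; the matrix below is an arbitrary example, not one taken from this report.

```python
import numpy as np

# Verify A v_j = sigma_j u_j (Equation 61) for an arbitrary 2 x 3 matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0]])      # A : R^3 -> R^2

U, s, Vt = np.linalg.svd(A)          # A = U diag(s) V^T; rows of Vt are v_j
print(s)                             # singular values in descending order

for j in range(len(s)):
    lhs = A @ Vt[j]                  # A v_j
    rhs = s[j] * U[:, j]             # sigma_j u_j
    print(np.allclose(lhs, rhs))     # True for every j
```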
The fact that A = UΣV^T allows us to write
A^T A = (UΣV^T)^T (UΣV^T)
= (V ΣU^T)(UΣV^T) (62)
= V Σ^2 V^T ,
because Σ^T = Σ, and U^T U = I_m, since U is orthogonal. In the same manner
AA^T = UΣ^2 U^T. Now let W = A^T A, and let x ∈ R^n be an eigenvector of W,
with λ an eigenvalue of W, such that
Wx = λx. (63)
Substituting A^T A for W, we have
V Σ^2 V^T x = λx. (64)
Now we introduce another quantity, R = V^T x, so that x = V R. By replacing
this expression for x into Equation 64, we obtain
V Σ^2 V^T (V R) = V Σ^2 R = λV R (65)
and then, applying the transpose of V to both sides,
Σ^2 R = λR. (66)
This implies a very useful result: that σj^2 = λ for some j (note that the
eigenvalues of A^T A are non-negative).
Suppose that instead of some matrix A ∈ M_{m×n}, we have a symmetric
matrix B ∈ M_{m×m}, such that B^T = B, with eigenvalues λi. B will have
Singular Value Decomposition B = UΣV^T, as before. However, the quantity
W will be
W = B^T B = B^2 = V Σ^2 V^T . (67)
Now recall that in general two matrices, say C and D, are similar if ∃ some
invertible matrix X such that C = X^{−1}DX and D = XCX^{−1}. Furthermore,
if C and D are similar, then their eigenvalues are the same. We can see in
Equation 67 that the matrix W = B^2 is similar to Σ^2. This implies another
useful result - namely, that the eigenvalues of B^2 and Σ^2 are the same. Since
Σ^2 is diagonal with entries σ1^2, σ2^2, . . . , σm^2, these are the eigenvalues of
both matrices, and therefore
σj^2 = λi^2 . (68)
Taking the square root of both quantities gives
|σj| = |λi|, (69)
but we have specified that each σj be non-negative, so then
σj = |λi|. (70)
Two important results have been demonstrated: that the singular values of
a matrix A are equal to the square roots of the eigenvalues of A^T A, and that
for a symmetric matrix, the singular values are equal to the absolute values of
its eigenvalues. These facts can be used to gain further insight into
synchronization and to verify the numerical findings in the last table.
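Both results can be checked in a few lines; the symmetric matrix here is an arbitrary example.

```python
import numpy as np

# For symmetric B: singular values = |eigenvalues|.
B = np.array([[1.0, 2.0],
              [2.0, -3.0]])          # B^T = B

svals = np.linalg.svd(B, compute_uv=False)
evals = np.linalg.eigvalsh(B)
print(np.sort(svals), np.sort(np.abs(evals)))   # the same values

# In general: singular values = sqrt of the eigenvalues of A^T A.
print(np.sort(np.sqrt(np.linalg.eigvalsh(B.T @ B))), np.sort(svals))
```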
Our coupled system, based on the logistic map, can be written
[xn+1]   [f((1 − ε)xn + εyn)]
[yn+1] = [f((1 − ε)yn + εxn)] = F([xn, yn]^T; a, ε). (71)
Let
Vn = [xn, yn]^T (72)
so that Equation 71 may be more concisely written as Vn+1 = F(Vn). Also, let
x0 and y0 be initial conditions, and let
V0 = [x0, y0]^T (73)
be the initial state of the system. Similar to the uncoupled system, if we perturb
V0 by δ0, the eventual fate of the system will be determined by:
δn ≈ F'(Vn−1) F'(Vn−2) · · · F'(V1) F'(V0) δ0. (74)
But previously F was just a scalar, so what is it in this case? Since F(Vn) is
F
xn
yn
=
f( (1 − )xn + yn)
f( xn + (1 − )yn)
(75)
and, more importantly, since the coupled system is 2-dimensional, F′(V_n) here is a 2 × 2 Jacobian, given by

F'\begin{pmatrix} x_n \\ y_n \end{pmatrix} = \begin{pmatrix} \partial F_1/\partial x_1 & \partial F_1/\partial x_2 \\ \partial F_2/\partial x_1 & \partial F_2/\partial x_2 \end{pmatrix} (76)

= \begin{pmatrix} \frac{\partial}{\partial x} f((1-\epsilon)x_n + \epsilon y_n) & \frac{\partial}{\partial y} f((1-\epsilon)x_n + \epsilon y_n) \\ \frac{\partial}{\partial x} f(\epsilon x_n + (1-\epsilon)y_n) & \frac{\partial}{\partial y} f(\epsilon x_n + (1-\epsilon)y_n) \end{pmatrix}.
Or, after being evaluated,

F'(V_n) = \begin{pmatrix} (1-\epsilon)\,f'((1-\epsilon)x_n + \epsilon y_n) & \epsilon\, f'((1-\epsilon)x_n + \epsilon y_n) \\ \epsilon\, f'(\epsilon x_n + (1-\epsilon)y_n) & (1-\epsilon)\,f'(\epsilon x_n + (1-\epsilon)y_n) \end{pmatrix}. (77)
We are interested in the stability of the synchronized solution; that is, if we start with V_0 where x_0 = y_0, so that x_n = y_n for all n, we want to know what happens to δ_n. Do its entries become equal in the limit? To determine this, we substitute x_n = y_n = v_n into our last expression, and obtain

F'(V_n) = \begin{pmatrix} (1-\epsilon)f'(v_n) & \epsilon f'(v_n) \\ \epsilon f'(v_n) & (1-\epsilon)f'(v_n) \end{pmatrix} (78)

= f'(v_n) \begin{pmatrix} 1-\epsilon & \epsilon \\ \epsilon & 1-\epsilon \end{pmatrix}
where f′(v_n) = f′(y_n) = f′(x_n). Now that we have determined F′(V_n), we can rewrite Equation 74, the expression for how the initial perturbation changes:

δ_n ≈ \left[\prod_{k=0}^{n-1} f'(v_k)\right] \begin{pmatrix} 1-\epsilon & \epsilon \\ \epsilon & 1-\epsilon \end{pmatrix}^{n} δ_0. (79)
Now let

E = \begin{pmatrix} 1-\epsilon & \epsilon \\ \epsilon & 1-\epsilon \end{pmatrix}. (80)
This is actually why the Singular Values are relevant to our purpose. They correspond to how far a system diverges, or stretches, from its initial state, and therefore can measure whether the eventual behavior is chaotic. Let σ_k(n) ≥ 0 denote the evolution of the kth Singular Value of some matrix as a function of n applications of the map. If σ₁(n) → ∞ as n → ∞, then the stretching persists indefinitely, and indeed the behavior is chaotic. Moreover, if both σ₁(n) and σ₂(n) → ∞ as n → ∞, then stretching occurs in two directions at once, and so forth.
If we can find the Singular Values of E we can determine whether it will synchronize the system (that is, whether δ_n → 0), and therefore whether the synchronization is entirely determined by ε. Since E is symmetric, we need only find its eigenvalues to obtain the Singular Values (as previously shown). The eigenvalues of E were found to be λ₁ = 1 and λ₂ = (1 − 2ε), and the corresponding eigenvectors are h₁ = (1, 1)ᵀ and h₂ = (1, −1)ᵀ respectively.
The initial perturbation, δ_0, may be written as a linear combination of these eigenvectors, since they form a basis:

δ_0 = α \begin{pmatrix} 1 \\ 1 \end{pmatrix} + β \begin{pmatrix} 1 \\ -1 \end{pmatrix}, (81)

where

α = (x_0 + y_0)/2, and β = (x_0 − y_0)/2. (82)
This can then be placed into Equation 79 to give

δ_n ≈ \left[\prod_{k=0}^{n-1} f'(v_k)\right] \begin{pmatrix} 1-\epsilon & \epsilon \\ \epsilon & 1-\epsilon \end{pmatrix}^{n} \left[ α \begin{pmatrix} 1 \\ 1 \end{pmatrix} + β \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right]. (83)
However, a further generalization can be obtained if we observe a simple trend. We already know δ_0, so let us continue on and evaluate δ_1 and then δ_2:

δ_1 = f'(v_0) \begin{pmatrix} 1-\epsilon & \epsilon \\ \epsilon & 1-\epsilon \end{pmatrix} \left[ α \begin{pmatrix} 1 \\ 1 \end{pmatrix} + β \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right] = f'(v_0) \left[ α \begin{pmatrix} 1 \\ 1 \end{pmatrix} + (1-2\epsilon)β \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right],

δ_2 = f'(v_1)f'(v_0) \begin{pmatrix} 1-\epsilon & \epsilon \\ \epsilon & 1-\epsilon \end{pmatrix}^{2} \left[ α \begin{pmatrix} 1 \\ 1 \end{pmatrix} + β \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right] = f'(v_1)f'(v_0) \left[ α \begin{pmatrix} 1 \\ 1 \end{pmatrix} + (1-2\epsilon)^{2}β \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right].
The matrix multiplication is not entirely obvious, but the trend continues, and it follows that Equation 83 becomes

δ_n = \left[\prod_{k=0}^{n-1} f'(v_k)\right] \left[ α \begin{pmatrix} 1 \\ 1 \end{pmatrix} + (1-2\epsilon)^{n}β \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right]. (84)
We know that if the original system (the logistic map) is chaotic, then the coupled system is chaotic as well. However, we are interested in how close to synchronization the system is, which is determined by x_n − y_n = (1, −1)δ_n, or whether |x_n − y_n| → 0. Our last equation for δ_n, Equation 84, can be rearranged to reflect this:

(1, −1)δ_n = x_n − y_n = (1, −1)\left[\prod_{k=0}^{n-1} f'(v_k)\right] \left[ α \begin{pmatrix} 1 \\ 1 \end{pmatrix} + (1-2\epsilon)^{n}β \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right] (85)

= \left[\prod_{k=0}^{n-1} f'(v_k)\right] (1-2\epsilon)^{n}\, 2β (86)

= \left[\prod_{k=0}^{n-1} f'(v_k)\right] (1-2\epsilon)^{n}\, (x_0 − y_0), (87)

because 2β = x_0 − y_0. Take note that for n = 0, the expression collapses down to the difference of the initial conditions, as it should.
This last quantity should be a familiar one: it was found in Equation 17, and the LE can be obtained from it by taking the limit and the log, as in Equation 53. Let L denote the Lyapunov Exponent, so that

L = \lim_{n\to\infty} \frac{1}{n} \log \prod_{k=0}^{n-1} f'(v_k). (88)

Multiplying both sides by n, and then raising them as exponents of e, we may roughly say that

e^{nL} = \prod_{k=0}^{n-1} f'(v_k). (89)
Though in these operations we are side-stepping the limit present in this expression
and neglecting some deeper mathematics necessary to handle it, this is sufficient for
our purposes.
Using this last result, we can determine precisely which values of ε will synchronize the system. If e^{nL}(1 − 2ε)^n < 1, then |x_n − y_n| → 0 and the system synchronizes; if e^{nL}(1 − 2ε)^n > 1, it does not. Thus, the critical coupling occurs exactly when e^{nL}(1 − 2ε)^n = 1. From this it follows that

e^{nL}(1 − 2ε)^n = 1 (90)
[e^{L}(1 − 2ε)]^n = 1
e^{L}(1 − 2ε) = 1
(1 − 2ε) = e^{−L}
ε = (1 − e^{−L})/2. (91)
We have just shown that whether or not the system synchronizes is entirely dependent on ε; the critical value of ε is a function of the LE of the original system, and is therefore a function of a. These analytical findings were used to verify and test the accuracy of those found numerically, in the previous table:

Parameter a | LE (Measured) | Synchronizing ε Value (Measured) | (1 − e^{−L})/2
3.6 | 0.1818 | 0.086 | 0.083
3.7 | 0.3524 | 0.158 | 0.149
3.8 | 0.4349 | 0.180 | 0.176
3.9 | 0.4968 | 0.200 | 0.196

The accuracy of the numerical results was right on the mark: these analytical values are quite close to those found in the previous data set.
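The last column can be reproduced independently. The sketch below is our own check, not code from the report: it estimates L by averaging log |f′(x_n)| along an orbit of the logistic map, then applies Equation 91. The starting point x0 = 0.7, the orbit length, and the burn-in are arbitrary choices.

```python
import math

# Estimate the Lyapunov exponent of the logistic map by averaging
# log|f'(x_n)| along an orbit, then form the predicted critical
# coupling (1 - e^{-L})/2 of Equation 91.
def critical_eps(a, x0=0.7, n=100_000, burn=1_000):
    x, total = x0, 0.0
    for i in range(n):
        x = a * x * (1.0 - x)
        if i >= burn:
            # f'(x) = a(1 - 2x); guard against a zero argument to log
            total += math.log(max(abs(a * (1.0 - 2.0 * x)), 1e-12))
    L = total / (n - burn)
    return (1.0 - math.exp(-L)) / 2.0

for a in (3.6, 3.7, 3.8, 3.9):
    print(a, round(critical_eps(a), 3))
```

The printed values should land close to the last column of the table above, e.g. roughly 0.176 for a = 3.8.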
We have determined the values of ε for which the coupled system synchronizes, and related this to the LE for our one-dimensional dynamical system. But we wish to extend the analysis to more complex coupling schemes, and attempt to relate higher order LE's to the synchronization of more than two copies of higher dimensional maps.

This can be approached by building on the generalization already obtained in Equation 9:

δ_n ≈ [F′(x)]^n δ_0,

where F′(x) was an m × m Jacobian Matrix. It governs the fate of some perturbation δ_0 and thus the stability of an m-dimensional map. This generalization has been applied in two cases: for the logistic map, F′(x) was simply 1 × 1; for the coupled system, F′(x) was 2 × 2. Now we will explore the case where F′(x) is m × m.
To first approach this, we attempted to make not just 2, but m copies of some function F : Rⁿ −→ Rⁿ, not necessarily the logistic map. We begin by letting

Z = \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_m \end{pmatrix} ∈ R^{mn}, where z_j ∈ Rⁿ. (92)
As with the coupled one-dimensional systems, we need to create a coupled system by replacing each vector z_j ∈ Z with a weighted average. Our first approach is to start with a simple case. One possible weighted average, which is similar to that chosen for the 2-dimensional coupled system, is

z_1 −→ (1 − (m − 1)ε)z_1 + εz_2 + · · · + εz_m
z_2 −→ εz_1 + (1 − (m − 1)ε)z_2 + · · · + εz_m
...
z_m −→ εz_1 + εz_2 + · · · + (1 − (m − 1)ε)z_m

for some ε ≤ 0.5.
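One quick sanity check (our own, with illustrative values of m and ε): the weights in each of these rows sum to 1, so a synchronized state z_1 = · · · = z_m is mapped to itself by the averaging.

```python
# The row weights (1-(m-1)*eps), eps, ..., eps sum to 1, so the weighted
# average preserves a synchronized state.  m and eps are illustrative choices.
m, eps = 5, 0.1
row = [1 - (m - 1) * eps] + [eps] * (m - 1)
print(abs(sum(row) - 1.0) < 1e-12)  # True
```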
One way to create this weighted average is to apply to Z the following block matrix:

Q = \begin{pmatrix} (1-(m-1)\epsilon)I_n & \epsilon I_n & \dots & \epsilon I_n \\ \epsilon I_n & (1-(m-1)\epsilon)I_n & & \vdots \\ \vdots & & \ddots & \epsilon I_n \\ \epsilon I_n & \epsilon I_n & \dots & (1-(m-1)\epsilon)I_n \end{pmatrix}.

Here Q is mn × mn, consisting of m rows by m columns of blocks. Each block of Q is an n × n identity matrix multiplied by ε or by a constant based on ε: for each block element Q_{ij} of Q, the entry is (1 − (m − 1)ε)I_n for i = j, and εI_n for i ≠ j.
For our purposes, a more useful way to write Q is to separate the diagonal and off-diagonal entries. Expanding the quantity (1 − (m − 1)ε), we obtain 1 − mε + ε, so we may write

Q = (1 − mε)I_{mn} + ε \begin{pmatrix} I_n & \dots & I_n \\ \vdots & \ddots & \vdots \\ I_n & \dots & I_n \end{pmatrix}.

Furthermore, we will let

K = \begin{pmatrix} I_n & \dots & I_n \\ \vdots & \ddots & \vdots \\ I_n & \dots & I_n \end{pmatrix}

so that Q may more concisely be written as Q = (1 − mε)I_{mn} + εK.
Thus by applying Q to Z we obtain the desired weighted average, which will be denoted T = QZ. Next we apply F to each block of the new vector T, and we will call this result

N(Z) = F(QZ) = F(T). (93)

You will notice the procedure is essentially the same as with the coupling of the two one-dimensional systems. But here, instead of replacing x with its weighted average,
and applying the logistic map (thereby creating a 2-dimensional coupled system), we
are replacing each vector z ∈ Z with a weighted average, and applying some general
function F.
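As a concrete illustration of N(Z) = F(QZ) (ours, not code from the report), take the simplest case n = 1 with f the logistic map and m = 4 copies. The parameter values a = 3.8, ε = 0.2, the initial state, and the iteration count are all assumed.

```python
# Illustration of N(Z) = F(QZ) for n = 1: m coupled copies of the
# logistic map.  All parameter values are assumptions for the sketch.
def logistic(u, a=3.8):
    return a * u * (1.0 - u)

def N_step(z, eps):
    m, s = len(z), sum(z)
    t = [(1 - m * eps) * zi + eps * s for zi in z]  # T = QZ, row by row
    return [logistic(ti) for ti in t]               # F applied blockwise

z = [0.7, 0.4, 0.55, 0.21]
for _ in range(2000):
    z = N_step(z, 0.2)
print(max(z) - min(z))  # essentially zero: all four copies synchronize
```

Here each weight row is (1 − mε)z_i + ε(z_1 + · · · + z_m) = (1 − (m − 1)ε)z_i + εΣ_{j≠i} z_j, matching the weighted average above.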
Having created an m-dimensional coupled system with an appropriate weighted average, we now evaluate whether the synchronization of this system is given entirely in terms of ε. As with the coupled one-dimensional system, the long term fate of the generalized system will be given by

δ_n ≈ N′(Z_{n-1}) N′(Z_{n-2}) · · · N′(Z_1) N′(Z_0) δ_0. (94)

As with each of the previous cases, this means we need to find the derivative. Since N(Z) = F(T) = F(QZ), the chain rule gives N′(Z) = F′(QZ)Q = F′(T)Q, and we have

δ_n ≈ F′(T_{n-1})Q F′(T_{n-2})Q · · · F′(T_1)Q F′(T_0)Q δ_0. (95)
For the logistic map, F was 1 × 1, and was a scalar; for the coupled system, F was a
2 × 2 Jacobian Matrix; for this generalized system, F will have dimension mn × mn.
Furthermore, F (T) will be given by
F (T) =
0
B
B
B
B
@
F (T) 0 . . . 0
0 F (T)
...
...
... 0
0 0 . . . F (T)
1
C
C
C
C
A
where each of the blocks F (T) along the principle diagonal are n × n Jacobians
themselves.
Now we introduce some useful properties needed to complete our analysis. Say C and D are any two matrices in M_{m×m}, with Singular Values σ_1, . . . , σ_k and ρ_1, . . . , ρ_k respectively. In general, the Singular Values of the matrix CD are not simply σ_1ρ_1, . . . , σ_kρ_k. However, if CD = DC and CDᵀ = DᵀC, then the Singular Values of CD and DC are in fact σ_1ρ_1, . . . , σ_kρ_k. Also, for two commuting matrices, the eigenvalues of C + D are the sums of the eigenvalues of C with the eigenvalues of D.

Why are these facts important? Because if they apply, then by simply finding the eigenvalues of F′(T) and Q (which are both symmetric matrices) separately, and then multiplying them together (followed by taking the absolute value of the product), we will have obtained the Singular Values of F′(T)Q. From there, we will be able to determine whether the synchronization of our general system is based on ε alone.
So the next step was to see if the products F′(T)Q and F′(T)Qᵀ were commutative. Since Q is symmetric, we know F′(T)Q = F′(T)Qᵀ. Some calculations also showed that these products were indeed commutative, and we computationally confirmed that F′(T)Q = QF′(T), which implies F′(T)Qᵀ = QᵀF′(T).

This can also easily be seen, given that F′(T) and Q both have the property that all of their diagonal blocks are the same: F′(T) is block diagonal with identical blocks, and every block of Q is a scalar multiple of I_n. So each block of the product is simply the diagonal block F′(T) times the corresponding scalar from Q, regardless of multiplicative order. Either way the product is:
F'(T)Q = \begin{pmatrix} (1-(m-1)\epsilon)F'(T) & \epsilon F'(T) & \dots & \epsilon F'(T) \\ \epsilon F'(T) & (1-(m-1)\epsilon)F'(T) & & \vdots \\ \vdots & & \ddots & \epsilon F'(T) \\ \epsilon F'(T) & \epsilon F'(T) & \dots & (1-(m-1)\epsilon)F'(T) \end{pmatrix}.
This useful fact allows us to rearrange Equation 95 from

δ_n ≈ F′(T_{n-1})Q F′(T_{n-2})Q · · · F′(T_1)Q F′(T_0)Q δ_0

into the more simplified

δ_n ≈ \begin{pmatrix} F'(T_{n-1})F'(T_{n-2}) \cdots F'(T_0) & & 0 \\ & \ddots & \\ 0 & & F'(T_{n-1})F'(T_{n-2}) \cdots F'(T_0) \end{pmatrix} Q^n δ_0.

We will refer to the entire matrix product on the right hand side of the above equation as Ψ, and the block diagonal matrix with diagonal blocks F′(T_{n-1})F′(T_{n-2}) · · · F′(T_0) we will call Υ, so that Ψ = ΥQⁿ.
Whether δ_n → 0 as n → ∞ can be determined from the Singular Values of Ψ. In finding these, the aforementioned facts come into play. We begin with the fact that the eigenvalues of C + D are the sums of the eigenvalues of C and the eigenvalues of D, for any two commuting matrices C and D. Thus the eigenvalues of Q are simply the sums of the eigenvalues of (1 − mε)I_{mn} and εK.

The rank of K is n, so its null space has dimension mn − n = n(m − 1), and therefore one eigenvalue of K is 0, with multiplicity n(m − 1). The only other eigenvalue of K was found to be m, with multiplicity n, so εK has eigenvalues 0 and mε. Since all of the eigenvalues of (1 − mε)I_{mn} are simply (1 − mε), Q has eigenvalues (1 − mε) + mε = 1 and (1 − mε) + 0 = (1 − mε), with multiplicities n and n(m − 1) respectively. As we established above, since Q is symmetric its Singular Values are equal to the absolute values of its eigenvalues. So then
|λ_1| = · · · = |λ_n| = σ_1 = · · · = σ_n = 1 (96)

and

|λ_{n+1}| = · · · = |λ_{mn}| = σ_{n+1} = · · · = σ_{mn} = |1 − mε|. (97)

Similarly, the Singular Values of Qⁿ will be 1ⁿ = 1, with multiplicity n, and |(1 − mε)ⁿ|, with multiplicity n(m − 1).
Unfortunately, this means that synchronization is not entirely controlled by ε, as we would have hoped. To illustrate, let the Singular Values of Ψ, Qⁿ, and Υ be ψ_j(n), q_j(n), and σ_j(n) respectively. Then

ψ_j(n) = q_j(n)σ_j(n). (98)

But q_1(n) = · · · = q_n(n) = 1, and q_{n+1}(n) = · · · = q_{mn}(n) = |1 − mε|ⁿ. Therefore

ψ_1(n) = σ_1(n), . . . , ψ_n(n) = σ_n(n), (99)

and

ψ_{n+1}(n) = σ_{n+1}(n)|1 − mε|ⁿ, . . . , ψ_{mn}(n) = σ_{mn}(n)|1 − mε|ⁿ. (100)
From this it follows that Qⁿ does not completely control synchronization: n of the Singular Values of Ψ are untouched by ε, and the values of ε which synchronize the system again depend only on the first Lyapunov Exponent. This is, of course, disappointing, as we wanted to describe the synchronization of a generalized coupled system in terms of any LE but the first! So it appears that our basic first approach and weighted average were not sufficient to relate the higher order LE's of an m-dimensional system to its synchronization.
Though it is disappointing that our initial, simplistic attempt did not yield further insight into this relation, this possibility was anticipated, and at the least we have made another small step toward our stated objective by ruling out the weighted average used. Based upon this result, we know that further investigation into generalizing the results of synchronization would require a very different and more complex weighted average. In pursuing the research topic beyond this point, the next step would be to find such a weighted average and complete the same analytical steps with it. However, the challenge would be to properly select a weighted average that isolates the synchronization of a generalized coupled system purely as a function of its coupling parameter; finding exactly which type of weighted average would provide a means to the end we seek would be a task in and of itself. In addition, other coupling schemes should probably be investigated as well.
References

[1] Kellert, S.H. (1993). In the Wake of Chaos: Unpredictable Order in Dynamical Systems. Chicago and London: The University of Chicago Press.

[2] Lorenz, E.N. (1963). Deterministic Nonperiodic Flow. Journal of the Atmospheric Sciences, Vol. 20, No. 2, pp. 130–141.

[3] Bergevin, C., and Steinke, S. (1999). Undergraduate Research Projects, University of Arizona Department of Mathematics. Accessed 22 Apr. 2006. http://math.arizona.edu/ ura/971/bergevin/.

[4] Olver, P.J., and Shakiban, C. (2006). Applied Linear Algebra. Pearson Prentice Hall, Theorem 10.12.

[5] Glendinning, P. (1994). Stability, Instability, and Chaos. Cambridge University Press, Ch. 11.

[6] Trefethen, L.N., and Bau, D. (1997). Numerical Linear Algebra. SIAM, Society for Industrial and Applied Mathematics.

[7] Gleick, J. (1987). Chaos: Making a New Science. Viking.
29

More Related Content

What's hot

A new implementation of k-MLE for mixture modelling of Wishart distributions
A new implementation of k-MLE for mixture modelling of Wishart distributionsA new implementation of k-MLE for mixture modelling of Wishart distributions
A new implementation of k-MLE for mixture modelling of Wishart distributionsFrank Nielsen
 
Lagrange's equation with one application
Lagrange's equation with one applicationLagrange's equation with one application
Lagrange's equation with one applicationZakaria Hossain
 
Maths ppt partial diffrentian eqn
Maths ppt partial diffrentian eqnMaths ppt partial diffrentian eqn
Maths ppt partial diffrentian eqnDheerendraKumar43
 
M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...
M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...
M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...Project KRIT
 
Distributavity
DistributavityDistributavity
Distributavityabc
 
Presentation on Solution to non linear equations
Presentation on Solution to non linear equationsPresentation on Solution to non linear equations
Presentation on Solution to non linear equationsRifat Rahamatullah
 
A comparative analysis of predictve data mining techniques3
A comparative analysis of predictve data mining techniques3A comparative analysis of predictve data mining techniques3
A comparative analysis of predictve data mining techniques3Mintu246
 
Cramer row inequality
Cramer row inequality Cramer row inequality
Cramer row inequality VashuGupta8
 
Applied numerical methods lec4
Applied numerical methods lec4Applied numerical methods lec4
Applied numerical methods lec4Yasser Ahmed
 
Fault tolerant process control
Fault tolerant process controlFault tolerant process control
Fault tolerant process controlSpringer
 
The Mullineux Map and p-Regularization For Hook Partitions
The Mullineux Map and p-Regularization For Hook PartitionsThe Mullineux Map and p-Regularization For Hook Partitions
The Mullineux Map and p-Regularization For Hook Partitionsayatan2
 
A Study of Some Systems of Linear and Nonlinear Partial Differential Equation...
A Study of Some Systems of Linear and Nonlinear Partial Differential Equation...A Study of Some Systems of Linear and Nonlinear Partial Differential Equation...
A Study of Some Systems of Linear and Nonlinear Partial Differential Equation...inventionjournals
 

What's hot (20)

Es272 ch4a
Es272 ch4aEs272 ch4a
Es272 ch4a
 
overviewPCA
overviewPCAoverviewPCA
overviewPCA
 
A new implementation of k-MLE for mixture modelling of Wishart distributions
A new implementation of k-MLE for mixture modelling of Wishart distributionsA new implementation of k-MLE for mixture modelling of Wishart distributions
A new implementation of k-MLE for mixture modelling of Wishart distributions
 
Lagrange's equation with one application
Lagrange's equation with one applicationLagrange's equation with one application
Lagrange's equation with one application
 
Maths ppt partial diffrentian eqn
Maths ppt partial diffrentian eqnMaths ppt partial diffrentian eqn
Maths ppt partial diffrentian eqn
 
Es272 ch4b
Es272 ch4bEs272 ch4b
Es272 ch4b
 
M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...
M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...
M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...
 
Distributavity
DistributavityDistributavity
Distributavity
 
QMC: Operator Splitting Workshop, Using Sequences of Iterates in Inertial Met...
QMC: Operator Splitting Workshop, Using Sequences of Iterates in Inertial Met...QMC: Operator Splitting Workshop, Using Sequences of Iterates in Inertial Met...
QMC: Operator Splitting Workshop, Using Sequences of Iterates in Inertial Met...
 
Presentation on Solution to non linear equations
Presentation on Solution to non linear equationsPresentation on Solution to non linear equations
Presentation on Solution to non linear equations
 
A comparative analysis of predictve data mining techniques3
A comparative analysis of predictve data mining techniques3A comparative analysis of predictve data mining techniques3
A comparative analysis of predictve data mining techniques3
 
Cramer row inequality
Cramer row inequality Cramer row inequality
Cramer row inequality
 
Applied numerical methods lec4
Applied numerical methods lec4Applied numerical methods lec4
Applied numerical methods lec4
 
Fault tolerant process control
Fault tolerant process controlFault tolerant process control
Fault tolerant process control
 
Estimationtheory2
Estimationtheory2Estimationtheory2
Estimationtheory2
 
Metodo de muller
Metodo de mullerMetodo de muller
Metodo de muller
 
OPERATIONS RESEARCH
OPERATIONS RESEARCHOPERATIONS RESEARCH
OPERATIONS RESEARCH
 
The Mullineux Map and p-Regularization For Hook Partitions
The Mullineux Map and p-Regularization For Hook PartitionsThe Mullineux Map and p-Regularization For Hook Partitions
The Mullineux Map and p-Regularization For Hook Partitions
 
ResearchPaper
ResearchPaperResearchPaper
ResearchPaper
 
A Study of Some Systems of Linear and Nonlinear Partial Differential Equation...
A Study of Some Systems of Linear and Nonlinear Partial Differential Equation...A Study of Some Systems of Linear and Nonlinear Partial Differential Equation...
A Study of Some Systems of Linear and Nonlinear Partial Differential Equation...
 

Viewers also liked

Viewers also liked (16)

BOLADALE CV
BOLADALE CVBOLADALE CV
BOLADALE CV
 
Globalisation speech
Globalisation speechGlobalisation speech
Globalisation speech
 
Case study example essay 2009
Case study example essay 2009Case study example essay 2009
Case study example essay 2009
 
ΣΗΜΕΙΩΣΕΙΣ 12
ΣΗΜΕΙΩΣΕΙΣ 12ΣΗΜΕΙΩΣΕΙΣ 12
ΣΗΜΕΙΩΣΕΙΣ 12
 
India Globalisation Speech
India Globalisation SpeechIndia Globalisation Speech
India Globalisation Speech
 
Summary india
Summary   indiaSummary   india
Summary india
 
ΣΗΜΕΙΩΣΕΙΣ 11
ΣΗΜΕΙΩΣΕΙΣ 11ΣΗΜΕΙΩΣΕΙΣ 11
ΣΗΜΕΙΩΣΕΙΣ 11
 
ΣΗΜΕΙΩΣΕΙΣ 13
ΣΗΜΕΙΩΣΕΙΣ 13ΣΗΜΕΙΩΣΕΙΣ 13
ΣΗΜΕΙΩΣΕΙΣ 13
 
ΣΗΜΕΙΩΣΕΙΣ 10
ΣΗΜΕΙΩΣΕΙΣ 10ΣΗΜΕΙΩΣΕΙΣ 10
ΣΗΜΕΙΩΣΕΙΣ 10
 
ΠΑΡΟΥΣΙΑΣΗ 13
ΠΑΡΟΥΣΙΑΣΗ 13ΠΑΡΟΥΣΙΑΣΗ 13
ΠΑΡΟΥΣΙΑΣΗ 13
 
China Globalisation Speech
China Globalisation Speech China Globalisation Speech
China Globalisation Speech
 
Globalisation essay
Globalisation essayGlobalisation essay
Globalisation essay
 
ΠΑΡΟΥΣΙΑΣΗ 10
ΠΑΡΟΥΣΙΑΣΗ 10ΠΑΡΟΥΣΙΑΣΗ 10
ΠΑΡΟΥΣΙΑΣΗ 10
 
Elixir Phoenix
Elixir PhoenixElixir Phoenix
Elixir Phoenix
 
Case study example essay 2008
Case study example essay 2008Case study example essay 2008
Case study example essay 2008
 
Case study example essay 2010
Case study example essay 2010Case study example essay 2010
Case study example essay 2010
 

Similar to Synchronizing Chaotic Systems - Karl Dutson

Newton paper.docx
Newton  paper.docxNewton  paper.docx
Newton paper.docxnitmor1
 
Tensor 1
Tensor  1Tensor  1
Tensor 1BAIJU V
 
On elements of deterministic chaos and cross links in non- linear dynamical s...
On elements of deterministic chaos and cross links in non- linear dynamical s...On elements of deterministic chaos and cross links in non- linear dynamical s...
On elements of deterministic chaos and cross links in non- linear dynamical s...iosrjce
 
Non equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flowsNon equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flowsSpringer
 
Non equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flowsNon equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flowsSpringer
 
Non linearequationsmatlab
Non linearequationsmatlabNon linearequationsmatlab
Non linearequationsmatlabsheetslibrary
 
Solution of non-linear equations
Solution of non-linear equationsSolution of non-linear equations
Solution of non-linear equationsZunAib Ali
 
Non linearequationsmatlab
Non linearequationsmatlabNon linearequationsmatlab
Non linearequationsmatlabZunAib Ali
 
Numarical values
Numarical valuesNumarical values
Numarical valuesAmanSaeed11
 
Numarical values highlighted
Numarical values highlightedNumarical values highlighted
Numarical values highlightedAmanSaeed11
 
Series_Solution_Methods_and_Special_Func.pdf
Series_Solution_Methods_and_Special_Func.pdfSeries_Solution_Methods_and_Special_Func.pdf
Series_Solution_Methods_and_Special_Func.pdfmohamedtawfik358886
 
07 chap3
07 chap307 chap3
07 chap3ELIMENG
 
The geometry of three planes in space
The geometry of three planes in spaceThe geometry of three planes in space
The geometry of three planes in spaceTarun Gehlot
 

Similar to Synchronizing Chaotic Systems - Karl Dutson (20)

Two
TwoTwo
Two
 
Multiple scales
Multiple scalesMultiple scales
Multiple scales
 
PhasePlane1-1.pptx
PhasePlane1-1.pptxPhasePlane1-1.pptx
PhasePlane1-1.pptx
 
Newton paper.docx
Newton  paper.docxNewton  paper.docx
Newton paper.docx
 
On the dynamics of distillation processes
On the dynamics of distillation processesOn the dynamics of distillation processes
On the dynamics of distillation processes
 
Tensor 1
Tensor  1Tensor  1
Tensor 1
 
506
506506
506
 
On elements of deterministic chaos and cross links in non- linear dynamical s...
On elements of deterministic chaos and cross links in non- linear dynamical s...On elements of deterministic chaos and cross links in non- linear dynamical s...
On elements of deterministic chaos and cross links in non- linear dynamical s...
 
simpl_nie_engl
simpl_nie_englsimpl_nie_engl
simpl_nie_engl
 
Non equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flowsNon equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flows
 
Non equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flowsNon equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flows
 
Non linearequationsmatlab
Non linearequationsmatlabNon linearequationsmatlab
Non linearequationsmatlab
 
Solution of non-linear equations
Solution of non-linear equationsSolution of non-linear equations
Solution of non-linear equations
 
Non linearequationsmatlab
Non linearequationsmatlabNon linearequationsmatlab
Non linearequationsmatlab
 
Numarical values
Numarical valuesNumarical values
Numarical values
 
Numarical values highlighted
Numarical values highlightedNumarical values highlighted
Numarical values highlighted
 
Series_Solution_Methods_and_Special_Func.pdf
Series_Solution_Methods_and_Special_Func.pdfSeries_Solution_Methods_and_Special_Func.pdf
Series_Solution_Methods_and_Special_Func.pdf
 
Paper06
Paper06Paper06
Paper06
 
07 chap3
07 chap307 chap3
07 chap3
 
The geometry of three planes in space
The geometry of three planes in spaceThe geometry of three planes in space
The geometry of three planes in space
 

Synchronizing Chaotic Systems - Karl Dutson

  • 1. Synchronizing Chaotic Systems Summer 2006 Student: Karl J. Dutson Advisor: Dr. Robert Indik An appropriate definition of Chaos Theory (and one of many) is “the quali- tative study of unstable, aperiodic behavior in deterministic, non-linear, dynam- ical systems” [1]. Chaos implies that a system exhibits sensitive dependence on initial conditions. For the purposes of this research project, we are interested in two types of dynamical systems which can be chaotic. The first is a map, of the general form: xn+1 = F(xn), where F : Rm −→ Rm . A second system of equal interest is an autonomous system of ordinary differential equations (ODE’s), of the form: dy dt = y = F(y), where F : Rm −→ Rm . The specific examples we will use include the logistic map: xn+1 = axn(1 − xn) , (1) and the well known Lorenz system [2]: ˙x = −σx + σy ˙y = −xz + rx − y ˙z = xy − bz , where a, σ, r, and b are positive parameters. Synchronization is a phenomenon that can occur when two or more copies of some dynamical system couple together. If two systems are coupled, they interact with each other. If they synchronize, their behavior is nearly identical, even though it may still be unpredictable. A question that arises concerning two (or more) such systems is “what kind of coupling leads to synchronization?” “Can we predict which values of the system parameters will cause synchronization to occur?” It turns out that the answer is yes, as verified by previous research [3], and this is possible through determining what are known as Lyapunov Exponents (LE’s) of a system. Calculating the LE’s of a chaotic system provides a great deal of information about its dynamics. In particular, the largest LE measures the rate at which initially close solutions to the system diverge from each other. Also, previous research [3] revealed that the critical coupling strength for which two coupled dynamical systems will synchronize is given in terms of the first LE. 
Although the LE is a crucial piece of information regarding a complex system, in practice it can be difficult to calculate. Fortunately, the results of 1
  • 2. synchronization offer an adequate method of measuring the largest LE of the system. However, there are other LE’s of complex systems, which are also of interest, but are much harder to compute. Our objective is to investigate the coupling of more than two copies of a dynamical system, and see if the synchronization of these systems can be described in terms of higher order LE. LE will be explained in further detail later. The outline of this report is as follows: We will first examine the logistic map. This will be done by generating a bifurcation diagram to show how its behavior depends on its single parameter a. In addition, we will analytically find some bifurcation points and expressions for fixed points. LE’s will then be introduced within this context. Next we will consider two coupled copies of the logistic map, which form a 2- dimensional system. We will find what parameter value(s) for coupling strength synchronize the system, for different values of a, and their dependence on the LE. Following this we will look at higher-dimensional coupled systems, and we hope to find relations between the synchronization of these systems and their higher order LE’s. Recall that a map, in general, is of the form F : Rm −→ Rm . For any initial condition x0 ∈ Rm , the map defines a sequence recursively by xn+1 = F(xn). (2) Thus the recursive sequence produced by evaluating (or iterating) the map is a function of the initial condition(s) and the number of iterations, n. For example, if we set x0 = 0.5 and a = 1, and iterate the logistic map (Equation 1) once we obtain x1 = 0.25. Then, to iterate the map again, we start with x1 = 0.25 as our initial input, and the result is x2 = 0.1875, and so on. In this way, for any integer n ≥ 0, xn is defined. The iteration of a map can be thought of as the evolution of the state of some system. A common example is how the population of a species grows or decreases from one year to the next, based on an initial population. 
For example, a caterpillar population changing from generation to generation would be a suitable application of a map. This means that the number of iterations in the sequence, n, must be a positive integer; it does not make sense to evaluate something like x1.75. Anytime we have a function such as F, we can call it a map and can define iterates of that function (or map) by xn = xn(x0). (3) Also, the same function F can be used to define an autonomous ODE: dx dt = F(x), (4) 2
  • 3. and for each initial condition x0 there exists a solution x = x(t, x0). (5) Note that while both solutions (Equations 3 and 5) depend on an initial con- dition, the evolution of the map depends on n, the number of iterations, and the ODE solution depends on time, t. Thus with a continuous flow, such as Equation 4, it makes sense to evaluate non-integer points or intervals in time, as time is continuous and non-discrete. A fixed point of the iterated map satisfies x = F(x), or xn+1 = xn. For the logistic map, a nice trivial example of a fixed point is x = 0, from which xn+1 = 0 = xn. We say that a list x0, x1, . . . , xP −1 is a period P orbit for the map F if xP = F(xP −1) = x0. If the list x0, . . . , xP −1 is indeed a period P orbit, then every xj in the list is a fixed point of the map G defined by iterating F P times: G(x) = F(F(· · · (F(x)) · · · )), (6) where the dots above mean F is composed with itself P times. A good way to tell if a system is stable is to determine whether its long term behavior is highly dependent on initial conditions. If a system is stable, initial conditions close to each other will eventually converge into the same orbit, or the same fixed point. The system will not display the level of sensitivity to initial conditions that is the trademark of chaos. Conversely, if the system is not stable, initially close conditions will eventually diverge from each other exponentially, and perhaps even indefinitely. So, from these two possible outcomes, we can narrow down whether or not the system is stable by taking two initial conditions separated by a very small difference, and observing how that difference changes in the long run. Suppose that x is a fixed point x ∈ Rm of a map F such that x = F(x). To determine whether x is stable we slightly perturb x to x + δ0 (where |δ0| < 1 is very small) and apply the map F to x0 = x + δ0. Then δ0 = x0 − x and the fixed point is stable if δn = xn − x −→ 0. 
Because we are assuming |δ_n| = |x_n − x| is small, a linearization in the neighborhood of x suffices to check stability:

F(x + δ_0) ≈ F(x) + F′(x)δ_0, (7)

by Taylor's Theorem. Since x = F(x), F(x + δ_n) ≈ x + F′(x)δ_n, so

δ_{n+1} ≈ F′(x)δ_n. (8)

Therefore

δ_n ≈ [F′(x)]^n δ_0. (9)

Thus the fixed point is stable if [F′(x)]^n → 0. This is true if and only if the eigenvalues λ of F′(x) all have the property |λ| < 1 [4]. Note that because F maps R^m to R^m, F′(x) is an m × m matrix, namely the Jacobian matrix:
J(x_1, x_2, . . . , x_m) =
[ ∂F_1/∂x_1   ∂F_1/∂x_2   . . .   ∂F_1/∂x_m ]
[ ∂F_2/∂x_1   ∂F_2/∂x_2   . . .   ∂F_2/∂x_m ]
[   ...           ...      ...       ...    ]
[ ∂F_m/∂x_1   ∂F_m/∂x_2   . . .   ∂F_m/∂x_m ].

We will explore the fixed points and stability of the logistic map, which, as stated above, is our first dynamical system of interest. It is written

x_{n+1} = a x_n (1 − x_n),

where 0 < a < 4. While this may seem like a very simple function, its behavior can quickly become quite complex, even chaotic. The difference between chaos and stability depends only on the value of the parameter a. We started exploring the map numerically, by choosing an initial condition, an appropriate number of iterations of the map, and varying the value of a. For each calculation in the following table, the initial condition was x_0 = 0.7, and the number of iterations, n, was 5000. Initial conditions other than x_0 = 0.7 (where 0 < x_0 < 1) yielded results that varied only slightly from those listed. The quantity denoted x in the table is the value that the iteration converged to after the map was applied an appropriate number of times (this number was different depending on a, but was always < 5000), and is an approximate fixed point.

  a       x        f′(x)
  0.10    0         0.1000
  0.25    0         0.2500
  0.50    0         0.5000
  0.75    0         0.7500
  0.99¹   0.0001    0.9899
  1.00    0.0002    0.9999
  1.10    0.0909    0.9000
  1.25    0.2000    0.7500
  1.50    0.3333    0.5000
  1.75    0.4286    0.2499
  1.90    0.4737    0.0999
  2.00    0.5000    0
  2.10    0.5238   −0.1000
  2.25    0.5560   −0.2520
  2.50    0.6000   −0.5000
  2.75    0.6364   −0.7502
  2.90    0.6552   −0.9002

  a      x_1      x_2       f′(x_1)    f′(x_2)
  3.0¹   0.6700   0.6330   −1.0200    −0.9798
  3.1    0.5580   0.7646   −0.3596    −1.6400
  3.2    0.5130   0.7995   −0.0832    −1.9168
  3.3    0.4794   0.8236    0.1360    −2.1358
  3.4    0.4520   0.8422    0.3264    −2.3270

¹ For these a values, the iteration seems to converge very slowly. The number of iterations selected was sufficient for the other values of a, but for these, many more applications of the map would be necessary to obtain the same accuracy. Theoretically, at a = 0.99, x should equal 0, and at a = 3, it should be true that x_1 = x_2.
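The numerical experiment behind these tables can be sketched in a few lines. This is a minimal sketch, not the report's actual program; `logistic`, `settle`, and `fprime` are our own names.

```python
# Iterate the logistic map from x0 = 0.7 and record the value it settles
# to, along with f'(x) = a(1 - 2x) evaluated there.

def logistic(x, a):
    return a * x * (1.0 - x)

def settle(a, x0=0.7, n=5000):
    """Apply the map n times and return the final iterate."""
    x = x0
    for _ in range(n):
        x = logistic(x, a)
    return x

def fprime(x, a):
    return a * (1.0 - 2.0 * x)

for a in (0.5, 1.5, 2.5):
    xbar = settle(a)
    print(a, round(xbar, 4), round(fprime(xbar, a), 4))
```

For 1 < a < 3 the value printed agrees with the table's fixed point, and the derivative column reproduces f′(x) = 2 − a derived later in the text.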
This data suggests that there are three major intervals containing three different types of stable behavior for this map. When 0 < a < 1, the fixed point x seems always to equal 0. But for 1 ≤ a ≤ 3, the eventual fixed point is different for each a, and is > 0. Finally, when 3 < a < 3.45, there seems to be convergence to a period 2 solution, whose two values are the x_1 and x_2 in the second data table above. The behavior is even more complicated when a ≥ 3.45, because the map converges to a period 4 solution! Furthermore, based on several extensive numerical measurements, it seems that the system's behavior becomes chaotic for a > 3.544, as very slight changes in a cause additional (even multiple) period doublings. It is for this reason that the data for a ≥ 3.45 has not been included in the table. We will elaborate more on this later, but for now, let us concern ourselves only with these first three intervals, as anything beyond them gets much more complicated.

For these intervals, we want to analyze the behavior of the system, which means finding fixed points and analyzing their stability. Let us begin with a starting value x_0, and define f(x) = ax(1 − x), so that

x_{n+1} = f(x_n). (10)

Now, suppose we perturb this starting value by a very small amount δ_0, where |δ_0| ≪ 1, and call the result

y_0 = x_0 + δ_0. (11)

Then

δ_0 = y_0 − x_0, and δ_n = y_n − x_n. (12)

So the question then becomes: what happens to δ_n in the long run? Does δ_n → 0, so that x_n ≈ y_n and the system (or at least the interval) is stable? Or does δ_n diverge, so that the system is unstable? To proceed, let us also define y_n by

y_{n+1} = f(y_n) = a y_n (1 − y_n). (13)

Thus

y_{n+1} = x_{n+1} + δ_{n+1} = f(y_n), (14)

and since y_n = x_n + δ_n,

f(y_n) = f(x_n + δ_n) ≈ f(x_n) + δ_n f′(x_n), (15)

by Taylor's Theorem. So we have

x_{n+1} + δ_{n+1} ≈ f(x_n) + δ_n f′(x_n), (16)

and therefore, by subtracting f(x_n) = x_{n+1} from both sides,

δ_{n+1} ≈ δ_n f′(x_n). (17)
And from this, we obtain an important result:

δ_n ≈ δ_0 ∏_{k=0}^{n−1} f′(x_k). (18)

Note that if x_n = x is a fixed point, then this equation is a one-dimensional case of the general result found in Equation 9. If

lim_{n→∞} ∏_{k=0}^{n−1} f′(x_k) = 0, (19)

then δ_n → 0. A sufficient condition for this to be true is |f′(x_n)| ≤ α for some α < 1, since then

∏_{k=0}^{n−1} |f′(x_k)| ≤ α^n → 0. (20)

For our first two intervals, 0 < a < 1 and 1 ≤ a ≤ 3, the long-term behavior seems to converge to a fixed point, x. Thus

∏_{k=0}^{n−1} |f′(x_k)| = |f′(x)|^n, (21)

meaning that we can substitute and simplify things greatly:

lim_{n→∞} ∏_{k=0}^{n−1} |f′(x_k)| = lim_{n→∞} |f′(x)|^n. (22)

Now f(x) = ax(1 − x), so

f′(x) = a(1 − x) + ax(−1) (23)
      = a − ax − ax (24)
      = a(1 − 2x). (25)

From the data it seems that for all 0 < a < 1, the system converges to x = 0, so we will start by evaluating the stability of this point:

f′(0) = a(1 − 0) = a. (26)

Hence we find that

lim_{n→∞} |f′(x)|^n = lim_{n→∞} |a|^n. (27)

And if 0 < a < 1, then

lim_{n→∞} a^n = 0. (28)
So for the interval 0 < a < 1, we know that the fixed point x = 0 is stable, and that δ_n → 0. Perturbed initial conditions do not diverge from each other, so there is no sensitive dependence on initial conditions, and the system is not chaotic.

This same argument (Equation 27) also shows that for a > 1, x = 0 is an unstable fixed point. This fixed point still exists; it is just not stable on any interval other than 0 < a < 1. So what are the stable point(s) for a > 1? According to our data, unlike for 0 < a < 1, the fate of x is not the same regardless of a, and x ≠ 0. Instead, x converges to some positive non-zero value that depends on the magnitude of a. We can check all this by analytically obtaining an expression for a fixed point ≠ 0 as a function of a on the interval 1 < a < 3. We begin with the definition of a fixed point, f(x) = x, which for the logistic map becomes

ax(1 − x) = x. (29)

Solving for x, we obtain

x = 1 − 1/a. (30)

This equation agrees with the data, and we can see that the stable fixed point onto which the system eventually settles does indeed increase with a, and is always positive. Also, for a < 1 or a > 3, the fixed points given by Equation 30 are unstable, though they still exist.

Using this result (Equation 30), we can show that

lim_{n→∞} ∏_{k=0}^{n−1} |f′(x_k)| = 0. (31)

This can be done by evaluating f′(x) and substituting in Equation 30, our stable fixed point solution for the interval:

f′(x) = a(1 − 2x)
f′(1 − 1/a) = a[1 − 2(1 − 1/a)] (32)
            = a − 2a + 2 = 2 − a.

So f′(x) = 2 − a, and

lim_{n→∞} ∏_{k=0}^{n−1} |f′(x_k)| = lim_{n→∞} |2 − a|^n = 0, (33)

because 1 < a < 3, so |2 − a| < 1.

For a ≥ 3, things quickly become more complicated. Recall from the result in Equation 6 that because we seem to have a period 2 orbit for a > 3, we have two period 2 solutions. These are different from our two stable solutions
(x = 0 and x = 1 − 1/a) of the previous intervals. Thus we experience our first period doubling at a = 3. This first period doubling also coincides with a loss of fixed point stability. To find period 2 solutions, we look for fixed points of f ∘ f:

f(f(x)) = x = f(ax(1 − x)). (34)

After a fair helping of algebra, we deduce from this the following 4th degree polynomial equation:

x(a³x³ − 2a³x² + a³x + a²x − a² + 1) = 0, (35)

whose roots provide expressions for the four fixed points of f ∘ f, which include the period 2 solutions. Finding these roots might have been difficult, but we already know two of them, x = 0 and x = 1 − 1/a, so we do not have to solve for all four! These two were the stable fixed points for 0 < a < 1 and 1 < a < 3 respectively, so if we divide them out of our quartic polynomial, we are left with a quadratic equation whose roots are the expressions for the period 2 fixed points. We start by dividing out x, since it is clear that x = 0 satisfies the equation, and obtain

a³x³ − 2a³x² + a³x + a²x − a² + 1 = 0. (36)

Next, since x = 1 − 1/a was a solution (and has been double-checked as a solution of Equation 35), we divide the remaining cubic by the factor

x − (1 − 1/a) = x − 1 + 1/a = x + (1/a − 1). (37)

The quotient of Equation 36 and Equation 37 is

ax² − ax − x + 1/a + 1 = ax² − (a + 1)x + (1/a + 1) = 0, (38)

and thus our quartic equation is reduced to a more manageable quadratic, whose roots were found to be

x = (a + 1)/(2a) ± √((a + 1)(a − 3))/(2a). (39)

There are two fixed points here (the map oscillates between the two as long as a > 3), so by letting x_{p1} represent the first and x_{p2} the second, we may state

x_{p1} = (a + 1)/(2a) + √((a + 1)(a − 3))/(2a), (40)

x_{p2} = (a + 1)/(2a) − √((a + 1)(a − 3))/(2a). (41)
Now we have an expression for each of the two fixed points entirely as a function of a. Note that these are only valid for a ≥ 3 (otherwise the results are imaginary), and that for a = 3, x_{p1} = x_{p2}, which gives the same result as x = 1 − 1/a.

Previously, for the intervals 0 < a < 1 and 1 < a ≤ 3, we obtained an equation for the fixed point(s), and then for the derivative evaluated at this point. The same was done here; but because we have two fixed points, we have two linearizations, given by

f′(x_{p1}) = −1 − √((a + 1)(a − 3)), (42)

f′(x_{p2}) = −1 + √((a + 1)(a − 3)). (43)

Our expressions for f′(x_{p1}) and f′(x_{p2}) offer a method of determining the stability of the period 2 solution. Upon looking at the data table once again, you will notice that within the interval 1 ≤ a < 3, f′(x) −→ −1 as a increases. Indeed, the next period doubling occurs precisely when the product f′(x_{p1})f′(x_{p2}) = −1. Hence, by setting f′(x_{p1})f′(x_{p2}) = −1, we can solve for a to find the maximum a value for this interval, and thereby locate exactly where the next period doubling occurs:

f′(x_{p1})f′(x_{p2}) = −1 (44)

1 − (a + 1)(a − 3) = −1 (45)

a² − 2a − 5 = 0. (46)

The roots of this final quadratic turn out to be a = 1 ± √6. Since we are only looking at 0 < a < 4, a = 1 + √6 ≈ 3.4495 is the correct upper limit of stability for this period 2 interval. Also, for this interval, it is still true that

lim_{n→∞} δ_n = lim_{n→∞} ( ∏_{k=0}^{n−1} f′(x_k) ) δ_0 = 0, (47)

because

−1 < f′(x_{p1})f′(x_{p2}) < 1. (48)

At a ≈ 3.4495 there is another period doubling. This second period doubling also coincides with an additional loss of fixed point stability, as with the first. Here there will be a period 4 orbit for the map, and so it will oscillate among four fixed points. Where previously the map would return to a given fixed point every other iteration, it will now return to a given fixed point only every four iterations.
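The period 2 formulas above are easy to check numerically. In this sketch, `period2_points` and `fprime` are our own helper names; it verifies that the two roots of Equation 39 map onto each other under f, and that the orbit's multiplier equals 1 − (a + 1)(a − 3).

```python
import math

# Check Equations 39-43 at a sample parameter value in (3, 1 + sqrt(6)).

def period2_points(a):
    s = math.sqrt((a + 1.0) * (a - 3.0))
    return ((a + 1.0 + s) / (2.0 * a), (a + 1.0 - s) / (2.0 * a))

def fprime(x, a):
    return a * (1.0 - 2.0 * x)

a = 3.2
xp1, xp2 = period2_points(a)
f = lambda x: a * x * (1.0 - x)

# The two points form a genuine 2-cycle: f swaps them.
assert abs(f(xp1) - xp2) < 1e-12 and abs(f(xp2) - xp1) < 1e-12

# Multiplier of the cycle vs. the closed form 1 - (a+1)(a-3).
mult = fprime(xp1, a) * fprime(xp2, a)
print(mult, 1.0 - (a + 1.0) * (a - 3.0))   # both approximately 0.16
```

Since |0.16| < 1, the period 2 orbit at a = 3.2 is stable, consistent with the data table.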
If we wanted to find expressions for each of these four fixed points, the procedure is the same, starting with

x = f(f(f(f(x)))), (49)
which characterizes a fixed point in this situation. We could then substitute f(x) = ax(1 − x) into this and solve for x, which would yield a degree 8 polynomial equation. Its roots would provide expressions for each fixed point. This equation can be partially factored, because we already have four fixed point expressions, and thus we already have four of its roots. But dividing out all four roots still leaves us with a 4th degree polynomial. Its roots would be much more unpleasant than those previously procured, and would require some careful computing to collect. However, if we were to find them, we could then linearize at each fixed point expression. Finally, by evaluating

f′(x_{p1})f′(x_{p2})f′(x_{p3})f′(x_{p4}) = −1 (50)

(where x_{p1}, x_{p2}, etc. denote the period 4 fixed points) and solving for a, we could find where the next period doubling takes place, along with the upper boundary of the period 4 fixed point interval. We did not attempt this, but we were able to determine that the interval where these four fixed points exist is approximately 1 + √6 < a < 3.544. Notice that the intervals between period doublings are decreasing. This is no coincidence; the decrease in interval length with each successive period doubling is geometric, as proven by Feigenbaum's Theorem [5]. Also, upon first analyzing the map, the value a = 3.544 seemed to mark the onset of chaos. The findings from all the data and analytical calculations are summarized by the following bifurcation diagram:

[Figure: Bifurcation diagram showing the behavior of the logistic map as a function of the parameter a; the fixed point(s) x are plotted against a for 0 < a < 4.]
Hopefully you can see from the trend and from the diagram that following this last interval another period doubling occurs, and then another and another. It appears this will continue indefinitely as a increases. But the question remains: where exactly does the system go chaotic? True, the behavior becomes seemingly more and more complicated with each period doubling, but where (and how) do you draw the line between an enormous number of fixed points and the chaotic regime? From the data it seems that this occurs at around a = 3.544, but how do we know for sure? This is where the Lyapunov Exponent comes in.

Recall that the Lyapunov Exponent (LE) of a system is a gauge of sorts as to how chaotic the system is. More precisely, the LE measures the rate at which solutions slightly perturbed from each other diverge. If we refer back to Equation 18,

δ_n = δ_0 ∏_{k=0}^{n−1} f′(x_k),

we have a nice way of looking at this mathematically. Another way to view this expression is that we are linearizing at each point in the trajectory (or time series), getting a sample of the stability and dynamics near that point, and then combining all of these pieces (or stability gauges) to get an overall idea of the entire system dynamics. Hence this can be done for the long term by evaluating

lim_{n→∞} |δ_n| / |δ_0| = lim_{n→∞} ∏_{k=0}^{n−1} |f′(x_k)|, (51)

so, to observe the average level of stability, we look at the nth root of Equation 51:

lim_{n→∞} ( |δ_n| / |δ_0| )^{1/n} = lim_{n→∞} ( ∏_{k=0}^{n−1} |f′(x_k)| )^{1/n}. (52)

To get the Lyapunov Exponent, we take the log of Equation 52:

lim_{n→∞} (1/n) log ∏_{k=0}^{n−1} |f′(x_k)|. (53)

By determining this quantity (the LE), we have a means of measuring the average system dynamics in the long run, and the amount of divergence or stretching that takes place. Equation 53 was used to find the LE of the logistic map for 3 ≤ a ≤ 4, and the results are in the table below:
  Parameter a   Lyapunov Exponent
  3.00          −0.0001
  3.10          −0.2638
  3.20          −0.9163
  3.25          −1.3863
  3.30          −0.6189
  3.40          −0.1372
  3.50          −0.8725
  3.56          −0.0771
  3.57           0.0106
  3.60           0.1818
  3.63          −0.0191
  3.70           0.3524
  3.74          −0.1119
  3.75           0.3639
  3.80           0.4349
  3.83          −0.3695
  3.90           0.4968
  4.00           0.6933

A negative LE corresponds to non-chaotic behavior, while a positive LE corresponds to chaotic behavior. The magnitude of the LE plays a key role as well: if the LE is positive, then the larger it is, the more chaotic the system's overall behavior; likewise, if the LE is negative, the opposite is true. Note that at values such as a = 3.63, 3.74, and 3.83 (and others), the behavior briefly exits the chaotic regime, as seen in the figure below.

[Figure: Comparison of the Lyapunov exponents and the bifurcation diagram, both as functions of a, over 3.5 ≤ a ≤ 4.]
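The exponents in this table can be estimated directly from Equation 53. A minimal sketch (the function name `lyapunov` and the burn-in choice are ours): summing log|f′(x_k)| along a trajectory is numerically safer than forming the product itself.

```python
import math

# Estimate the Lyapunov exponent of the logistic map as the trajectory
# average of log|f'(x_k)|, after discarding a transient.

def lyapunov(a, x0=0.7, n=5000, burn=500):
    x = x0
    for _ in range(burn):                # let the orbit settle first
        x = a * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(a * (1.0 - 2.0 * x)))
        x = a * x * (1.0 - x)
    return total / n

print(lyapunov(2.5))   # ~ log(1/2) = -0.693, since f'(x) = 2 - a = -0.5 there
print(lyapunov(4.0))   # positive, close to the table's 0.6933
```

At a = 2.5 the orbit sits on the stable fixed point, so the average collapses to log|2 − a|; at a = 4 the estimate is positive, confirming chaos.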
In fact, an interesting discovery at this stage of the research was that of a period 3 orbit with three fixed points (well after the second period doubling), which was detected unintentionally by arbitrarily setting a = 3.83. This is depicted in the figure below:

[Figure: Period 3 fixed points, found in the map at a = 3.83 with x_0 = 0.7; the iterates x_n are plotted against n, the number of iterations.]

Even though this is not a new result, the independent discovery of a period 3 orbit almost immediately following the chaotic regime was still a fascinating one. The data and figures provided show that the LE is useful for more than just determining where we cross the line into chaos. Being able to calculate the LE for various values of a and plot it alongside the bifurcation diagram showed that the system does not become monotonically more chaotic with increasing a, something which was not obvious, and is perhaps even counterintuitive.

Now that we have thoroughly investigated the behavior of the logistic map, we wish to explore the concept of synchronization. A 2-dimensional coupled system can be created from the logistic map by taking two copies of it and introducing a new parameter ε, where ε < 1. The logistic map

f(x_n) = x_{n+1} = a x_n (1 − x_n) (54)

then becomes the coupled system
x_{n+1} = f((1 − ε)x_n + ε y_n), (55)
y_{n+1} = f((1 − ε)y_n + ε x_n), (56)

which is given by

x_{n+1} = a[(1 − ε)x_n + ε y_n][1 − ((1 − ε)x_n + ε y_n)], (57)
y_{n+1} = a[(1 − ε)y_n + ε x_n][1 − ((1 − ε)y_n + ε x_n)]. (58)

Note that the coupled system is derived by replacing x_n (or y_n) with the weighted average (1 − ε)x_n + ε y_n (or (1 − ε)y_n + ε x_n). If ε = 0.5, then regardless of the initial values x_0 and y_0, x_n = y_n for n ≥ 1. On the other hand, if ε = 0, the systems decouple; if the initial conditions in this case are nearby but unequal, they will diverge whenever the logistic map is chaotic. But what values of ε synchronize the coupled system? And how do they relate to the Lyapunov Exponent?

There are indeed precise values of ε that synchronize the coupled system, and they are in fact different for different values of a. Moreover, there is not just one such value of ε, but an entire interval where synchronization occurs. Thus, for a given value of a, there exists a range of ε values that synchronize the system. These facts were found numerically, and can be seen from the table below. For all values in this table, the initial perturbation was δ_0 = 10⁻⁴, the initial condition was x_0 = 0.7, and the total number of iterations was n = 4000.

  a     LE (Measured)   Synchronizing ε Value (Measured)
  3.6   0.1818          0.086
  3.7   0.3524          0.158
  3.8   0.4349          0.180
  3.9   0.4968          0.200

Note that for each a, the interval of synchronization is [ε_synch, 0.5]: between the ε value which first synchronized the system and 0.5. This is illustrated by the following pair of figures, which were obtained by first running 5000 iterates of x_n, and then introducing and iterating the coupled system, with y_n = x_n + δ_n. The first shows the system in an unsynchronized state, with a = 3.9 and ε = 0.197904:
[Figure: Coupled system, a = 3.9, x_0 = 0.7, δ_0 = 0.0001, ε = 0.197904, n = 4000 (unsynchronized); y plotted against x.]

By slightly increasing the value of ε to 0.197905, synchronization is achieved, and the graph becomes:

[Figure: Coupled system, a = 3.9, x_0 = 0.7, δ_0 = 0.0001, ε = 0.197905, n = 4000 (synchronized); y plotted against x.]

We can tell that the two systems are synchronized because their behavior is virtually the same.
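This experiment is easy to reproduce. The sketch below (with our own helper name `couple_run`) iterates Equations 55 and 56 and reports the largest |x_n − y_n| over the final iterations, which collapses to zero for ε above the measured threshold and stays order-one below it.

```python
# Iterate the coupled logistic maps and measure the worst-case
# desynchronization over the tail of the run.

def couple_run(a, eps, x0=0.7, delta0=1e-4, n=4000, tail=200):
    f = lambda u: a * u * (1.0 - u)
    x, y = x0, x0 + delta0
    worst = 0.0
    for k in range(n):
        x, y = f((1 - eps) * x + eps * y), f((1 - eps) * y + eps * x)
        if k >= n - tail:
            worst = max(worst, abs(x - y))
    return worst

a = 3.9
print(couple_run(a, 0.21))   # eps above the threshold 0.200: essentially zero
print(couple_run(a, 0.05))   # eps well below: the copies stay apart
```

Tracking the maximum over a tail of the run, rather than a single final difference, avoids being fooled by a chaotic trajectory that happens to pass near the diagonal at one particular step.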
There were many other interesting phenomena surrounding the behavior of the system and how it developed in approaching synchrony or moving out of it. A few of these are depicted below. They demonstrate how the coupled system's dynamics changed drastically with minor variations of a (specifically, changes of +0.1). Here the value of ε was held constant at 0.09, along with everything else except a.

[Figure: Coupled system, a = 3.6, x_0 = 0.7, δ_0 = 0.0001, ε = 0.09, n = 5000; y plotted against x.]

[Figure: Coupled system, a = 3.7, x_0 = 0.7, δ_0 = 0.0001, ε = 0.09, n = 5000; y plotted against x.]
[Figure: Coupled system, a = 3.8, x_0 = 0.7, δ_0 = 0.0001, ε = 0.09, n = 5000; y plotted against x.]

[Figure: Coupled system, a = 3.9, x_0 = 0.7, δ_0 = 0.0001, ε = 0.09, n = 5000; y plotted against x.]
At this point it is necessary to introduce the Singular Value Decomposition of a matrix. Let A ∈ M_{m×n} be any matrix, viewed as a map A : Rⁿ −→ Rᵐ. Then there exist orthogonal matrices U ∈ M_{m×m} and V ∈ M_{n×n}, along with a diagonal matrix Σ ∈ M_{m×n}, such that A may be factored as

A = UΣVᵀ. (59)

The diagonal elements of Σ, denoted σ_1, σ_2, . . . , σ_{min(m,n)}, are uniquely determined by A, and appear in descending order: σ_1 ≥ σ_2 ≥ . . . ≥ σ_{min(m,n)} ≥ 0. These diagonal entries of Σ are known as the Singular Values of A, and Equation 59 is the Singular Value Decomposition of A. Recall that because U is orthogonal, its columns form an orthonormal set, and therefore UᵀU = I = UUᵀ; the same is obviously true for V as well. Another way of saying this is that there are orthonormal bases v_j for Rⁿ and u_j for Rᵐ such that A v_j = σ_j u_j with σ_j ≥ 0. Here v_j and u_j are the columns of V and U respectively.

The implications of the Singular Value Decomposition go far beyond the fact that any matrix can be factored as A = UΣVᵀ. Let S represent the unit sphere (the n-dimensional equivalent of the unit circle in R², and as such centered at the origin), so that

S = { s ∈ Rⁿ | s = Σ_j c_j v_j, Σ_j c_j² = 1 },

since the columns v_j of V form an orthonormal basis for Rⁿ and ‖s‖ = 1. Then the image of S under the application of any linear map A is always a hyperellipsoid (like the unit sphere, a hyperellipsoid is the n-dimensional equivalent of an ellipse in R²) whose principal axes are parallel to the columns of U:

AS = { w ∈ Rᵐ | w = A Σ_{j=1}^{n} c_j v_j = Σ_{j=1}^{n} c_j σ_j u_j, Σ_{j=1}^{n} c_j² = 1 }.

Thus a geometrical interpretation of the Singular Value Decomposition is that any linear mapping can be written as a rotation or reflection by Vᵀ, a stretching (or shrinking) along the principal axis of each dimension by Σ, followed by a rotation or reflection by U. That A v_j = σ_j u_j may come as a surprise, but this fact, and its relevance, can easily be shown.
We have A = UΣVᵀ, and because V is orthogonal, we can write

AV = UΣ. (60)

Now V has columns v_1, v_2, . . . , v_n, so the jth column of AV, namely A v_j, is equal to the jth column of UΣ. But Σ has only diagonal entries, so the jth column of the m × n matrix UΣ is just σ_j u_j. Therefore

A v_j = σ_j u_j. (61)
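These identities are easy to check numerically on a random matrix. The example below is our own (not from the report) and uses NumPy's SVD routine, which returns U, the singular values, and Vᵀ.

```python
import numpy as np

# Verify Equations 59-61 on a random 4 x 3 matrix.

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(A)          # s holds sigma_1 >= sigma_2 >= sigma_3 >= 0

# Equation 59: reconstruct A from the factorization.
Sigma = np.zeros((4, 3))
np.fill_diagonal(Sigma, s)
assert np.allclose(A, U @ Sigma @ Vt)

# Equation 61: each right singular vector v_j (a row of Vt) is mapped
# by A onto sigma_j times the corresponding left singular vector u_j.
for j in range(3):
    assert np.allclose(A @ Vt[j], s[j] * U[:, j])
print("A v_j = sigma_j u_j verified")
```

The same script can also confirm the eigenvalue relationship derived next, since the eigenvalues of AᵀA are exactly the σ_j².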
If each σ_j is distinct, then the vectors u_j and v_j are unique up to a sign. Because σ_1 is the First Singular Value of A (since σ_1 ≥ σ_2 ≥ . . . ≥ σ_{min(m,n)} ≥ 0), σ_1 u_1 is the First Singular Vector of A, and corresponds to the largest principal axis of the resulting hyperellipsoid. Likewise σ_2 u_2 is the Second Singular Vector and the second largest principal axis, and so on.

The fact that A = UΣVᵀ allows us to write

AᵀA = (UΣVᵀ)ᵀ(UΣVᵀ) = (VΣᵀUᵀ)(UΣVᵀ) = VΣ²Vᵀ, (62)

where Σ² = ΣᵀΣ is diagonal with entries σ_j², because UᵀU = I since U is orthogonal. In the same manner, AAᵀ = UΣ²Uᵀ. Now let W = AᵀA, and let x ∈ Rⁿ be an eigenvector of W with eigenvalue λ, so that

Wx = λx. (63)

Substituting AᵀA for W, we have

VΣ²Vᵀx = λx. (64)

Now we introduce another quantity, R = Vᵀx, so that x = VR. Substituting this expression for x into Equation 64, we obtain

VΣ²Vᵀx = VΣ²Vᵀ(VR) = VΣ²R = λVR, (65)

and then, applying Vᵀ to both sides,

Σ²R = λR. (66)

This implies a very useful result: σ_j² = λ for some j.

Suppose that instead of some matrix A ∈ M_{m×n}, we have a symmetric matrix B ∈ M_{m×m}, such that Bᵀ = B, with eigenvalues λ_i. B will have a Singular Value Decomposition B = UΣVᵀ, as before. However, the quantity W will now be

W = BᵀB = B² = VΣ²Vᵀ. (67)

Now recall that, in general, two matrices C and D are similar if there exists some invertible matrix X such that C = X⁻¹DX and D = XCX⁻¹. Furthermore, if C and D are similar, then their eigenvalues are the same. We can see in Equation 67 that the matrix W = B² is similar to Σ². This implies another useful result, namely that the eigenvalues of B² and Σ² are the same. Since Σ² is diagonal with entries σ_1², σ_2², . . . , σ_{min(m,n)}², these are the eigenvalues of both matrices, and therefore

σ_j² = λ_i². (68)
Taking the square root of both quantities gives

|σ_j| = |λ_i|, (69)

but we have specified that each σ_j be non-negative, so

σ_j = |λ_i|. (70)

Two important results have been demonstrated: that the singular values of a matrix A are the square roots of the eigenvalues of AᵀA, and that for a symmetric matrix, the singular values are equal to the absolute values of its eigenvalues. These facts can be used to gain further insight into synchronization, and to verify the numerical findings in the last table.

Our coupled system, based on the logistic map, can be written

( x_{n+1}, y_{n+1} )ᵀ = ( f((1 − ε)x_n + ε y_n), f((1 − ε)y_n + ε x_n) )ᵀ = F( (x_n, y_n)ᵀ ; a, ε ). (71)

Let

V_n = (x_n, y_n)ᵀ, (72)

so that Equation 71 may be written more concisely as V_{n+1} = F(V_n). Also, let x_0 and y_0 be initial conditions, and let

V_0 = (x_0, y_0)ᵀ (73)

be the initial state of the system. As for the uncoupled system, if we perturb V_0 by δ_0, the eventual fate of the system will be determined by

δ_n ≈ F′(V_{n−1}) F′(V_{n−2}) · · · F′(V_1) F′(V_0) δ_0. (74)

But previously the derivative was just a scalar, so what is it in this case? Since

F(V_n) = ( f((1 − ε)x_n + ε y_n), f(ε x_n + (1 − ε)y_n) )ᵀ, (75)

and, more importantly, since the coupled system is 2-dimensional, F′(V_n) here is a 2 × 2 Jacobian, given by
F′(x_n, y_n) =
[ ∂F_1/∂x   ∂F_1/∂y ]
[ ∂F_2/∂x   ∂F_2/∂y ]
=
[ ∂/∂x f((1 − ε)x_n + ε y_n)    ∂/∂y f((1 − ε)x_n + ε y_n) ]
[ ∂/∂x f(ε x_n + (1 − ε)y_n)    ∂/∂y f(ε x_n + (1 − ε)y_n) ]. (76)

Or, after being evaluated,

F′(V_n) =
[ (1 − ε) f′((1 − ε)x_n + ε y_n)    ε f′((1 − ε)x_n + ε y_n) ]
[ ε f′(ε x_n + (1 − ε)y_n)          (1 − ε) f′(ε x_n + (1 − ε)y_n) ]. (77)

We are interested in the stability of the synchronized solution; that is, if we start with V_0 where x_0 = y_0, so that x_n = y_n, we want to know what happens to δ_n. Do its entries become equal in the limit? To determine this, we substitute x_n = y_n = v_n into our last expression, and obtain

F′(V_n) =
[ (1 − ε) f′(v_n)   ε f′(v_n) ]
[ ε f′(v_n)         (1 − ε) f′(v_n) ]
= f′(v_n) [ (1 − ε)   ε ; ε   (1 − ε) ], (78)

where f′(v_n) = f′(x_n) = f′(y_n). Now that we have determined F′(V_n), we can rewrite Equation 74, the expression for how the initial perturbation changes:

δ_n ≈ ( ∏_{k=0}^{n−1} f′(v_k) ) [ (1 − ε)   ε ; ε   (1 − ε) ]ⁿ δ_0. (79)

Now let

E = [ (1 − ε)   ε ; ε   (1 − ε) ]. (80)

This is actually why the Singular Values are relevant to our purpose. They correspond to how far a system diverges, or stretches, from its initial state, and therefore can measure whether the eventual behavior is chaotic. Let σ_k(n) ≥ 0 denote the evolution
of the kth Singular Value of some matrix as a function of n applications of the map. If σ_1(n) → ∞ as n → ∞, then the stretching persists indefinitely, and indeed the behavior is chaotic. Moreover, if both σ_1(n) and σ_2(n) → ∞ as n → ∞, then even wilder stretching is occurring, and so forth. If we can find the Singular Values of E, we can determine whether it will synchronize the system (that is, whether δ_n → 0), and therefore whether the synchronization is entirely determined by ε. Since E is symmetric, we need only find its eigenvalues to obtain the Singular Values (as previously shown). The eigenvalues of E were found to be λ_1 = 1 and λ_2 = 1 − 2ε, with corresponding eigenvectors h_1 = (1, 1)ᵀ and h_2 = (1, −1)ᵀ respectively.

The initial perturbation δ_0 may be written as a linear combination of these eigenvectors, since they form a basis:

δ_0 = α (1, 1)ᵀ + β (1, −1)ᵀ, (81)

where

α = (x_0 + y_0)/2 and β = (x_0 − y_0)/2. (82)

This can then be placed into Equation 79 to give

δ_n ≈ ( ∏_{k=0}^{n−1} f′(v_k) ) Eⁿ ( α (1, 1)ᵀ + β (1, −1)ᵀ ). (83)

However, a further simplification can be deduced if we observe a simple trend. We already know δ_0, so let us continue on and evaluate δ_1 and then δ_2:

δ_1 = f′(v_0) E ( α (1, 1)ᵀ + β (1, −1)ᵀ ) = f′(v_0) ( α (1, 1)ᵀ + (1 − 2ε) β (1, −1)ᵀ ),

δ_2 = f′(v_1) f′(v_0) E² ( α (1, 1)ᵀ + β (1, −1)ᵀ ) = f′(v_1) f′(v_0) ( α (1, 1)ᵀ + (1 − 2ε)² β (1, −1)ᵀ ).

The matrix multiplication is not entirely obvious, but the trend continues, and it follows that Equation 83 becomes
δ_n = ( ∏_{k=0}^{n−1} f′(v_k) ) ( α (1, 1)ᵀ + (1 − 2ε)ⁿ β (1, −1)ᵀ ). (84)

We know that if the original system (the logistic map) is chaotic, then the coupled system is chaotic as well. However, we are interested in how close to synchronization the system is, which is determined by x_n − y_n = (1, −1)δ_n, or by whether |x_n − y_n| → 0. Our last equation for δ_n, Equation 84, can be rearranged to reflect this:

(1, −1)δ_n = x_n − y_n = (1, −1) ( ∏_{k=0}^{n−1} f′(v_k) ) ( α (1, 1)ᵀ + (1 − 2ε)ⁿ β (1, −1)ᵀ ) (85)

= ( ∏_{k=0}^{n−1} f′(v_k) ) (1 − 2ε)ⁿ 2β (86)

= ( ∏_{k=0}^{n−1} f′(v_k) ) (1 − 2ε)ⁿ (x_0 − y_0), (87)

because 2β = x_0 − y_0. Take note that for n = 0, the expression collapses down to the initial conditions, as it should. The product here should be a familiar quantity: it appeared in Equation 18, and the LE can be obtained from it by taking the limit and the log, as in Equation 53. Let L denote the Lyapunov Exponent, so that

L = lim_{n→∞} (1/n) log ∏_{k=0}^{n−1} |f′(v_k)|. (88)

Multiplying both sides by n, and then raising them as exponents of e, we may roughly say that

e^{nL} = ∏_{k=0}^{n−1} |f′(v_k)|. (89)

Though in these operations we are side-stepping the limit present in this expression and neglecting some deeper mathematics needed to handle it, this is sufficient for our purposes. Using this last result, we can determine precisely which values of ε will synchronize the system. If e^{nL} |1 − 2ε|ⁿ ≪ 1, then |x_n − y_n| → 0 and the system synchronizes; if e^{nL} |1 − 2ε|ⁿ ≫ 1, it does not. Thus the critical coupling occurs exactly when e^{nL} (1 − 2ε)ⁿ = 1. From this it follows that
e^{nL} (1 − 2ε)ⁿ = 1 (90)

[ e^{L} (1 − 2ε) ]ⁿ = 1

e^{L} (1 − 2ε) = 1

1 − 2ε = e^{−L}

ε = (1 − e^{−L}) / 2. (91)

We have just shown that whether or not the system synchronizes is entirely determined by ε; the critical value of ε is a function of the LE of the original system, and is therefore a function of a. These analytical findings were used to verify and test the accuracy of those found numerically, in the previous table:

  a     LE (Measured)   Synchronizing ε Value (Measured)   (1 − e^{−L})/2
  3.6   0.1818          0.086                              0.083
  3.7   0.3524          0.158                              0.149
  3.8   0.4349          0.180                              0.176
  3.9   0.4968          0.200                              0.196

Apparently the accuracy of the numerical results was right on the mark: the analytical values are quite close to those found in the previous data set!

We have determined the values of ε for which the coupled system synchronizes, and related this to the LE of our one-dimensional dynamical system. But we wish to extend the analysis to more complex coupling schemes, and attempt to relate higher order LE's to the synchronization of more than two copies of higher dimensional maps. This can be approached by building on the generalization already obtained in Equation 9:

δ_n ≈ [F′(x)]ⁿ δ_0,

where F′(x) is an m × m Jacobian matrix. It governs the fate of some perturbation δ_0, and thus the stability of an m-dimensional map. This generalization has been applied in two cases: for the logistic map, F′(x) was simply 1 × 1; for the coupled system, F′(x) was 2 × 2. Now we will explore the general case.

To approach this, we make not just 2, but m copies of some function F : Rⁿ −→ Rⁿ, not necessarily the logistic map. We begin by letting

Z = (z_1, z_2, . . . , z_m)ᵀ ∈ R^{mn}, where each z_j ∈ Rⁿ. (92)
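Before continuing to the general construction, note that the two-copy prediction ε = (1 − e^{−L})/2 is easy to tabulate from the measured L values in the table above. A small sketch (`eps_critical` is our own name):

```python
import math

# Critical coupling predicted from a measured Lyapunov exponent L
# (Equation 91): eps = (1 - e^{-L}) / 2.

def eps_critical(L):
    return (1.0 - math.exp(-L)) / 2.0

for a, L in [(3.6, 0.1818), (3.7, 0.3524), (3.8, 0.4349), (3.9, 0.4968)]:
    # Compare with the table's predicted column: 0.083, 0.149, 0.176, 0.196.
    print(a, eps_critical(L))
```

Each predicted value sits within a few thousandths of the coupling strength that was measured to first synchronize the system.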
As with the coupled one-dimensional systems, we create a coupled system by replacing each vector z_j in Z with a weighted average. Our first approach is to start with a simple case. One possible weighted average, similar to that chosen for the 2-dimensional coupled system, is

z_1 −→ (1 − (m − 1)ε)z_1 + ε z_2 + · · · + ε z_m
z_2 −→ ε z_1 + (1 − (m − 1)ε)z_2 + · · · + ε z_m
...
z_m −→ ε z_1 + ε z_2 + · · · + (1 − (m − 1)ε)z_m

for some ε ≤ 0.5. One way to create this weighted average is to apply to Z the following block matrix:

Q =
[ (1 − (m − 1)ε)I_n   ε I_n               . . .   ε I_n ]
[ ε I_n               (1 − (m − 1)ε)I_n   . . .   ε I_n ]
[   ...                  ...               ...     ...  ]
[ ε I_n               ε I_n               . . .   (1 − (m − 1)ε)I_n ].

Here Q is mn × mn, consisting of m rows by m columns of blocks. Each block of Q is an n × n matrix; specifically, an n × n identity matrix multiplied by ε or by a constant based on ε. For each block Q_ij of Q, the entry is (1 − (m − 1)ε)I_n for i = j, and ε I_n for i ≠ j. For our purposes, a more useful way to write Q is to separate the diagonal and off-diagonal entries. Expanding the quantity 1 − (m − 1)ε, we obtain 1 − mε + ε, so we may write

Q = (1 − mε)I_{mn} + ε K, where K =
[ I_n   . . .   I_n ]
[  ...   ...    ... ]
[ I_n   . . .   I_n ].

Thus, by applying Q to Z, we obtain the desired weighted average, which will be denoted T = QZ. Next we apply F to each block of the new vector T, and we will call this result

N(Z) = F(QZ) = F(T). (93)

You will notice that the procedure is essentially the same as with the coupling of the two one-dimensional systems. But here, instead of replacing x with its weighted average and applying the logistic map (thereby creating a 2-dimensional coupled system), we
are replacing each vector z_j ∈ Z with a weighted average, and applying some general function F.

Having created a coupled system with an appropriate weighted average, we now evaluate whether the synchronization of this system is given entirely in terms of ε. As with the coupled one-dimensional system, the long-term fate of the generalized system is given by

δ_n ≈ N′(Z_{n−1}) N′(Z_{n−2}) · · · N′(Z_1) N′(Z_0) δ_0.    (94)

As in each of the previous cases, this means we need to find N′. Since N(Z) = F(T) = F(QZ), the chain rule gives N′(Z) = F′(QZ) Q, so

δ_n ≈ F′(T_{n−1})Q F′(T_{n−2})Q · · · F′(T_1)Q F′(T_0)Q δ_0.    (95)

For the logistic map, F′ was 1 × 1, a scalar; for the coupled system, F′ was a 2 × 2 Jacobian matrix; for this generalized system, F′ has dimension mn × mn. Furthermore, F′(T) is block diagonal:

F′(T) = [ F′(T)   0     . . .   0
            0    F′(T)   ...   ...
           ...    ...    ...    0
            0     0     . . .  F′(T) ],

where each of the blocks F′(T) along the principal diagonal is an n × n Jacobian itself.

Now we introduce some useful properties needed to complete our analysis. Say C and D are any two square matrices of the same size, with singular values σ_1, . . . , σ_k and ρ_1, . . . , ρ_k respectively. In general, the singular values of CD are not simply σ_1 ρ_1, . . . , σ_k ρ_k. However, if CD = DC and C D^T = D^T C, then the singular values of CD and DC are in fact σ_1 ρ_1, . . . , σ_k ρ_k. Also, for two commuting matrices, the eigenvalues of C + D are the sums of corresponding eigenvalues of C and D. Why are these facts important? Because if they apply, then by finding the eigenvalues of F′(T) and Q (which are both symmetric matrices) separately, multiplying them together, and taking the absolute value of the product, we will have obtained the singular values of F′(T)Q. From there, we will be able to determine whether the synchronization of our general system is based on ε alone.
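The singular-value fact can be sanity-checked numerically. When F′(T) has identical symmetric diagonal blocks, it commutes with Q, their product is symmetric, and the singular values of that product are exactly the absolute values of its eigenvalues (which are pairwise products of the factors' eigenvalues along common eigenvectors). A sketch of that check, with a random symmetric block B standing in for an actual Jacobian:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, eps = 3, 2, 0.1

B = rng.standard_normal((n, n))
B = B + B.T                                    # symmetric n x n stand-in Jacobian
Fp = np.kron(np.eye(m), B)                     # block diagonal, identical blocks
K = np.kron(np.ones((m, m)), np.eye(n))
Q = (1 - m * eps) * np.eye(m * n) + eps * K

assert np.allclose(Fp @ Q, Q @ Fp)             # the factors commute
P = Fp @ Q                                     # symmetric: (Fp Q)^T = Q Fp = Fp Q
sv = np.sort(np.linalg.svd(P, compute_uv=False))
ev = np.sort(np.abs(np.linalg.eigvals(P)))
assert np.allclose(sv, ev)                     # singular values = |eigenvalues|
```

The commutation here is exact because Fp = I_m ⊗ B and Q = ((1 − mε) I_m + ε · ones) ⊗ I_n are both Kronecker products with compatible factors.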
So the next step was to see whether the products F′(T)Q and F′(T)Q^T commute. Since F′(T) and Q are symmetric, we know F′(T)Q^T = F′(T)Q. Some calculations showed that the products do indeed commute, and we computationally confirmed that F′(T)Q = QF′(T), which implies F′(T)Q^T = Q^T F′(T). This can also be seen directly, given that F′(T) and Q both have the property that all of their diagonal blocks are the same: multiplying the block-diagonal matrix F′(T) by Q simply multiplies each block of Q by F′(T), regardless of multiplicative order. Either way the product is
the block matrix whose diagonal blocks are (1 − (m − 1)ε) F′(T) and whose off-diagonal blocks are ε F′(T).

This useful fact allows us to rearrange Equation 95 from

δ_n ≈ F′(T_{n−1})Q F′(T_{n−2})Q · · · F′(T_1)Q F′(T_0)Q δ_0

into the simpler

δ_n ≈ Υ Q^n δ_0,

where Υ is the block-diagonal mn × mn matrix whose m diagonal blocks each equal the product F′(T_{n−1}) F′(T_{n−2}) · · · F′(T_0). We will refer to the full matrix product on the right-hand side as Ψ = Υ Q^n. Whether δ_n → 0 as n → ∞ can be determined from the singular values of Ψ, and in finding these, the aforementioned facts come into play.

We begin with the fact that for two commuting matrices C and D, the eigenvalues of C + D are the sums of corresponding eigenvalues of C and D. Thus the eigenvalues of Q are simply the sums of the eigenvalues of (1 − mε) I_mn and εK.

The rank of K is n, so its null space has dimension mn − n = n(m − 1); therefore one eigenvalue of K is 0, with multiplicity n(m − 1). The only other eigenvalue is m, with multiplicity n. So, since the eigenvalues of (1 − mε) I_mn are all (1 − mε), Q has eigenvalues (1 − mε) + εm = 1 and (1 − mε) + 0 = (1 − mε), with multiplicities n and n(m − 1) respectively. As established above, since Q is symmetric its singular values equal the absolute values of its eigenvalues. So then
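The eigenvalue count for Q can be confirmed directly. A quick numeric check (our own; m = 4, n = 3, ε = 0.1 are arbitrary choices):

```python
import numpy as np

m, n, eps = 4, 3, 0.1
K = np.kron(np.ones((m, m)), np.eye(n))
Q = (1 - m * eps) * np.eye(m * n) + eps * K

eigs = np.linalg.eigvalsh(Q)                  # Q is symmetric
ones = int(np.sum(np.isclose(eigs, 1.0)))
rest = int(np.sum(np.isclose(eigs, 1 - m * eps)))
assert ones == n                              # eigenvalue 1, multiplicity n
assert rest == n * (m - 1)                    # eigenvalue 1 - m*eps, multiplicity n(m-1)
```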
|λ_1| = · · · = |λ_n| = σ_1 = · · · = σ_n = 1    (96)

and

|λ_{n+1}| = · · · = |λ_{mn}| = σ_{n+1} = · · · = σ_{mn} = |1 − mε|.    (97)

Similarly, the singular values of Q^n are 1^n = 1, with multiplicity n, and |(1 − mε)^n|, with multiplicity n(m − 1).

Unfortunately, this means that synchronization is not entirely controlled by ε, as we would have hoped. To illustrate, let the singular values of Ψ, Q^n, and Υ be ψ_j(n), q_j(n), and σ_j(n) respectively. Then

ψ_j(n) = q_j(n) σ_j(n).    (98)

But q_1(n) = · · · = q_n(n) = 1, and q_{n+1}(n) = · · · = q_{mn}(n) = (1 − mε)^n. Therefore

ψ_1(n) = σ_1(n), . . . , ψ_n(n) = σ_n(n),    (99)

and

ψ_{n+1}(n) = σ_{n+1}(n)(1 − mε)^n, . . . , ψ_{mn}(n) = σ_{mn}(n)(1 − mε)^n.    (100)

From this it follows that Q^n does not completely control synchronization: the first n singular values of Ψ are untouched by ε, and the transverse singular values decay precisely when the growth of the Jacobian product, governed by the largest LE, is overcome by the factor (1 − mε)^n. The values of ε which synchronize the system therefore depend only on the first Lyapunov Exponent. This is, of course, disappointing, as we wanted to describe the synchronization of a generalized coupled system in terms of any LE but the first! So it appears that our basic first approach and weighted average were not sufficient to relate the higher-order LE's of an m-dimensional system to its synchronization.

Though it is disappointing that our initial, simplistic attempt did not yield further insight into this relation, the possibility was anticipated, and at the least we have made another small step toward our stated objective by ruling out the weighted average used. Based on this result, we know that further investigation into generalizing the results of synchronization would require a very different and more complex weighted average. In pursuing the research topic beyond this point, the next step would be to find such a weighted average and repeat the same analytical steps with it. However, the challenge would be to properly select a weighted average that isolates the synchronization of a generalized coupled system purely as a function of its coupling parameter.
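Since the transverse singular values scale like (1 − mε)^n while the Jacobian product grows like e^(nL), the same reasoning as in the two-copy case suggests a synchronization threshold ε > (1 − e^(−L))/m for m all-to-all coupled copies. This extrapolation is ours, not a result stated in the report, but a small simulation of m coupled logistic maps is consistent with it:

```python
import random

def coupled_spread(a, m, eps, n_steps=3000, seed=1):
    """Iterate m logistic maps coupled through the weighted average above;
    return the largest spread max(x) - min(x) over the final 200 steps."""
    rng = random.Random(seed)
    x = [0.4 + 0.2 * rng.random() for _ in range(m)]
    worst = 0.0
    for step in range(n_steps):
        s = sum(x)
        # each copy sees (1 - m*eps) of itself plus eps times the total
        x = [a * t * (1 - t)
             for t in ((1 - m * eps) * xi + eps * s for xi in x)]
        if step >= n_steps - 200:
            worst = max(worst, max(x) - min(x))
    return worst

a, m = 3.8, 3   # L ~ 0.435, so the predicted threshold is (1 - e^-L)/3 ~ 0.118
print(coupled_spread(a, m, 0.16))   # above threshold: copies collapse together
print(coupled_spread(a, m, 0.05))   # below threshold: copies stay apart
```

The weighted average stays a convex combination (all coefficients nonnegative, summing to 1), so the coupled orbit remains in [0, 1] and the comparison is clean.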
Finding exactly which type of weighted average would provide a means to the end we seek would be a task in and of itself, and other coupling schemes should be investigated as well.
References

[1] Kellert, S.H. (1993). In the Wake of Chaos: Unpredictable Order in Dynamical Systems (Chicago and London: The University of Chicago Press).

[2] Lorenz, E.N. (1963). Deterministic Nonperiodic Flow (Journal of the Atmospheric Sciences: Vol. 20, No. 2, pp. 130–141).

[3] Bergevin, C., and Steinke, S. (1999). Undergraduate Research Projects - University of Arizona, University of Arizona Department of Mathematics. 22 Apr. 2006. http://math.arizona.edu/ura/971/bergevin/.

[4] Olver, P.J., and Shakiban, C. (2006). Applied Linear Algebra (Pearson Prentice Hall: Theorem 10.12).

[5] Glendinning, P. (1994). Stability, Instability, and Chaos (Cambridge University Press: Ch. 11).

[6] Trefethen, L.N., and Bau, D. (1997). Numerical Linear Algebra (SIAM - Society for Industrial and Applied Mathematics).

[7] Gleick, J. (1987). Chaos: Making a New Science (Viking).