1 Introduction

For this report we look into the results of a paper by N.J. Balmforth, S.G. Llewellyn Smith and W.R. Young [1], called “Dynamics of interfaces and layers in a stratified turbulent fluid”, and try to replicate some of its results using MatLab. This is done with Chebyshev differentiation matrices, as opposed to the common method of differentiation using Fast Fourier Transforms. The report first looks at the results of the paper to be replicated, then at the method of learning the MatLab style of coding, and finally at the replicated results.
2 Literature Review

2.1 Dynamics of interfaces and layers in a stratified turbulent fluid
2.1.1 Introduction
The main paper for this project was “Dynamics of interfaces and layers in a stratified turbulent fluid” [1]. In this paper Balmforth et al. laid out a scheme to solve the problem of the turbulent mixing (e.g. stirring) of a fluid (for example water) infused with salt. The variables of the paper are the ‘horizontally averaged buoyancy gradient and the density of turbulent kinetic energy’. A hands-on mixing-length argument led to the development of coupled differential equations that were parabolic in nature. For the purposes of this project the differential equations were limited to the dimensionless ones below, but the paper goes on to the full scope with dimensional equations and steady states; more about that later.
g_t = (l e^{1/2} g)_{zz},   (1)

e_t = β (l e^{1/2} e_z)_z − l e^{1/2} g + ε l^{−1} (1 − e) e^{1/2},   (2)

l = e^{1/2} / (e + g)^{1/2}.   (3)
Here g is the buoyancy gradient and e the energy density. These equations were developed to look at the instability of linear stratification and buoyancy gradient, which leads to staircases and layers in fluids of intermediate stratification. These staircases are set up when, for example, a “fluid [with an] initially uniform and stable salt gradient is set into turbulent motion by dragging a rod or a grid back and forth; then, as a result of turbulent mixing, the density field evolves into a staircase profile” [1].
2.1.2 Model
To derive these equations a model was devised that describes how the average
buoyancy b(z, t) and average turbulent kinetic energy density e(z, t) change in
time. These are described as two coupled nonlinear diffusion equations:
b_t = (l e^{1/2} b_z)_z,

e_t = β (l e^{1/2} e_z)_z − l e^{1/2} b_z − α l^{−1} e^{3/2} + P,
l in this case is the mixing length, which will be discussed shortly. These equations are built up as follows. The buoyancy field and the kinetic energy density are transported by a turbulent eddy diffusion, both diffusivities being proportional to l e^{1/2}; β is a dimensionless constant that tells how they diffuse in relation to each other. The next term in the energy equation,

−l e^{1/2} b_z,

describes how e decreases with the turbulence from the vertical mixing of the stable stratification. The term

α l^{−1} e^{3/2}

is how the turbulent kinetic energy dissipates; this term is more conveniently denoted ε. The energy production term is the last term in the energy equation, P. This is how the stirring motion affects the turbulent motion.
For the diffusion equations the boundary conditions were

b_z = e_z = 0

at the top and bottom of the fluid, z = 0 and z = H, with H being the depth of the fluid. This makes sure that no energy or buoyancy flux passes through the boundaries.
The mixing length mentioned earlier is a length scale prescribing how big the stirring device is relative to the stratification of the fluid. Equation (3) was used because if g = 0 then l is determined by the stirrer, while if g ≠ 0 the relationship needed between l and g is an inverse one. A prescription of this is therefore needed, and equation (3) is a simple case of it; there are many more that work.
The last term to deal with is P, the energy production. This takes the form

α e^{1/2} U^2 / l

as “the eddy speed, e^{1/2}, adjusts to a velocity scale, U (dimensions L/T), which is proportional to the speed of the stirring device, on the eddy turnover timescale.” This leads to the kinetic energy equation

e_t = β (l e^{1/2} e_z)_z − l e^{1/2} b_z + α (e^{1/2}/l)(U^2 − e).

From here it is easy to get the non-dimensional equations, with g ≡ b_z.
2.1.3 Numerical Solutions
Balmforth et al. solved equations (1)-(3) using a finite element collocation procedure designed to solve coupled nonlinear differential equations, based on the paper by Keast & Muir 1991 [2]. For this a spatial resolution of 4000 polynomial interpolants was used. For the initial conditions, g and e were prescribed as

g = g_i {1 − cosh[20(z/H − 1/2)] / cosh(10)},   e = e_i.

These require the initial values g_i and e_i; for these Balmforth et al. used g_i = 0.0218 and e_i = 0.0994, with β = 1 and ε = 1/50.
2.2 Spectral Methods in MatLab
2.2.1 Introduction
The main source for learning the coding of MatLab and the spectral methods was Lloyd N. Trefethen’s Spectral Methods in MatLab [3]. This book introduced the ideas of Chebyshev polynomials and matrices and combined them with time-stepping methods, which was the crux of solving the equations laid out in [1].
2.2.2 Chapter 1
The first chapter of the book sets up the idea of differentiation matrices using the simplest case of a finite difference formula. A set of grid points {x_j} and associated function values {u(x_j)} are used, with x_{j+1} − x_j = h, and the approximate (second-order) derivative is

w_j ≈ u′(x_j) = (u_{j+1} − u_{j−1}) / (2h).
To set this up as a matrix-vector multiplication, assuming the function is periodic so that u_0 = u_N and u_{N+1} = u_1, the matrix can be derived by considering the Taylor expansions of u(x_{j+1}) and u(x_{j−1}). The finite difference equation from the second-order Taylor expansion then becomes:
\begin{pmatrix} w_1 \\ \vdots \\ w_N \end{pmatrix}
= h^{-1}
\begin{pmatrix}
0 & \tfrac{1}{2} & & & -\tfrac{1}{2} \\
-\tfrac{1}{2} & 0 & \tfrac{1}{2} & & \\
& \ddots & \ddots & \ddots & \\
& & -\tfrac{1}{2} & 0 & \tfrac{1}{2} \\
\tfrac{1}{2} & & & -\tfrac{1}{2} & 0
\end{pmatrix}
\begin{pmatrix} u_1 \\ \vdots \\ u_N \end{pmatrix}
(any entries omitted in this and other examples are zero). This matrix is said
to be Toeplitz as it has constant entries along the diagonals. This can be taken
further and looked at with higher-order matrices. For fourth order the matrix-vector product has the typical row

w_j = h^{−1} ( (1/12) u_{j−2} − (2/3) u_{j−1} + (2/3) u_{j+1} − (1/12) u_{j+2} ),

so the matrix has the entries 1/12, −2/3, 0, 2/3, −1/12 repeated along its five central diagonals, wrapping around the corners for periodicity.
These matrices are examples of differentiation matrices. They have order of accuracy 2 and 4, respectively: the corresponding discrete approximations to u′(x_j) converge at the rates O(h^2) and O(h^4).
This subsequently led to the first MatLab program, looked at in the method section that follows, which showed how the error of the fourth-order differentiation matrix relative to the exact derivative decreased as the number of grid points N increased. These are the last non-spectral methods looked at in the book. The rest of this chapter sets up how the matrices can be adapted for periodic functions on an equispaced grid. This was not as relevant to the project, as the methods to be used involve non-periodic domains with algebraic polynomials on irregular grids, as in chapters 5 and 6, which are looked at next.
2.2.3 Chapter 5
Skipping forward to chapter 5 of the book, we now try to develop spectral methods for bounded, non-periodic domains. The idea is to replace the trigonometric polynomials of the previous chapters (used with Fourier techniques) with algebraic polynomials of the form p(x) = a_0 + a_1 x + ... + a_N x^N. It turns out that using equispaced points is ‘catastrophically bad in general’ [3]. They encounter a problem known as the Runge phenomenon (an example showing how bad it is will be looked at later): when a smooth function is interpolated by a polynomial through N + 1 equally spaced points, the approximation error can increase at rates of up to 2^N. If these points were used to form differentiation matrices, the error would be of a similar order. Several choices of non-equispaced points are possible, but all share the same asymptotic theme: as N → ∞, the points are distributed with density per unit length

density ∼ N / (π √(1 − x²)).
The simplest example of this sort of distribution is the set of Chebyshev points,

x_j = cos(jπ/N),   j = 0, 1, ..., N.

These can be visualized as the projections onto [−1, 1] of equispaced points on the unit semicircle, as shown below.
Figure 1: Unit semicircle with projection lines [3]
These points {x_j} are sometimes referred to as Chebyshev-Lobatto points, Gauss-Chebyshev-Lobatto points or Chebyshev extreme points. To show how they avoid the Runge phenomenon, a program was used that displayed the accuracy of interpolating a function using equispaced and Chebyshev points; this program will be looked at later. For the rest of the chapter the book delves into the potential theory underlying these points, but we move on to chapter 6 and carry on looking into Chebyshev points and building the Chebyshev differentiation matrices.
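The contrast between the two point sets can be sketched in NumPy (the report's own programs were in MatLab; `interp_max_error` is a hypothetical helper, and np.polyfit may warn about conditioning at this degree, itself a symptom of the equispaced case):

```python
import warnings
import numpy as np

def interp_max_error(points):
    """Max |p(x) - u(x)| over [-1, 1] for the full-degree interpolant of u."""
    u = lambda t: 1.0 / (1.0 + 16.0 * t ** 2)
    xx = np.linspace(-1.0, 1.0, 1001)
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")      # polyfit warns at high degree
        coeffs = np.polyfit(points, u(points), len(points) - 1)
    return np.max(np.abs(np.polyval(coeffs, xx) - u(xx)))

N = 16
equi_error = interp_max_error(np.linspace(-1.0, 1.0, N + 1))
cheb_error = interp_max_error(np.cos(np.pi * np.arange(N + 1) / N))
```

The equispaced error is of order one or larger (the Runge phenomenon), while the Chebyshev error is small.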
2.2.4 Chapter 6
Leading on from the last chapter, we take the previously mentioned idea of a matrix-vector multiplication and apply it to the Chebyshev points. Given a grid function v defined on the Chebyshev points, there is a unique interpolating polynomial p of degree N (N being any positive integer) with p(x_j) = v_j, and the discrete derivative is w_j = p′(x_j). This is represented as multiplication by an (N + 1) × (N + 1) matrix D_N:

w = D_N v.
To see how this works we derive D_1. As N = 1, there are two Chebyshev points, x_0 = 1 and x_1 = −1, with corresponding polynomial data v_0 and v_1 respectively. Written in Lagrange form, the polynomial and its derivative are

p(x) = ½(1 + x) v_0 + ½(1 − x) v_1  →  p′(x) = ½ v_0 − ½ v_1.

This implies that D_1 is the 2 × 2 matrix

D_1 = \begin{pmatrix} 1/2 & -1/2 \\ 1/2 & -1/2 \end{pmatrix}.
This is a simple case of the Chebyshev matrices, which are displayed later in the report. From here the book derives the pattern of the Chebyshev differentiation matrices element by element. This is best visualized by [3]:
Figure 2: Formula pattern for Chebyshev Matrices
These are then coded using the property

(D_N)_{ii} = −∑_{j=0, j≠i}^{N} (D_N)_{ij},

a simpler and more numerically stable way of making the matrices. This leads on to Programs 11 and 12, which deal with the errors involved in differentiation and the spectral accuracy of Chebyshev differentiation matrices; results follow later. This now allows us to set up differential equations in MatLab, with the next chapter discussing how to deal with the problem of boundary values.
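The construction just described can be sketched in NumPy as a direct translation of the book's cheb routine (the project itself used the MatLab original; Python is used here only for illustration):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D_N and points x_j = cos(j*pi/N)."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal via the row-sum identity
    return D, x
```

cheb(1) reproduces the D_1 derived above, and D_N differentiates polynomials of degree up to N exactly.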
2.2.5 Chapter 7
Chapter 7 deals with the problem of boundary conditions when using Chebyshev differentiation matrices, specifically homogeneous Dirichlet boundary conditions. The chapter starts by looking at the problem of solving

u_xx = e^{4x},   −1 < x < 1,   u(±1) = 0.
This is a form of the Poisson equation. The condition u(±1) = 0 can be built into the second-derivative matrix D² = D × D itself. The book lays out that one can set the outer rows and columns of the matrix to zero, effectively making the boundary values zero no matter what the values of the vector are. This is represented as:
\begin{pmatrix} w_0 \\ w_1 \\ \vdots \\ w_{N-1} \\ w_N \end{pmatrix}
=
\begin{pmatrix}
0 & 0 & \cdots & 0 & 0 \\
0 & & & & 0 \\
\vdots & & D_N^2 & & \vdots \\
0 & & & & 0 \\
0 & 0 & \cdots & 0 & 0
\end{pmatrix}
\begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_{N-1} \\ v_N \end{pmatrix},

where the first and last rows are zeroed so that the boundary outputs w_0 and w_N are ignored, and the first and last columns are zeroed so that the boundary values v_0 and v_N have no effect.
Taking this idea further, the book then looks at the examples of a nonlinear Poisson equation u_xx = e^u and a two-dimensional Poisson equation u_xx + u_yy = 10 sin(8x(y − 1)), both with Dirichlet boundary conditions. These will be looked at in the method section, as they are only extensions of the rule set out above. Now that the method of solving these types of equations has been set up, the next problem the book looks at is when time-stepping is also involved.
2.2.6 Chapter 10
This chapter introduces a formal approach to time-stepping and the stability of its different types. The types of time-stepping used were Euler, leap frog, Adams-Bashforth and Runge-Kutta. The example given in the book was the leap frog formula for one step in time of u_t = u:

(v^{(n+1)} − v^{(n−1)}) / (2Δt) = v^{(n)},

with time step Δt. The book then looked into the stability of some of the time-stepping methods listed prior. These were the first steps toward progressing time forward, ultimately needed for solving the main equations.
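The leap frog formula above, applied to u_t = u, can be sketched as follows (a NumPy illustration, not the book's code; the two-step scheme is seeded with one exact value):

```python
import numpy as np

def leapfrog_exp(dt, T=1.0):
    """Integrate u_t = u from u(0) = 1 with leap frog; exact answer is exp(T)."""
    n = int(round(T / dt))
    v_prev, v = 1.0, np.exp(dt)            # two starting levels for the two-step scheme
    for _ in range(n - 1):
        v_prev, v = v, v_prev + 2.0 * dt * v   # v^(n+1) = v^(n-1) + 2*dt*v^(n)
    return v
```

With dt = 10^{-3} the result agrees closely with e, reflecting the scheme's second-order accuracy.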
2.2.7 Chapter 13
Continuing from chapter 7, the book now looks into different types of boundary conditions and a new method for solving differential equations with Dirichlet boundary conditions. For general boundary conditions two spectral approaches are laid out, the first being the one used in the previous chapter:
1. Restrict attention to interpolants that satisfy the boundary conditions; or
2. Do not restrict the interpolants, but add additional equations to enforce
the boundary conditions.
The second method is a lot more flexible and better for more complicated equations; it is related to the tau methods used in Galerkin spectral methods. These methods are laid out in Programs 32 and 33, the latter being compared to Program 13. Program 34 is looked at next; it deals with the Allen-Cahn equation

u_t = ε u_xx + u − u³.

This is an important equation, as it is a reaction-diffusion equation that can be related to the equations of the main paper of this report. Here the book also, for the first time, uses Chebyshev matrices as the differentiation matrix in tandem with a time-stepping process, in this case Euler:

u^{(n+1)} = u^{(n)} + Δt (ε u_xx + u − u³).
The last program of interest in the book relates to the 2D “wave tank” equation. This introduces Neumann boundary conditions for the first time, used in the y direction. The wave tank equation is a second-order wave equation,

u_tt = u_xx + u_yy,

with boundary conditions

−1 < y < 1,   −3 < x < 3,   u(±3, y, t) = 0,   u_y(x, ±1, t) = 0.

The initial conditions were chosen just for convenience of the code, but the important part was dealing with the Neumann boundary conditions and employing them within a time-stepping regime.
3 Method with Results
3.1 Intro and Breakdown
The method consists of:
1. learning the code and the book;
2. the heat equation;
3. the actual equations.
3.2 Learning the code
The first 4 weeks of the project involved me learning the techniques suitable for achieving the outcomes set for the project. The important parts were learning the syntax of the code, as all coding languages have their own style, and the spectral methods, which govern how the code can approximate the equations. To learn both, at this point I replicated most of the results in the book, as well as reading each part thoroughly and manipulating the examples so that I could learn how the code reacts to different variables.
The first program looked at was Program 1 in the book, which deals with the convergence of fourth-order finite differences. It looked at the error of the finite difference versus the exact derivative of u = exp(sin(x)). The image below shows the result of the calculations. This was an important first step in the project, as it was the first time I coded in MatLab and allowed me to start to understand the basics.
Figure 3: Fourth order finite differences
Chapter 5 is where the next two programs come from, the first being Program 9, which looks at the maximum error involved in interpolating the function u(x) = 1/(1 + 16x²) using equispaced points and Chebyshev points with N = 16. The results, below, show that the error at the equispaced points increases exponentially, which is otherwise known as the Runge phenomenon.
Figure 4: Program 9: Runge Phenomenon
The other program looked at in this chapter was Program 10. It shows a relationship similar to the one above, but for potentials in the complex plane. It shows that even though the error for the Chebyshev points fluctuates in the middle of the interval, they deal far better with the boundaries, leading to a significantly smaller error than equispaced points.
Figure 5: Program 10: Potential Theory
The next program looked at was from chapter 6 and involved the Chebyshev differentiation matrices, using the Chebyshev polynomials discussed prior. It sets up a function called cheb(N) for use whenever a derivative is needed; the matrix itself depends on the number of Chebyshev points N given by the user. The tables below display the first 4 Chebyshev matrices, with 1 to 4 Chebyshev points.
Figure 6: Chebyshev matrices
Cheb(1)
0.5000 -0.5000
0.5000 -0.5000
Cheb(2)
1.5000 -2.0000 0.5000
0.5000 -0.0000 -0.5000
-0.5000 2.0000 -1.5000
Cheb(3)
3.1667 -4.0000 1.3333 -0.5000
1.0000 -0.3333 -1.0000 0.3333
-0.3333 1.0000 0.3333 -1.0000
0.5000 -1.3333 4.0000 -3.1667
Cheb(4)
5.5000 -6.8284 2.0000 -1.1716 0.50000
1.7071 -0.7071 -1.4142 0.7071 -0.2929
-0.5000 1.4142 -0.0000 -1.4142 0.5000
0.2929 -0.7071 1.4142 0.7071 -1.7071
-0.5000 1.1716 -2.0000 6.8284 -5.5000
These matrices give an approximation to a derivative. To get a second derivative, as will be needed later, the matrix is multiplied by itself to form a D² matrix. To see how the matrices differentiate smooth non-periodic functions, Program 11 was looked at next. It dealt with the function u(x) = e^x sin(5x) and looked at how varying the number of Chebyshev points affected the error in the derivative. The results below are shown for N = 10 and N = 20.
Figure 7: Program 11
From this one can see the pattern: the more points, the more accurate the derivative of u is. Beyond about 30 points the error in u′ reaches order 10^{−14}, which is machine precision, so no matter how many more points are added the approximation can get no better using these matrices. This program was modified into a function where the number of Chebyshev points could be changed just by ‘calling’ the function with the desired N, for example program11(100) for N = 100. This was an important step in the process, as the programs were now becoming more self-contained and, if needed, could be used in later code, decreasing errors when writing out the code. The program was also modified to get the errors for a function of the form u(x) = exp(−(x − x_0)²), as when replicating the paper results there were a lot of functions of this type. The results will be discussed later.
The last program from chapter 6 dealt with the accuracy of Chebyshev
spectral differentiation. This was Program 12 and used four functions to display
how the errors for each evolved with N, the results of which are situated below.
Figure 8: Program 12
We now look at chapter 7 and the problem of solving differential equations using these matrices. As laid out in the literature review, the first differential equation solved using Dirichlet boundary conditions was

u_xx = e^{4x},   −1 < x < 1,   u(±1) = 0.

This was solved using Program 13, setting up the D² matrix with zeroes on the outside, then comparing with the exact solution to produce a maximum error and a plot of the result.
Figure 9: Program 13
Also solved was the nonlinear equation u_xx = e^u, u(−1) = u(1) = 0. This was solved using Program 14, which used a similar technique combined with a simple fixed-point iteration.
Figure 10: Program 14
Programs 15 and 16 dealt with an eigenvalue equation and a 2D example respectively, but were not ultimately used, as the equations from the Balmforth et al. paper did not involve these types of equations or were not reached within the time limit of the project.
The last three programs that were used were Programs 32, 33 and 37. Program 34 was also an important one, but will be used in the next section. These programs all had the purpose of solving a differential equation with different boundary conditions. Programs 32 and 33 both solved u_xx = e^{4x}, but 32 had u(1) = 1 whereas 33 had u′(−1) = 0, with the other boundary in both cases being zero. Program 33 was especially important, as it was the first time the book dealt with Neumann conditions via Chebyshev matrices. This was done by changing the last row of D² to the corresponding row of D and requiring the product with u to be zero there, analogous to the Dirichlet example.
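The row-replacement trick can be sketched in NumPy for u_xx = e^{4x} with u(1) = 0 and u′(−1) = 0 (a Python stand-in for Program 33; the exact solution used for the check is obtained by integrating twice and applying both boundary conditions):

```python
import numpy as np

def cheb(N):  # Chebyshev differentiation matrix (as in chapter 6)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

N = 20
D, x = cheb(N)                    # x runs from +1 (index 0) down to -1 (index N)
A = D @ D
b = np.exp(4 * x)
A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0   # Dirichlet row: u(1) = 0
A[-1, :] = D[-1, :]; b[-1] = 0.0           # Neumann row: u'(-1) = 0 (row of D, not D^2)
u = np.linalg.solve(A, b)
# exact solution with u(1) = 0 and u'(-1) = 0
u_exact = (np.exp(4 * x) / 16 - np.exp(-4) * x / 4
           - np.exp(4) / 16 + np.exp(-4) / 4)
max_error = np.max(np.abs(u - u_exact))
```

The only change from the Dirichlet case is that the boundary row enforces a derivative value rather than a function value.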
Figure 11: Programs 32 and 33
Program 37 dealt with the same sort of conditions, but with the example being in two dimensions it was able to use Neumann boundary conditions for y on both boundaries, as well as using the time-stepping of the prior chapters.
3.3 The Heat Equation
Once I had a sense of how the code worked and how to deal with differential equations using Chebyshev matrices, we moved on to more complicated equations that could be used as a stepping stone towards the main paper equations. The equation used was the 1D heat equation. This is because the g and e equations of the dimensionless model are basically two coupled heat (diffusion) equations, as shown below,

g_t = (F(e, g, l))_zz,   e_t = G(e, l, e_z) e_zz + H(e, l, g) + L(e, l),

where F, G, H and L represent functions related to the paper equations. As can be seen, the first equation has the same format as a heat equation but with a function inside the derivative terms, whereas the second has the derivative multiplied by a function and source terms added; ultimately both are very similar to code as the heat equation. This is where the motivation behind using this type of equation came from.
To solve the heat equation, we first looked at

u_t = u_xx

with Dirichlet boundary conditions u(±1, t) = 0 and the initial condition u(x, 0) = sin(πx). We then compared this with the exact solution u_exact = sin(πx) e^{−π²t}. For two weeks I tried to write my own code for this with limited results: when a time-stepping scheme was used, in this case Euler or Adams-Bashforth, it would become very unstable. I was, however, able to code a heat equation with small diffusivity, i.e.

u_t = (0.01) u_xx.

The results were then compared to the exact solution u_exact = sin(πx) e^{−0.01π²t}. The results below show the Euler and Adams-Bashforth time-stepping schemes with a comparison to the exact solution.
From here the decay rate of the heat equation, λ = 0.01π² for the exact case, can be calculated. This is done by plotting ln(⟨u²⟩^{1/2}) versus time. This was done for the heat equation with small diffusivity as above, with both the Euler and Adams-Bashforth time progression techniques; the results are shown below. When coding, θ instead of u was used in certain programs, and this is where the discrepancy arises.
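The decay-rate extraction can be illustrated with the exact solution itself: sampling u = sin(πx) e^{−0.01π²t} and fitting the slope of ln(⟨u²⟩^{1/2}) against t recovers λ = 0.01π² (a NumPy sketch, not the project's code; the grids here are arbitrary choices):

```python
import numpy as np

# sample the exact solution u = sin(pi x) exp(-0.01 pi^2 t) on a space-time grid
x = np.linspace(-1.0, 1.0, 201)
t = np.linspace(0.0, 50.0, 101)
lam = 0.01 * np.pi ** 2
rms = [np.sqrt(np.mean((np.sin(np.pi * x) * np.exp(-lam * s)) ** 2)) for s in t]
# the slope of ln(rms) versus t is minus the decay rate
slope = np.polyfit(t, np.log(rms), 1)[0]
```

Applied to a numerically computed u instead of the exact one, the same fit measures how well the time-stepping scheme reproduces the decay.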
Figure 13: Exact solution for the heat equation with small diffusivity
Figure 14: Solved solutions with Adams-Bashforth and Euler time-stepping
Figure 15: Decay rates for Adams-Bashforth and Euler time-stepping
Note: Adams-Bashforth is on the top with Euler being on the bottom.
Figure 16: Error in the solved solution compared to the exact for Adams-Bashforth and Euler time-stepping
As in a later example, the error in the heat equation using the Euler time-stepping method increases in this graph. This is a problem, as the error should decrease after a certain amount of time (the time varies for each example). This could be down to a number of factors, but the main one is that the time-stepping scheme is unstable in nature, leading to an increase in the error with each step.
From here a different method was needed, as the program written was simply not capable of solving the heat equation for anything more than a diffusivity of 0.03. This was solved by adjusting Program 34. Initially it was designed to solve the Allen-Cahn equation u_t = ε u_xx + u − u³. The code could be manipulated by removing the two terms at the end, with ε changed to equal one, to recover the heat equation. This was the first time in this project that the heat equation was solved using the Chebyshev matrices, bringing it a step closer to solving (1)-(3).
From here the equation was solved for different initial conditions, one example being u(x, 0) = cos(πx). We then tried to apply Neumann boundary conditions on either boundary, as well as on both. This is where Program 34 became unusable, as the code didn't allow for the use of the Neumann condition u′(1, t) = 0, a problem that has still not been fixed. On looking for further examples I came across an online discussion by Lloyd Trefethen explaining a similar way of applying the Neumann conditions: after each time-step, the boundary entry of the vector u(x, t) is reset so that the condition is exactly satisfied, preventing the time-stepping from destabilizing at the boundary. This allowed for results like those below.
The first result was for the boundary conditions u′(−1, t) = u(1, t) = 0 and initial condition u(x, 0) = cos(π(x + 1)/4). All of these results used the Adams-Bashforth time-stepping scheme, due to the instability of Euler and leap frog for these equations. Time-stepping and how it could be modified is discussed in the chapter on future thinking.
Figure 17: Exact solution
Figure 19: Error for Neumann on the left boundary and Dirichlet on the right
boundary
The error in this also shows the instability mentioned in the previous example. The next solution of the heat equation looks at the boundary conditions u′(1, t) = u(−1, t) = 0 and initial condition u(x, 0) = sin(3π(x + 1)/4).
Figure 20: Exact solution for Neumann on the left boundary and Dirichlet on the other
Figure 21: Solved solution for Neumann on the left boundary and Dirichlet on
the right
Figure 22: Error in the solved solution for Neumann on the left boundary and
Dirichlet on the right
This led on to the example of the heat equation with boundary conditions u′(1, t) = u′(−1, t) = 0 and initial condition u(x, 0) = cos(πx). This was a crucial part, as the boundary conditions for e have Neumann at both ends, and solving this allowed equations (1)-(3) to be tackled.
Figure 23: Exact Solution for Neumann at both ends
Figure 24: Solved solution for Neumann conditions
Figure 25: Error in the solved equation for Neumann conditions
To see whether this code was more accurate, Dirichlet boundary conditions were applied to this program and a diffusivity introduced, reproducing the setup of my earlier coded program seen previously. The errors of the two programs on the same equation were then compared, with the results below.
Figure 26: Comparison of the two different programs to solve the same equation
The time-step interval, dt = 0.0001, was the same for both. As can be seen from the results, the adjusted code is significantly more accurate than the first code written to solve the equation. At this point there was a firm enough footing, the results working for a very similar equation, and so (1)-(3) were then tackled.
3.4 The Equations of Balmforth, et al.
For the last few weeks of the project we attempted to code the equations of the paper being researched. This proved tricky at times, as each run of the code could take up to 4 hours depending on what was being outputted, but we did get some results after trying two different codes.
The results looked at g for different points in time, trying to replicate the diagram of figure 6 in the paper. The time step used was dt = 0.001 and the number of Chebyshev points was N = 200. This allowed for a good degree of accuracy, as the results show.
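A skeletal NumPy version of the integration just described (the project's actual code was MatLab, run with N = 200 for long times; here N = 32 and a few tiny Euler steps only sketch the structure; the no-flux resets on both fields, the mapping of [−1, 1] onto z ∈ [0, 1], and the helper names are assumptions of this sketch):

```python
import numpy as np

def cheb(N):  # Chebyshev differentiation matrix (as in chapter 6)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

def enforce_no_flux(u, D):
    """Reset both boundary entries of u so that u_z = 0 at each end."""
    A = np.array([[D[0, 0], D[0, -1]], [D[-1, 0], D[-1, -1]]])
    b = -np.array([D[0, 1:-1] @ u[1:-1], D[-1, 1:-1] @ u[1:-1]])
    u[0], u[-1] = np.linalg.solve(A, b)
    return u

N = 32
D, xc = cheb(N)
z = (xc + 1.0) / 2.0              # map [-1, 1] onto z in [0, 1] (depth H = 1)
D = 2.0 * D                       # chain rule for the mapping
D2 = D @ D

gi, ei, beta, eps = 0.0218, 0.0994, 1.0, 1.0 / 50   # values quoted from the paper
g = gi * (1.0 - np.cosh(20.0 * (z - 0.5)) / np.cosh(10.0))
e = ei * np.ones(N + 1)

dt = 1e-6                         # tiny explicit Euler step, illustration only
for _ in range(200):
    l = np.sqrt(e) / np.sqrt(e + g)          # mixing length, equation (3)
    flux = l * np.sqrt(e)
    g_new = g + dt * (D2 @ (flux * g))                        # equation (1)
    e_new = e + dt * (beta * (D @ (flux * (D @ e)))           # equation (2)
                      - flux * g
                      + eps * (1.0 - e) * np.sqrt(e) / l)
    g = enforce_no_flux(g_new, D)
    e = enforce_no_flux(e_new, D)
```

Explicit Euler is used only to keep the sketch short; for production runs an implicit or higher-order scheme and much longer integration times would be needed.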
Figure 27: Figure 6 from the Balmforth, et al. paper
There was enough time to look at the result of changing β, the constant in the equations controlling the relationship between the buoyancy and the energy density. We tried β = 0.5 and β = 1, with the respective results below.
Figure 28: β = 1 and β = 0.5, respectively
When comparing the results of g for the paper and the project, there is quite a difference in the peaks, with a lot more disturbances in the project results than in the smoother results of the paper. This is assumed to be down to the time-stepping scheme not being accurate enough, giving the area in the middle spikes instead of being nice and flat as in Figure 25.
Comparing the two different β results, it is quite clear that with a smaller β the peaks of the graph are a lot higher. This implies that the relationship between β and g is an inverse one.
Ultimately the time constraint was too tight to be able to get any more results.
4 Future thinking
Looking forward, the paper results are looking promising. If possible, I think there could be a way of comparing the paper results with the project results and seeing the error in the values. The data necessary for plots 6 and 7 in the paper is also there; it just needs to be coded in to get something that looks similar. Another point of interest could be changing the initial conditions to see how this affects the resulting buoyancy profiles. The last point, and probably the most important, would be the use of a different time-stepping scheme, as most of the simpler types used in this project ended with instability of the solution. One suggestion could be the use of an implicit time-stepping scheme.
5 Conclusions
Looking back, the main achievements of this project were learning how to code in MatLab, a very important skill; understanding how to go from a different style of code to one that was very useful for solving the heat equation; and culminating in getting some results for the main paper. These results did show that Chebyshev matrices are able to handle equations of this form, and they give a different method of looking at differential equations in the future. If I were to do anything differently, I would like to have got onto replicating the results earlier, so that I could have looked into some different parts of the paper, but there just was not enough time.
6 Bibliography
[1] Balmforth, N.J., Llewellyn Smith, S.G. and Young, W.R. 1998. Dynamics of interfaces and layers in a stratified turbulent fluid. J. Fluid Mech. 355, pp. 329-358.
[2] Keast, P. and Muir, P. 1991. Algorithm 688; EPDCOL: a more efficient PDECOL code. ACM Transactions on Mathematical Software 17(2), pp. 153-166.
[3] Trefethen, L.N. 2000. Spectral Methods in MATLAB. Philadelphia, PA: Society for Industrial and Applied Mathematics.