Scientific Computation and
Differential Equations
John Butcher
The University of Auckland
New Zealand
Annual Fundamental Sciences Seminar
June 2006
Institut Kajian Sains Fundamental Ibnu Sina
Overview
This year marks the fiftieth anniversary of the first
computer in the Southern Hemisphere – the SILLIAC
computer at the University of Sydney.
I had the privilege of being present when this computer
did its first calculation and my own first program ran on
this computer soon afterwards.
This computer was built for one reason only – to solve
scientific problems.
Scientific problems have always been the driving force in
the development of faster and faster computers.
Within Scientific Computation, the approximate solution
of differential equations has always been an area of
special challenge.
Differential equations usually cannot be solved
analytically, and numerical methods are necessary.
Two main types of numerical methods exist: linear
multistep methods and Runge–Kutta methods.
These traditional methods are special cases within the
larger class of “General Linear Methods”.
Today we will look briefly at the history of numerical
methods for differential equations.
We will then look at some particular questions
concerning the theory of general linear methods.
We will also look at some aspects of their practical
implementation.
Contents
A short history of numerical differential equations
Linear multistep methods
Runge–Kutta methods
General linear methods
Examples of general linear methods
Order of GLMs
Methods with the IRK stability property
Implementation questions for IRKS methods
A short history of numerical ODEs
We will make use of three standard types of initial value problems:

$$y'(x) = f(x, y(x)), \qquad y(x_0) = y_0 \in \mathbb{R}, \tag{1}$$

$$y'(x) = f(x, y(x)), \qquad y(x_0) = y_0 \in \mathbb{R}^N, \tag{2}$$

$$y'(x) = f(y(x)), \qquad y(x_0) = y_0 \in \mathbb{R}^N. \tag{3}$$

Problem (1) is used in traditional descriptions of numerical methods, but in applications we need to use either (2) or (3).
These are actually equivalent, and we will often use (3) instead of (2) because of its simplicity.
The Euler method
Euler proposed a simple numerical scheme in approximately 1770; this can be used for a system of first order equations. The idea is to treat the solution as though it had constant derivative in each time step.

[Figure: starting from y_0 at x_0, each step over the points x_0, x_1, x_2, x_3, x_4 follows the current derivative direction f(y_{n-1}):
y_1 = y_0 + hf(y_0), y_2 = y_1 + hf(y_1), y_3 = y_2 + hf(y_2), y_4 = y_3 + hf(y_3).]
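As an illustration of this idea, here is a minimal Python sketch of the Euler method for the autonomous problem y'(x) = f(y(x)); the function names and the test problem are our own choices, not part of the original presentation.

```python
import numpy as np

def euler(f, y0, x0, h, n_steps):
    """Integrate y'(x) = f(y(x)) by the Euler method: each step treats
    the solution as if it had the constant derivative f(y_{n-1}),
    giving y_n = y_{n-1} + h*f(y_{n-1})."""
    x, y = x0, np.asarray(y0, dtype=float)
    xs, ys = [x], [y]
    for _ in range(n_steps):
        y = y + h * f(y)
        x = x + h
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

# Test problem: y' = -y, y(0) = 1, with exact solution exp(-x).
xs, ys = euler(lambda y: -y, [1.0], 0.0, 0.1, 10)
print(ys[-1, 0], np.exp(-1.0))  # Euler approximation vs exact value at x = 1
```

Halving h roughly halves the error at a fixed point x, reflecting the fact that the Euler method has order 1.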
More modern methods attempt to improve on the Euler
method by
1. Using more past history
– Linear multistep methods
2. Doing more complicated calculations in each step
– Runge–Kutta methods
3. Doing both of these
Scientific Computation and Differential Equations – p. 7/36
More modern methods attempt to improve on the Euler
method by
1. Using more past history
– Linear multistep methods
2. Doing more complicated calculations in each step
– Runge–Kutta methods
3. Doing both of these
– General linear methods
Some important dates

1883  Adams & Bashforth        Linear multistep methods
1895  Runge                    Runge–Kutta methods
1901  Kutta
1925  Nyström                  Special methods for second order
1926  Moulton                  Adams–Moulton method
1952  Curtiss & Hirschfelder   Stiff problems
Linear multistep methods
We will write the differential equation in autonomous form

$$y'(x) = f(y(x)), \qquad y(x_0) = y_0,$$

and the aim, for the moment, will be to calculate approximations to y(x_i), where

$$x_i = x_0 + hi, \qquad i = 1, 2, 3, \ldots,$$

and h is the "stepsize".
Linear multistep methods base the approximation to y(x_n) on a linear combination of approximations to y(x_{n-i}) and approximations to y'(x_{n-i}), i = 1, 2, ..., k.
Write y_i as the approximation to y(x_i) and f_i as the approximation to y'(x_i) = f(y(x_i)).
A linear multistep method can be written as

$$y_n = \sum_{i=1}^{k} \alpha_i y_{n-i} + h \sum_{i=0}^{k} \beta_i f_{n-i}.$$

This is a 1-stage, 2k-value method.
1 stage? One evaluation of f per step.
2k values? This many quantities are passed between steps.
β_0 = 0: explicit. β_0 ≠ 0: implicit.
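To make the formula concrete, here is a sketch of one explicit instance, the 2-step Adams–Bashforth method (k = 2, α_1 = 1, α_2 = 0, β_0 = 0, β_1 = 3/2, β_2 = −1/2); the function and variable names are our own.

```python
def ab2_step(f, y1, f1, f2, h):
    """One step of the 2-step Adams-Bashforth method,
        y_n = y_{n-1} + h*(3/2 f_{n-1} - 1/2 f_{n-2}),
    an explicit (beta_0 = 0) linear multistep method of order 2.
    Arguments: y1 = y_{n-1}, f1 = f_{n-1}, f2 = f_{n-2}."""
    y_n = y1 + h * (1.5 * f1 - 0.5 * f2)
    return y_n, f(y_n)  # the new value and its derivative f_n
```

The pair (y_n, f_n), together with the retained (y_{n-1}, f_{n-1}), makes up the 2k = 4 quantities passed on to the next step.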
Runge–Kutta methods
A Runge–Kutta method computes y_n in terms of a single input y_{n-1} and s stages Y_1, Y_2, ..., Y_s, where

$$Y_i = y_{n-1} + h \sum_{j=1}^{s} a_{ij} f(Y_j), \qquad i = 1, 2, \ldots, s,$$

$$y_n = y_{n-1} + h \sum_{i=1}^{s} b_i f(Y_i).$$

This is an s-stage, 1-value method.
It is natural to ask if there are useful methods which are multistage (as for Runge–Kutta methods) and multivalue (as for linear multistep methods).
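Here is a minimal Python sketch (ours, not from the slides) of one step of an explicit Runge–Kutta method driven by its coefficients (a_{ij}) and (b_i); the classical fourth-order tableau is given as an example.

```python
def explicit_rk_step(f, y, h, a, b):
    """One Runge-Kutta step:
        Y_i = y_{n-1} + h * sum_j a_ij f(Y_j),
        y_n = y_{n-1} + h * sum_i b_i f(Y_i).
    Assumes a is strictly lower triangular (an explicit method), so
    each stage uses only previously computed stage derivatives."""
    F = []
    for i in range(len(b)):
        Yi = y + h * sum(a[i][j] * F[j] for j in range(i))
        F.append(f(Yi))
    return y + h * sum(b[i] * F[i] for i in range(len(b)))

# The classical fourth-order method as an example tableau:
a = [[0, 0, 0, 0], [1/2, 0, 0, 0], [0, 1/2, 0, 0], [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
```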
In other words, we ask if there is any value in completing this diagram:

[Diagram: Euler at the bottom, generalized in two directions to Runge-Kutta and Linear Multistep methods, with General Linear Methods completing the square at the top.]
General linear methods
We will consider methods characterised by an (s + r) × (s + r) partitioned matrix of the form

$$\begin{bmatrix} A & U \\ B & V \end{bmatrix},$$

where A is s × s, U is s × r, B is r × s and V is r × r.
The r values input to step n will be denoted by y_i^{[n-1]}, i = 1, 2, ..., r, with corresponding output values y_i^{[n]}, and the stage values by Y_i, i = 1, 2, ..., s.
The stage derivatives will be denoted by F_i = f(Y_i).
The formulas for computing the stages (and simultaneously the stage derivatives) are

$$Y_i = h \sum_{j=1}^{s} a_{ij} F_j + \sum_{j=1}^{r} u_{ij} y_j^{[n-1]}, \qquad F_i = f(Y_i),$$

for i = 1, 2, ..., s.
To compute the output values, use the formula

$$y_i^{[n]} = h \sum_{j=1}^{s} b_{ij} F_j + \sum_{j=1}^{r} v_{ij} y_j^{[n-1]}, \qquad i = 1, 2, \ldots, r.$$
For convenience, write

$$y^{[n-1]} = \begin{bmatrix} y_1^{[n-1]} \\ y_2^{[n-1]} \\ \vdots \\ y_r^{[n-1]} \end{bmatrix}, \quad
y^{[n]} = \begin{bmatrix} y_1^{[n]} \\ y_2^{[n]} \\ \vdots \\ y_r^{[n]} \end{bmatrix}, \quad
Y = \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_s \end{bmatrix}, \quad
F = \begin{bmatrix} F_1 \\ F_2 \\ \vdots \\ F_s \end{bmatrix},$$

so that we can write the calculations in a step more simply as

$$\begin{bmatrix} Y \\ y^{[n]} \end{bmatrix} = \begin{bmatrix} A & U \\ B & V \end{bmatrix} \begin{bmatrix} hF \\ y^{[n-1]} \end{bmatrix}.$$
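For an explicit method (A strictly lower triangular) this compact form translates directly into code. The following sketch, with our own names and for a scalar problem, performs one step.

```python
import numpy as np

def glm_step(f, y_in, h, A, U, B, V):
    """One step of a general linear method:
        [ Y ; y_out ] = [A U; B V] [ hF ; y_in ].
    Assumes A is strictly lower triangular (an explicit method),
    so the s stages can be evaluated in sequence.  y_in is the
    r-vector of input quantities for a scalar problem y' = f(y)."""
    s = A.shape[0]
    Y = np.zeros(s)
    F = np.zeros(s)
    for i in range(s):
        Y[i] = h * (A[i, :i] @ F[:i]) + U[i] @ y_in
        F[i] = f(Y[i])
    y_out = h * (B @ F) + V @ y_in
    return y_out
```

For implicit methods the stage equations must instead be solved, typically by a Newton iteration; for vector problems the same formulas apply componentwise.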
Examples of general linear methods
We will look at five examples
A Runge–Kutta method
A “re-use” method
An Almost Runge–Kutta method
An Adams-Bashforth/Adams-Moulton method
A modified linear multistep method
A Runge–Kutta method
One of the famous families of fourth order methods of Kutta, written as a general linear method, is

$$\begin{bmatrix} A & U \\ B & V \end{bmatrix} =
\left[\begin{array}{cccc|c}
0 & 0 & 0 & 0 & 1 \\
\theta & 0 & 0 & 0 & 1 \\
\frac12 - \frac{1}{8\theta} & \frac{1}{8\theta} & 0 & 0 & 1 \\
\frac{1}{2\theta} - 1 & -\frac{1}{2\theta} & 2 & 0 & 1 \\ \hline
\frac16 & 0 & \frac23 & \frac16 & 1
\end{array}\right].$$

In a step from x_{n-1} to x_n = x_{n-1} + h, the stages give approximations at x_{n-1}, x_{n-1} + θh, x_{n-1} + ½h and x_{n-1} + h.
We will look at the special case θ = −½.
In the special case θ = −½,

$$\begin{bmatrix} A & U \\ B & V \end{bmatrix} =
\left[\begin{array}{cccc|c}
0 & 0 & 0 & 0 & 1 \\
-\frac12 & 0 & 0 & 0 & 1 \\
\frac34 & -\frac14 & 0 & 0 & 1 \\
-2 & 1 & 2 & 0 & 1 \\ \hline
\frac16 & 0 & \frac23 & \frac16 & 1
\end{array}\right].$$

Because the derivative at x_{n-1} + θh = x_{n-1} − ½h = x_{n-2} + ½h was evaluated in the previous step, we can try re-using this value.
This will save one function evaluation.
A ‘re-use’ method
This gives the re-use method

$$\begin{bmatrix} A & U \\ B & V \end{bmatrix} =
\left[\begin{array}{ccc|cc}
0 & 0 & 0 & 1 & 0 \\
\frac34 & 0 & 0 & 1 & -\frac14 \\
-2 & 2 & 0 & 1 & 1 \\ \hline
\frac16 & \frac23 & \frac16 & 1 & 0 \\
0 & 1 & 0 & 0 & 0
\end{array}\right].$$

Why should this method not be preferred to a standard Runge–Kutta method?
There are at least two reasons:
Stepsize change is complicated and difficult
The stability region is smaller
To overcome these difficulties, we can do several things:
Restore the missing stage,
Move the first derivative calculation to the end of the
previous step,
Use a linear combination of the derivatives
computed in the previous step (instead of just one of
these),
Re-organize the data passed between steps.
We then get methods like the following:
An ARK method

$$\left[\begin{array}{cccc|ccc}
0 & 0 & 0 & 0 & 1 & 1 & \frac12 \\
\frac{1}{16} & 0 & 0 & 0 & 1 & \frac{7}{16} & \frac{1}{16} \\
-\frac14 & 2 & 0 & 0 & 1 & -\frac34 & -\frac14 \\
0 & \frac23 & \frac16 & 0 & 1 & \frac16 & 0 \\ \hline
0 & \frac23 & \frac16 & 0 & 1 & \frac16 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
-\frac13 & 0 & -\frac23 & 2 & 0 & -1 & 0
\end{array}\right],$$

where

$$y_1^{[n]} \approx y(x_n), \qquad y_2^{[n]} \approx h y'(x_n), \qquad y_3^{[n]} \approx h^2 y''(x_n),$$

with

$$Y_1 \approx Y_3 \approx Y_4 \approx y(x_n), \qquad Y_2 \approx y(x_{n-1} + \tfrac12 h).$$
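Reading the tableau off into the (A, U, B, V) blocks used by the glm_step sketch above gives the following (our transcription; the variable names are ours).

```python
import numpy as np

# The ARK tableau above as GLM blocks (s = 4 stages, r = 3 values).
A_ark = np.array([[0,    0,   0,   0],
                  [1/16, 0,   0,   0],
                  [-1/4, 2,   0,   0],
                  [0,    2/3, 1/6, 0]])
U_ark = np.array([[1, 1,    1/2],
                  [1, 7/16, 1/16],
                  [1, -3/4, -1/4],
                  [1, 1/6,  0]])
B_ark = np.array([[0,    2/3, 1/6,  0],
                  [0,    0,   0,    1],
                  [-1/3, 0,   -2/3, 2]])
V_ark = np.array([[1, 1/6, 0],
                  [0, 0,   0],
                  [0, -1,  0]])
# The input vector approximates [y(x_{n-1}), h y'(x_{n-1}), h^2 y''(x_{n-1})].
```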
The good things about this “Almost Runge–Kutta
method” are:
It has the same stability region as for a genuine
Runge–Kutta method
Unlike standard Runge–Kutta methods, the stage
order is 2.
This means that the stage values are computed to the
same accuracy as an order 2 Runge-Kutta method.
Although it is a multi-value method, both starting
the method and changing stepsize are essentially
cost-free operations.
An Adams-Bashforth/Adams-Moulton method
It is usual practice to combine Adams–Bashforth and Adams–Moulton methods as a predictor–corrector pair.
For example, the ‘PECE’ method of order 3 computes a predictor y*_n and a corrector y_n by the formulae

$$y_n^{*} = y_{n-1} + h\left( \tfrac{23}{12} f(y_{n-1}) - \tfrac{4}{3} f(y_{n-2}) + \tfrac{5}{12} f(y_{n-3}) \right),$$

$$y_n = y_{n-1} + h\left( \tfrac{5}{12} f(y_n^{*}) + \tfrac{2}{3} f(y_{n-1}) - \tfrac{1}{12} f(y_{n-2}) \right).$$

It might be asked: Is it possible to obtain improved order by using values of y_{n-2}, y_{n-3} in the formulae?
The answer is that not much can be gained because we are limited by the famous ‘Dahlquist barrier’.
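A direct transcription of this PECE pair (our names; the derivative history is carried along explicitly):

```python
def pece_step(f, y1, f1, f2, f3, h):
    """One PECE step of the order-3 pair above.
    y1 = y_{n-1}; f1, f2, f3 = f(y_{n-1}), f(y_{n-2}), f(y_{n-3})."""
    # P(redict) with 3-step Adams-Bashforth, then E(valuate) the predictor:
    y_star = y1 + h * (23/12 * f1 - 4/3 * f2 + 5/12 * f3)
    f_star = f(y_star)
    # C(orrect) with Adams-Moulton, then E(valuate) the corrector:
    y_n = y1 + h * (5/12 * f_star + 2/3 * f1 - 1/12 * f2)
    return y_n, f(y_n)
```

Each step costs two evaluations of f, one for each ‘E’ in PECE.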
A modified linear multistep method
But what if we allow off-step points?
We can get order 5 if we allow for two predictors, the first giving an approximation to y(x_n − ½h).
This new method, with predictors at the off-step point and also at the end of the step, is

$$y^{*}_{n-\frac12} = y_{n-2} + h\left( \tfrac98 f(y_{n-1}) + \tfrac38 f(y_{n-2}) \right),$$

$$y_n^{*} = \tfrac{28}{5} y_{n-1} - \tfrac{23}{5} y_{n-2}
 + h\left( \tfrac{32}{15} f(y^{*}_{n-\frac12}) - 4 f(y_{n-1}) - \tfrac{26}{15} f(y_{n-2}) \right),$$

$$y_n = \tfrac{32}{31} y_{n-1} - \tfrac{1}{31} y_{n-2}
 + h\left( \tfrac{64}{93} f(y^{*}_{n-\frac12}) + \tfrac{5}{31} f(y_n^{*}) + \tfrac{4}{31} f(y_{n-1}) - \tfrac{1}{93} f(y_{n-2}) \right).$$
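Again as a transcription (our names), one step of this order-5 method reads as follows; in practice f(y_{n-1}) and f(y_{n-2}) would be saved from earlier steps rather than recomputed.

```python
def modified_lmm_step(f, y1, y2, h):
    """One step of the order-5 modified multistep method above;
    y1 = y_{n-1}, y2 = y_{n-2}."""
    f1, f2 = f(y1), f(y2)
    # Predictor at the off-step point x_n - h/2:
    y_half = y2 + h * (9/8 * f1 + 3/8 * f2)
    f_half = f(y_half)
    # Predictor at the end of the step:
    y_star = 28/5 * y1 - 23/5 * y2 + h * (32/15 * f_half - 4 * f1 - 26/15 * f2)
    # Corrector:
    return (32/31 * y1 - 1/31 * y2
            + h * (64/93 * f_half + 5/31 * f(y_star) + 4/31 * f1 - 1/93 * f2))
```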
Order of general linear methods
Classical methods are all built on a plan where we know
in advance what we are trying to approximate.
For an abstract general linear method, the interpretation
of the input and output quantities is quite general.
We want to understand order in a similar general way.
The key ideas are
Use a general starting method to represent the input
to a step.
Require the output to be similarly related to the
starting method applied one time-step later.
The input to a step is an approximation to some vector of quantities related to the exact solution at x_{n-1}.
When the step has been completed, the vectors comprising the output are approximations to the same quantities, but now related to x_n.
If the input is exactly what it is supposed to approximate, then the "local truncation error" is defined as the error in the output after a single step.
If this can be estimated in terms of h^{p+1}, then the method has order p.
We will refer to the calculation which produces y^{[n-1]} from y(x_{n-1}) as a "starting method".
Let S denote the "starting method", that is, a mapping from R^N to R^{rN}, and let F : R^{rN} → R^N denote a corresponding finishing method, such that F ∘ S = id.
The order of accuracy of a multivalue method is defined in terms of the diagram

[Diagram: starting from y(x_{n-1}), applying S and then one step of the method M agrees, to within O(h^{p+1}), with following the exact flow E for one step and then applying S (h = stepsize).]
By duplicating this diagram over many steps, global error estimates can be found.

[Diagram: the one-step diagram repeated across many steps; the accumulated discrepancy between S composed with repeated applications of the exact flow E, and repeated applications of the method M after S, is O(h^p), and applying the finishing method F at the end gives a global error of O(h^p).]
Methods with the IRK Stability property
An important attribute of a numerical method is its "stability matrix" M(z), defined by

$$M(z) = V + zB(I - zA)^{-1}U.$$

This represents the behaviour of the method in the case of linear problems.
That is, for the problem y'(x) = qy(x), we have

$$y^{[n]} = M(z)\, y^{[n-1]}, \qquad \text{where } z = hq.$$

In the special case of a Runge–Kutta method, M(z) is a scalar R(z).
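Numerically, M(z) is easy to evaluate. The following sketch (ours) does so and, as an example, prints its eigenvalues for the ‘re-use’ method given earlier.

```python
import numpy as np

def stability_matrix(A, U, B, V, z):
    """M(z) = V + z*B*(I - z*A)^{-1}*U for a general linear method."""
    s = A.shape[0]
    return V + z * (B @ np.linalg.solve(np.eye(s) - z * A, U))

# The 're-use' method from earlier (s = 3, r = 2):
A_re = np.array([[0, 0, 0], [3/4, 0, 0], [-2, 2, 0]])
U_re = np.array([[1, 0], [1, -1/4], [1, 1]])
B_re = np.array([[1/6, 2/3, 1/6], [0, 1, 0]])
V_re = np.array([[1.0, 0], [0, 0]])
print(np.linalg.eigvals(stability_matrix(A_re, U_re, B_re, V_re, -1.0 + 0j)))
```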
To solve "stiff" problems, we want to use A-stable methods or, even better, L-stable methods.
In the case of Runge–Kutta methods the meanings of these are:
For an A-stable method, |R(z)| ≤ 1 whenever Re z ≤ 0.
An L-stable method is A-stable and, in addition, R(∞) = 0.
A general linear method is said to have "Runge–Kutta stability" if the stability matrix M(z) for the method has a characteristic polynomial of the form

$$\det(wI - M(z)) = w^{r-1}(w - R(z)).$$

This means that the method has exactly the same stability region as a Runge–Kutta method whose stability function is R(z).
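Using the stability_matrix function and the ARK blocks from the sketches above, this property can be checked numerically: all but one eigenvalue of M(z) should vanish, leaving R(z). (A sketch under the assumption that our transcription of the ARK tableau is exact.)

```python
import numpy as np

# Runge-Kutta stability check for the ARK method transcribed earlier:
# det(wI - M(z)) = w^{r-1} (w - R(z)) means the eigenvalues of M(z)
# are R(z) together with r - 1 zeros.
z = -0.7 + 0.3j
M = stability_matrix(A_ark, U_ark, B_ark, V_ark, z)
print(np.linalg.eigvals(M))  # expect two (near-)zero eigenvalues and R(z)
```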
Do methods with RK stability exist?
Yes, it is even possible to construct them with rational
operations by imposing a condition known as “Inherent
RK stability”.
Methods exist for both stiff and non-stiff problems for
arbitrary orders and the only question is how to select the
best methods from the large families that are available.
We will give just two examples.
The following third order method is explicit and suitable for the solution of non-stiff problems:

$$\begin{bmatrix} A & U \\ B & V \end{bmatrix} =
\left[\begin{array}{cccc|cccc}
0 & 0 & 0 & 0 & 1 & \frac14 & \frac{1}{32} & \frac{1}{384} \\
-\frac{176}{1885} & 0 & 0 & 0 & 1 & \frac{2237}{3770} & \frac{2237}{15080} & \frac{2149}{90480} \\
-\frac{335624}{311025} & \frac{29}{55} & 0 & 0 & 1 & \frac{1619591}{1244100} & \frac{260027}{904800} & \frac{1517801}{39811200} \\
-\frac{67843}{6435} & \frac{395}{33} & -5 & 0 & 1 & \frac{29428}{6435} & \frac{527}{585} & \frac{41819}{102960} \\ \hline
-\frac{67843}{6435} & \frac{395}{33} & -5 & 0 & 1 & \frac{29428}{6435} & \frac{527}{585} & \frac{41819}{102960} \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
\frac{82}{33} & -\frac{274}{11} & \frac{170}{9} & -\frac43 & 0 & \frac{482}{99} & 0 & -\frac{161}{264} \\
-8 & -12 & \frac{40}{3} & -2 & 0 & \frac{26}{3} & 0 & 0
\end{array}\right]$$
The following fourth order method is implicit, L-stable, and suitable for the solution of stiff problems:

$$\left[\begin{array}{ccccc|ccccc}
\frac14 & 0 & 0 & 0 & 0 & 1 & \frac34 & \frac12 & \frac14 & 0 \\
-\frac{513}{54272} & \frac14 & 0 & 0 & 0 & 1 & \frac{27649}{54272} & \frac{5601}{27136} & \frac{1539}{54272} & -\frac{459}{6784} \\
\frac{3706119}{69088256} & -\frac{488}{3819} & \frac14 & 0 & 0 & 1 & \frac{15366379}{207264768} & \frac{756057}{34544128} & \frac{1620299}{69088256} & -\frac{4854}{454528} \\
\frac{32161061}{197549232} & -\frac{111814}{232959} & \frac{134}{183} & \frac14 & 0 & 1 & -\frac{32609017}{197549232} & \frac{929753}{32924872} & \frac{4008881}{32924872} & \frac{174981}{3465776} \\
-\frac{135425}{2948496} & -\frac{641}{10431} & \frac{73}{183} & \frac12 & \frac14 & 1 & -\frac{367313}{8845488} & -\frac{22727}{1474248} & \frac{40979}{982832} & \frac{323}{25864} \\ \hline
-\frac{135425}{2948496} & -\frac{641}{10431} & \frac{73}{183} & \frac12 & \frac14 & 1 & -\frac{367313}{8845488} & -\frac{22727}{1474248} & \frac{40979}{982832} & \frac{323}{25864} \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
\frac{2255}{2318} & -\frac{47125}{20862} & \frac{447}{122} & -\frac{11}{4} & \frac43 & 0 & -\frac{28745}{20862} & -\frac{1937}{13908} & \frac{351}{18544} & \frac{65}{976} \\
\frac{12620}{10431} & -\frac{96388}{31293} & \frac{3364}{549} & -\frac{10}{3} & \frac43 & 0 & -\frac{70634}{31293} & -\frac{2050}{10431} & -\frac{187}{2318} & \frac{113}{366} \\
\frac{414}{1159} & -\frac{29954}{31293} & \frac{130}{61} & -1 & \frac13 & 0 & -\frac{27052}{31293} & -\frac{113}{10431} & -\frac{491}{4636} & \frac{161}{732}
\end{array}\right]$$
Implementation questions for IRKS methods
Many implementation questions are similar to those for
traditional methods but there are some new challenges.
We want variable order and stepsize and it is even a
realistic aim to change between stiff and non-stiff
methods automatically.
Because of the variable order and stepsize aims, we wish
to be able to do the following:
Estimate the local truncation error of the current step
Estimate the local truncation error of an alternative
method of higher order
Change the stepsize with little cost and with little
impact on stability
We believe we have solutions to all these problems and
that we can construct methods of quite high orders which
will work well and competitively.
I would like to name, with gratitude and appreciation, my
principal collaborators in this project:
Zdzisław Jackiewicz Arizona State University, Phoenix AZ
Helmut Podhaisky Martin Luther Universität, Halle
Will Wright La Trobe University, Melbourne
I also express my thanks to other colleagues who are
closely associated with this project, especially:
Robert Chan, Allison Heard, Shirley Huang,
Nicolette Rattenbury, Gustaf Söderlind, Angela Tsai.
role of pramana in research.pptx in science
sonaliswain16
 
Seminar of U.V. Spectroscopy by SAMIR PANDA
 Seminar of U.V. Spectroscopy by SAMIR PANDA Seminar of U.V. Spectroscopy by SAMIR PANDA
Seminar of U.V. Spectroscopy by SAMIR PANDA
SAMIR PANDA
 
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
yqqaatn0
 
nodule formation by alisha dewangan.pptx
nodule formation by alisha dewangan.pptxnodule formation by alisha dewangan.pptx
nodule formation by alisha dewangan.pptx
alishadewangan1
 
BLOOD AND BLOOD COMPONENT- introduction to blood physiology
BLOOD AND BLOOD COMPONENT- introduction to blood physiologyBLOOD AND BLOOD COMPONENT- introduction to blood physiology
BLOOD AND BLOOD COMPONENT- introduction to blood physiology
NoelManyise1
 
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...
Ana Luísa Pinho
 
Lateral Ventricles.pdf very easy good diagrams comprehensive
Lateral Ventricles.pdf very easy good diagrams comprehensiveLateral Ventricles.pdf very easy good diagrams comprehensive
Lateral Ventricles.pdf very easy good diagrams comprehensive
silvermistyshot
 
Hemostasis_importance& clinical significance.pptx
Hemostasis_importance& clinical significance.pptxHemostasis_importance& clinical significance.pptx
Hemostasis_importance& clinical significance.pptx
muralinath2
 
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Sérgio Sacani
 
in vitro propagation of plants lecture note.pptx
in vitro propagation of plants lecture note.pptxin vitro propagation of plants lecture note.pptx
in vitro propagation of plants lecture note.pptx
yusufzako14
 

Recently uploaded (20)

Mammalian Pineal Body Structure and Also Functions
Mammalian Pineal Body Structure and Also FunctionsMammalian Pineal Body Structure and Also Functions
Mammalian Pineal Body Structure and Also Functions
 
Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol...
Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol...Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol...
Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol...
 
erythropoiesis-I_mechanism& clinical significance.pptx
erythropoiesis-I_mechanism& clinical significance.pptxerythropoiesis-I_mechanism& clinical significance.pptx
erythropoiesis-I_mechanism& clinical significance.pptx
 
Toxic effects of heavy metals : Lead and Arsenic
Toxic effects of heavy metals : Lead and ArsenicToxic effects of heavy metals : Lead and Arsenic
Toxic effects of heavy metals : Lead and Arsenic
 
DMARDs Pharmacolgy Pharm D 5th Semester.pdf
DMARDs Pharmacolgy Pharm D 5th Semester.pdfDMARDs Pharmacolgy Pharm D 5th Semester.pdf
DMARDs Pharmacolgy Pharm D 5th Semester.pdf
 
GBSN- Microbiology (Lab 3) Gram Staining
GBSN- Microbiology (Lab 3) Gram StainingGBSN- Microbiology (Lab 3) Gram Staining
GBSN- Microbiology (Lab 3) Gram Staining
 
NuGOweek 2024 Ghent - programme - final version
NuGOweek 2024 Ghent - programme - final versionNuGOweek 2024 Ghent - programme - final version
NuGOweek 2024 Ghent - programme - final version
 
Orion Air Quality Monitoring Systems - CWS
Orion Air Quality Monitoring Systems - CWSOrion Air Quality Monitoring Systems - CWS
Orion Air Quality Monitoring Systems - CWS
 
Deep Software Variability and Frictionless Reproducibility
Deep Software Variability and Frictionless ReproducibilityDeep Software Variability and Frictionless Reproducibility
Deep Software Variability and Frictionless Reproducibility
 
GBSN - Microbiology (Lab 4) Culture Media
GBSN - Microbiology (Lab 4) Culture MediaGBSN - Microbiology (Lab 4) Culture Media
GBSN - Microbiology (Lab 4) Culture Media
 
role of pramana in research.pptx in science
role of pramana in research.pptx in sciencerole of pramana in research.pptx in science
role of pramana in research.pptx in science
 
Seminar of U.V. Spectroscopy by SAMIR PANDA
 Seminar of U.V. Spectroscopy by SAMIR PANDA Seminar of U.V. Spectroscopy by SAMIR PANDA
Seminar of U.V. Spectroscopy by SAMIR PANDA
 
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
 
nodule formation by alisha dewangan.pptx
nodule formation by alisha dewangan.pptxnodule formation by alisha dewangan.pptx
nodule formation by alisha dewangan.pptx
 
BLOOD AND BLOOD COMPONENT- introduction to blood physiology
BLOOD AND BLOOD COMPONENT- introduction to blood physiologyBLOOD AND BLOOD COMPONENT- introduction to blood physiology
BLOOD AND BLOOD COMPONENT- introduction to blood physiology
 
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...
 
Lateral Ventricles.pdf very easy good diagrams comprehensive
Lateral Ventricles.pdf very easy good diagrams comprehensiveLateral Ventricles.pdf very easy good diagrams comprehensive
Lateral Ventricles.pdf very easy good diagrams comprehensive
 
Hemostasis_importance& clinical significance.pptx
Hemostasis_importance& clinical significance.pptxHemostasis_importance& clinical significance.pptx
Hemostasis_importance& clinical significance.pptx
 
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
 
in vitro propagation of plants lecture note.pptx
in vitro propagation of plants lecture note.pptxin vitro propagation of plants lecture note.pptx
in vitro propagation of plants lecture note.pptx
 

Scientific Computation and Differential Equations

  • 1. Scientific Computation and Differential Equations. John Butcher, The University of Auckland, New Zealand. Annual Fundamental Sciences Seminar, June 2006, Institut Kajian Sains Fundamental Ibnu Sina.
  • 2. Overview. This year marks the fiftieth anniversary of the first computer in the Southern Hemisphere – the SILLIAC computer at the University of Sydney. I had the privilege of being present when this computer did its first calculation, and my own first program ran on this computer soon afterwards. This computer was built for one reason only – to solve scientific problems. Scientific problems have always been the driving force in the development of faster and faster computers. Within Scientific Computation, the approximate solution of differential equations has always been an area of special challenge.
  • 3. Differential equations usually cannot be solved analytically, and numerical methods are necessary. Two main types of numerical methods exist: linear multistep methods and Runge–Kutta methods. These traditional methods are special cases within the larger class of “General Linear Methods”. Today we will look briefly at the history of numerical methods for differential equations. We will then look at some particular questions concerning the theory of general linear methods. We will also look at some aspects of their practical implementation.
  • 4. Contents:
    A short history of numerical differential equations
    Linear multistep methods
    Runge–Kutta methods
    General linear methods
    Examples of general linear methods
    Order of GLMs
    Methods with the IRK stability property
    Implementation questions for IRKS methods
  • 5. A short history of numerical ODEs. We will make use of three standard types of initial value problems:
    y'(x) = f(x, y(x)),  y(x_0) = y_0 ∈ R,      (1)
    y'(x) = f(x, y(x)),  y(x_0) = y_0 ∈ R^N,    (2)
    y'(x) = f(y(x)),     y(x_0) = y_0 ∈ R^N.    (3)
Problem (1) is used in traditional descriptions of numerical methods, but in applications we need to use either (2) or (3). These are actually equivalent, and we will often use (3) instead of (2) because of its simplicity.
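As a concrete illustration of why (2) and (3) are equivalent: any non-autonomous system y' = f(x, y) can be made autonomous by appending x itself as an extra solution component with derivative 1. A minimal Python sketch (the function names and the test problem are illustrative assumptions, not from the talk):

```python
import numpy as np

def autonomize(f):
    """Turn y' = f(x, y) (form (2)) into Y' = F(Y) (form (3)),
    where Y = (y_1, ..., y_N, x); the last component satisfies x' = 1."""
    def F(Y):
        y, x = Y[:-1], Y[-1]
        return np.append(f(x, y), 1.0)
    return F

# Example: y' = x*y becomes a 2-component autonomous system.
F = autonomize(lambda x, y: x * y)
print(F(np.array([2.0, 0.5])))  # derivative of (y, x) at y = 2, x = 0.5
```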
  • 6. The Euler method. Euler proposed a simple numerical scheme in approximately 1770; this can be used for a system of first order equations. The idea is to treat the solution as though it had constant derivative in each time step.
[Figure: Euler steps on the grid x_0, x_1, x_2, x_3, x_4, starting from y_0 and following the slope f at each computed point: y_1 = y_0 + h f(y_0), y_2 = y_1 + h f(y_1), y_3 = y_2 + h f(y_2), y_4 = y_3 + h f(y_3).]
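A minimal Python sketch of the Euler method for the autonomous form y' = f(y); the test problem y' = y is an assumption chosen so the answer can be checked against e^x:

```python
import numpy as np

def euler(f, y0, h, steps):
    """Euler's method: advance y_{k+1} = y_k + h*f(y_k)."""
    y = np.atleast_1d(np.asarray(y0, dtype=float))
    for _ in range(steps):
        y = y + h * f(y)
    return y

# y' = y, y(0) = 1, exact solution e^x.
print(euler(lambda y: y, 1.0, 0.01, 100), np.exp(1.0))  # ~2.7048 vs 2.7183
```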
  • 7. More modern methods attempt to improve on the Euler method by:
    1. Using more past history – Linear multistep methods
    2. Doing more complicated calculations in each step – Runge–Kutta methods
    3. Doing both of these – General linear methods
  • 8. Some important dates:
    1883  Adams & Bashforth       Linear multistep methods
    1895  Runge                   Runge–Kutta method
    1901  Kutta
    1925  Nyström                 Special methods for second order
    1926  Moulton                 Adams–Moulton method
    1952  Curtiss & Hirschfelder  Stiff problems
  • 9. Linear multistep methods. We will write the differential equation in autonomous form
    y'(x) = f(y(x)),  y(x_0) = y_0,
and the aim, for the moment, will be to calculate approximations to y(x_i), where x_i = x_0 + hi, i = 1, 2, 3, ..., and h is the “stepsize”. Linear multistep methods base the approximation to y(x_n) on a linear combination of approximations to y(x_{n−i}) and approximations to y'(x_{n−i}), i = 1, 2, ..., k.
  • 10. Write y_i as the approximation to y(x_i) and f_i as the approximation to y'(x_i) = f(y(x_i)). A linear multistep method can be written as
    y_n = Σ_{i=1}^{k} α_i y_{n−i} + h Σ_{i=0}^{k} β_i f_{n−i}.
This is a 1-stage, 2k-value method.
    1 stage? One evaluation of f per step.
    2k values? This many quantities are passed between steps.
    β_0 = 0: explicit.  β_0 ≠ 0: implicit.
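One concrete instance is the explicit 2-step Adams–Bashforth method (k = 2, α_1 = 1, β_0 = 0, β_1 = 3/2, β_2 = −1/2). A minimal Python sketch; starting y_1 with one Euler step is a common choice but an assumption here, not part of the talk:

```python
import numpy as np

def adams_bashforth2(f, y0, h, steps):
    """Explicit 2-step Adams-Bashforth:
    y_n = y_{n-1} + h*(3/2*f(y_{n-1}) - 1/2*f(y_{n-2}))."""
    y_prev = np.atleast_1d(np.asarray(y0, dtype=float))
    y = y_prev + h * f(y_prev)          # one Euler step to start
    for _ in range(steps - 1):
        y, y_prev = y + h * (1.5 * f(y) - 0.5 * f(y_prev)), y
    return y

print(adams_bashforth2(lambda y: y, 1.0, 0.01, 100), np.exp(1.0))
```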
  • 11. Runge–Kutta methods. A Runge–Kutta method computes y_n in terms of a single input y_{n−1} and s stages Y_1, Y_2, ..., Y_s, where
    Y_i = y_{n−1} + h Σ_{j=1}^{s} a_{ij} f(Y_j),  i = 1, 2, ..., s,
    y_n = y_{n−1} + h Σ_{i=1}^{s} b_i f(Y_i).
This is an s-stage, 1-value method. It is natural to ask if there are useful methods which are multistage (as for Runge–Kutta methods) and multivalue (as for linear multistep methods).
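A minimal Python sketch of one step of an explicit Runge–Kutta method driven by its coefficients (a_{ij}, b_i); for an explicit method a is strictly lower triangular, so each stage uses only earlier ones. The tableau used for checking is an assumed example (the classical RK4), not one from the talk:

```python
import numpy as np

def rk_step(f, y, h, a, b):
    """One explicit Runge-Kutta step:
    Y_i = y + h*sum_j a[i][j]*f(Y_j),  y_new = y + h*sum_i b[i]*f(Y_i)."""
    s = len(b)
    F = []
    for i in range(s):
        Yi = y + h * sum(a[i][j] * F[j] for j in range(i))
        F.append(f(Yi))
    return y + h * sum(b[i] * F[i] for i in range(s))

# Classical RK4 tableau (assumed example).
a = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
y = np.array([1.0])
for _ in range(10):
    y = rk_step(lambda v: v, y, 0.1, a, b)
print(y, np.exp(1.0))   # RK4 on y' = y agrees to ~1e-6 here
```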
  • 12. In other words, we ask if there is any value in completing this diagram:
[Diagram: Euler at the base corner, Runge-Kutta along the multistage direction, Linear Multistep along the multivalue direction, and General Linear Methods completing the square.]
  • 13. General linear methods. We will consider methods characterised by an (s + r) × (s + r) partitioned matrix of the form
    [ A  U ]
    [ B  V ]
with A of size s × s, U of size s × r, B of size r × s and V of size r × r. The r values input to step n will be denoted by y_i^{[n−1]}, i = 1, 2, ..., r, with corresponding output values y_i^{[n]}, and the stage values by Y_i, i = 1, 2, ..., s. The stage derivatives will be denoted by F_i = f(Y_i).
  • 14. The formulas for computing the stages (and, simultaneously, the stage derivatives) are
    Y_i = h Σ_{j=1}^{s} a_{ij} F_j + Σ_{j=1}^{r} u_{ij} y_j^{[n−1]},  F_i = f(Y_i),  i = 1, 2, ..., s.
To compute the output values, use the formula
    y_i^{[n]} = h Σ_{j=1}^{s} b_{ij} F_j + Σ_{j=1}^{r} v_{ij} y_j^{[n−1]},  i = 1, 2, ..., r.
  • 15. For convenience, write the column vectors
    y^{[n−1]} = (y_1^{[n−1]}, y_2^{[n−1]}, ..., y_r^{[n−1]}),  y^{[n]} = (y_1^{[n]}, y_2^{[n]}, ..., y_r^{[n]}),
    Y = (Y_1, Y_2, ..., Y_s),  F = (F_1, F_2, ..., F_s),
so that we can write the calculations in a step more simply as
    [ Y       ]   [ A  U ] [ hF        ]
    [ y^{[n]} ] = [ B  V ] [ y^{[n−1]} ].
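A minimal Python sketch of one step in exactly this partitioned form, restricted to explicit methods (strictly lower triangular A, so the stages can be computed in order); the Euler check at the end treats Euler as the trivial GLM with s = r = 1, A = 0, U = B = V = 1:

```python
import numpy as np

def glm_step(f, A, U, B, V, h, y_in):
    """One step of an explicit GLM: y_in has shape (r, N).
    [Y; y_out] = [A U; B V] [h F; y_in], with F_i = f(Y_i)."""
    s = A.shape[0]
    F = np.zeros((s, y_in.shape[1]))
    for i in range(s):                        # stages, in order
        Yi = h * A[i, :i] @ F[:i] + U[i] @ y_in
        F[i] = f(Yi)
    return h * (B @ F) + V @ y_in             # output values

A, U, B, V = (np.zeros((1, 1)), np.ones((1, 1)),
              np.ones((1, 1)), np.ones((1, 1)))
y = np.array([[1.0]])
for _ in range(100):
    y = glm_step(lambda v: v, A, U, B, V, 0.01, y)
print(y)    # ~2.7048, matching Euler on y' = y
```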
  • 16. Examples of general linear methods. We will look at five examples:
    A Runge–Kutta method
    A “re-use” method
    An Almost Runge–Kutta method
    An Adams-Bashforth/Adams-Moulton method
    A modified linear multistep method
  • 17. A Runge–Kutta method. One of the famous families of fourth order methods of Kutta, written as a general linear method, is
    [ A  U ]   [ 0               0        0    0   | 1 ]
    [ B  V ] = [ θ               0        0    0   | 1 ]
               [ 1/2 − 1/(8θ)    1/(8θ)   0    0   | 1 ]
               [ 1/(2θ) − 1     −1/(2θ)   2    0   | 1 ]
               [ 1/6             0        2/3  1/6 | 1 ]
In a step from x_{n−1} to x_n = x_{n−1} + h, the stages give approximations at x_{n−1}, x_{n−1} + θh, x_{n−1} + h/2 and x_{n−1} + h. We will look at the special case θ = −1/2.
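As a quick numerical sanity check of the tableau above (as transcribed), the Runge–Kutta order conditions through order three can be evaluated for θ = −1/2; the full fourth-order family satisfies four further conditions not checked here. A minimal Python sketch:

```python
import numpy as np

theta = -0.5
A = np.array([[0, 0, 0, 0],
              [theta, 0, 0, 0],
              [0.5 - 1/(8*theta), 1/(8*theta), 0, 0],
              [1/(2*theta) - 1, -1/(2*theta), 2, 0]])
b = np.array([1/6, 0, 2/3, 1/6])
c = A.sum(axis=1)             # abscissae: row sums of A
print(c)                      # expect (0, -1/2, 1/2, 1)
print(b.sum(),                # 1    (order 1)
      b @ c,                  # 1/2  (order 2)
      b @ c**2,               # 1/3  (order 3)
      b @ (A @ c))            # 1/6  (order 3)
```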
  • 18. In the special case θ = −1/2,
    [ A  U ]   [  0     0    0    0   | 1 ]
    [ B  V ] = [ −1/2   0    0    0   | 1 ]
               [  3/4  −1/4  0    0   | 1 ]
               [ −2     1    2    0   | 1 ]
               [  1/6   0    2/3  1/6 | 1 ]
Because the derivative at x_{n−1} + θh = x_{n−1} − h/2 = x_{n−2} + h/2 was evaluated in the previous step, we can try re-using this value. This will save one function evaluation.
  • 19. A ‘re-use’ method. This gives the re-use method
    [ A  U ]   [  0    0    0   | 1   0   ]
    [ B  V ] = [  3/4  0    0   | 1  −1/4 ]
               [ −2    2    0   | 1   1   ]
               [  1/6  2/3  1/6 | 1   0   ]
               [  0    1    0   | 0   0   ]
Why should this method not be preferred to a standard Runge–Kutta method? There are at least two reasons:
    Stepsize change is complicated and difficult.
    The stability region is smaller.
  • 20. To overcome these difficulties, we can do several things:
    Restore the missing stage,
    Move the first derivative calculation to the end of the previous step,
    Use a linear combination of the derivatives computed in the previous step (instead of just one of these),
    Re-organize the data passed between steps.
We then get methods like the following:
  • 21. An ARK method
    [  0     0    0    0 | 1   1     1/2  ]
    [  1/16  0    0    0 | 1   7/16  1/16 ]
    [ −1/4   2    0    0 | 1  −3/4  −1/4  ]
    [  0     2/3  1/6  0 | 1   1/6   0    ]
    [  0     2/3  1/6  0 | 1   1/6   0    ]
    [  0     0    0    1 | 0   0     0    ]
    [ −1/3   0   −2/3  2 | 0  −1     0    ]
where y_1^{[n]} ≈ y(x_n), y_2^{[n]} ≈ h y'(x_n), y_3^{[n]} ≈ h² y''(x_n), with Y_1 ≈ Y_3 ≈ Y_4 ≈ y(x_n) and Y_2 ≈ y(x_{n−1} + h/2).
  • 22. The good things about this “Almost Runge–Kutta method” are:
    It has the same stability region as for a genuine Runge–Kutta method.
    Unlike standard Runge–Kutta methods, the stage order is 2. This means that the stage values are computed to the same accuracy as an order 2 Runge–Kutta method.
    Although it is a multi-value method, both starting the method and changing stepsize are essentially cost-free operations.
  • 23. An Adams-Bashforth/Adams-Moulton method. It is usual practice to combine Adams–Bashforth and Adams–Moulton methods as a predictor–corrector pair. For example, the ‘PECE’ method of order 3 computes a predictor y*_n and a corrector y_n by the formulae
    y*_n = y_{n−1} + h ( 23/12 f(y_{n−1}) − 4/3 f(y_{n−2}) + 5/12 f(y_{n−3}) ),
    y_n = y_{n−1} + h ( 5/12 f(y*_n) + 2/3 f(y_{n−1}) − 1/12 f(y_{n−2}) ).
It might be asked: is it possible to obtain improved order by using values of y_{n−2}, y_{n−3} in the formulae? The answer is that not much can be gained, because we are limited by the famous ‘Dahlquist barrier’.
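A minimal Python sketch of this order-3 PECE pair; taking the three starting values from the exact solution of the test problem is an assumption made purely for simplicity (in practice a starter method would supply them):

```python
import numpy as np

def pece3(f, y_hist, h, steps):
    """Adams-Bashforth order-3 predictor + Adams-Moulton corrector (PECE).
    y_hist holds the three most recent values [y_{n-3}, y_{n-2}, y_{n-1}]."""
    y3, y2, y1 = [np.atleast_1d(np.asarray(v, float)) for v in y_hist]
    for _ in range(steps):
        y_star = y1 + h * (23/12 * f(y1) - 4/3 * f(y2) + 5/12 * f(y3))   # P, E
        y_new = y1 + h * (5/12 * f(y_star) + 2/3 * f(y1) - 1/12 * f(y2)) # C, E
        y3, y2, y1 = y2, y1, y_new
    return y1

# Test on y' = y with exact starting values 1, e^h, e^{2h}.
h = 0.01
y = pece3(lambda v: v, [1.0, np.exp(h), np.exp(2*h)], h, 98)
print(y, np.exp(1.0))   # close agreement at x = 1
```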
  • 24. A modified linear multistep method. But what if we allow off-step points? We can get order 5 if we allow for two predictors, the first giving an approximation to y(x_n − h/2). This new method, with predictors at the off-step point and also at the end of the step, is
    y*_{n−1/2} = y_{n−2} + h ( 9/8 f(y_{n−1}) + 3/8 f(y_{n−2}) ),
    y*_n = 28/5 y_{n−1} − 23/5 y_{n−2} + h ( 32/15 f(y*_{n−1/2}) − 4 f(y_{n−1}) − 26/15 f(y_{n−2}) ),
    y_n = 32/31 y_{n−1} − 1/31 y_{n−2} + h ( 64/93 f(y*_{n−1/2}) + 5/31 f(y*_n) + 4/31 f(y_{n−1}) − 1/93 f(y_{n−2}) ).
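A minimal Python sketch of one step of this method as transcribed above; the driving loop and the use of exact starting values on a test problem are assumptions for illustration:

```python
import numpy as np

def modified_lmm_step(f, y1, y2, h):
    """One step of the order-5 modified multistep method:
    y1 = y_{n-1}, y2 = y_{n-2}; returns y_n."""
    y_half = y2 + h * (9/8 * f(y1) + 3/8 * f(y2))        # predictor at x_n - h/2
    y_star = (28/5 * y1 - 23/5 * y2
              + h * (32/15 * f(y_half) - 4 * f(y1) - 26/15 * f(y2)))
    return (32/31 * y1 - 1/31 * y2
            + h * (64/93 * f(y_half) + 5/31 * f(y_star)
                   + 4/31 * f(y1) - 1/93 * f(y2)))

# Test on y' = y with exact starting values.
h, f = 0.1, (lambda v: v)
y2, y1 = 1.0, np.exp(h)
for _ in range(9):                  # advance from x = h to x = 1
    y2, y1 = y1, modified_lmm_step(f, y1, y2, h)
print(y1, np.exp(1.0))
```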
  • 25. Order of general linear methods. Classical methods are all built on a plan where we know in advance what we are trying to approximate. For an abstract general linear method, the interpretation of the input and output quantities is quite general. We want to understand order in a similarly general way. The key ideas are:
    Use a general starting method to represent the input to a step.
    Require the output to be similarly related to the starting method applied one time-step later.
  • 26. The input to a step is an approximation to some vector of quantities related to the exact solution at x_{n−1}. When the step has been completed, the vectors comprising the output are approximations to the same quantities, but now related to x_n. If the input is exactly what it is supposed to approximate, then the “local truncation error” is defined as the error in the output after a single step. If this can be estimated in terms of h^{p+1}, then the method has order p. We will refer to the calculation which produces y^{[n−1]} from y(x_{n−1}) as a “starting method”.
  • 27. Let S denote the “starting method”, that is, a mapping from R^N to R^{rN}, and let F : R^{rN} → R^N denote a corresponding finishing method, such that F ∘ S = id. The order of accuracy of a multivalue method is defined in terms of the diagram
[Diagram: the exact flow E over one step followed by S, compared with S followed by the method M; the two paths agree to within O(h^{p+1}) (h = stepsize).]
  • 28. By duplicating this diagram over many steps, global error estimates can be found.
[Diagram: the one-step diagram repeated across many steps, with the exact flow E along the top, copies of S connecting it to the method M applied step by step along the bottom; the accumulated discrepancy is O(h^p), and applying the finishing method F gives a global error that is also O(h^p).]
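The O(h^p) global-error statement can be probed empirically: halving h should reduce the global error by roughly 2^p. A minimal Python sketch using the Euler method (p = 1) on an assumed test problem:

```python
import numpy as np

def euler_error(h):
    """Global error of Euler's method for y' = y, y(0) = 1, at x = 1."""
    y, n = 1.0, round(1.0 / h)
    for _ in range(n):
        y = y + h * y
    return abs(y - np.exp(1.0))

for h in (0.1, 0.05, 0.025):
    print(h, euler_error(h))   # errors roughly halve with h: order p = 1
print(np.log2(euler_error(0.1) / euler_error(0.05)))  # estimate of p, ~1
```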
  • 29. Methods with the IRK stability property. An important attribute of a numerical method is its “stability matrix” M(z), defined by
    M(z) = V + zB(I − zA)^{−1}U.
This represents the behaviour of the method in the case of linear problems. That is, for the problem y'(x) = qy(x), we have y^{[n]} = M(z) y^{[n−1]}, where z = hq. In the special case of a Runge–Kutta method, M(z) is a scalar R(z).
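Computing M(z) from the partitioned coefficients is essentially a one-liner; a minimal Python sketch, checked here on Euler viewed as a GLM (A = 0, U = B = V = 1, so M(z) = 1 + z):

```python
import numpy as np

def stability_matrix(A, U, B, V, z):
    """M(z) = V + z*B*(I - z*A)^(-1)*U for a general linear method."""
    s = A.shape[0]
    return V + z * B @ np.linalg.solve(np.eye(s) - z * A, U)

one = np.ones((1, 1))
print(stability_matrix(np.zeros((1, 1)), one, one, one, -0.5))  # 1 + z = 0.5
```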
  • 30. To solve “stiff” problems, we want to use A-stable methods or, even better, L-stable methods. In the case of Runge–Kutta methods the meanings of these are:
    For an A-stable method, |R(z)| ≤ 1 whenever Re z ≤ 0.
    An L-stable method is A-stable and, in addition, R(∞) = 0.
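For a rational R(z) analytic in the left half-plane, A-stability can be probed numerically by sampling the imaginary axis (which suffices by the maximum principle). The implicit Euler function R(z) = 1/(1 − z) is used as an assumed example; it is in fact L-stable since R(∞) = 0. A minimal sketch:

```python
import numpy as np

R = lambda z: 1.0 / (1.0 - z)          # implicit Euler stability function

# Sample |R(iy)| along the imaginary axis, the boundary of Re z <= 0.
ys = np.linspace(-100, 100, 20001)
print(np.max(np.abs(R(1j * ys))))      # <= 1, consistent with A-stability
print(abs(R(-1e12)))                   # ~0, consistent with L-stability
```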
  • 31. A general linear method is said to have “Runge–Kutta stability” if its stability matrix M(z) has a characteristic polynomial of the form
    det(wI − M(z)) = w^{r−1} (w − R(z)).
This means that the method has exactly the same stability region as a Runge–Kutta method whose stability function is R(z).
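This property can be checked numerically for the ARK method of slide 21, assuming its tableau has been transcribed correctly: at any sample z, r − 1 = 2 of the three eigenvalues of M(z) should vanish, leaving the single nonzero eigenvalue R(z). A minimal sketch:

```python
import numpy as np

A = np.array([[0, 0, 0, 0],
              [1/16, 0, 0, 0],
              [-1/4, 2, 0, 0],
              [0, 2/3, 1/6, 0]])
U = np.array([[1, 1, 1/2],
              [1, 7/16, 1/16],
              [1, -3/4, -1/4],
              [1, 1/6, 0]])
B = np.array([[0, 2/3, 1/6, 0],
              [0, 0, 0, 1],
              [-1/3, 0, -2/3, 2]])
V = np.array([[1, 1/6, 0],
              [0, 0, 0],
              [0, -1, 0]])

z = -0.7 + 0.3j                       # an arbitrary sample point
M = V + z * B @ np.linalg.solve(np.eye(4) - z * A, U)
print(np.sort(np.abs(np.linalg.eigvals(M))))  # two eigenvalues ~0, one = |R(z)|
```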
  • 32. Do methods with RK stability exist? Yes; it is even possible to construct them with rational operations by imposing a condition known as “Inherent RK stability”. Methods exist for both stiff and non-stiff problems, for arbitrary orders, and the only question is how to select the best methods from the large families that are available. We will give just two examples.
  • 33. The following third order method is explicit and suitable for the solution of non-stiff problems:
    [  0               0        0     0   | 1  1/4              1/32           1/384            ]
    [ −176/1885        0        0     0   | 1  2237/3770        2237/15080     2149/90480       ]
    [ −335624/311025   29/55    0     0   | 1  1619591/1244100  260027/904800  1517801/39811200 ]
    [ −67843/6435      395/33  −5     0   | 1  29428/6435       527/585        41819/102960     ]
    [ −67843/6435      395/33  −5     0   | 1  29428/6435       527/585        41819/102960     ]
    [  0               0        0     1   | 0  0                0              0                ]
    [  82/33          −274/11   170/9 −4/3 | 0  482/99           0             −161/264          ]
    [ −8              −12       40/3  −2   | 0  26/3             0              0                ]
  • 34. The following fourth order method is implicit, L-stable, and suitable for the solution of stiff problems:
    [ 1/4                 0                0        0     0   | 1  3/4                  1/2              1/4               0             ]
    [ −513/54272          1/4              0        0     0   | 1  27649/54272          5601/27136       1539/54272       −459/6784     ]
    [ 3706119/69088256    −488/3819        1/4      0     0   | 1  15366379/207264768   756057/34544128  1620299/69088256 −4854/454528  ]
    [ 32161061/197549232  −111814/232959   134/183  1/4   0   | 1  −32609017/197549232  929753/32924872  4008881/32924872 174981/3465776 ]
    [ −135425/2948496     −641/10431       73/183   1/2   1/4 | 1  −367313/8845488      −22727/1474248   40979/982832     323/25864     ]
    [ −135425/2948496     −641/10431       73/183   1/2   1/4 | 1  −367313/8845488      −22727/1474248   40979/982832     323/25864     ]
    [ 0                   0                0        0     1   | 0  0                    0                0                0             ]
    [ 2255/2318           −47125/20862     447/122  −11/4 4/3 | 0  −28745/20862         −1937/13908      351/18544        65/976        ]
    [ 12620/10431         −96388/31293     3364/549 −10/3 4/3 | 0  −70634/31293         −2050/10431      −187/2318        113/366       ]
    [ 414/1159            −29954/31293     130/61   −1    1/3 | 0  −27052/31293         −113/10431       −491/4636        161/732       ]
  • 35. Implementation questions for IRKS methods. Many implementation questions are similar to those for traditional methods, but there are some new challenges. We want variable order and stepsize, and it is even a realistic aim to change between stiff and non-stiff methods automatically. Because of the variable order and stepsize aims, we wish to be able to do the following:
    Estimate the local truncation error of the current step.
    Estimate the local truncation error of an alternative method of higher order.
    Change the stepsize with little cost and with little impact on stability.
  • 36. We believe we have solutions to all these problems and that we can construct methods of quite high orders which will work well and competitively. I would like to name, with gratitude and appreciation, my principal collaborators in this project:
    Zdzisław Jackiewicz, Arizona State University, Phoenix AZ
    Helmut Podhaisky, Martin Luther Universität, Halle
    Will Wright, La Trobe University, Melbourne
I also express my thanks to other colleagues who are closely associated with this project, especially: Robert Chan, Allison Heard, Shirley Huang, Nicolette Rattenbury, Gustaf Söderlind, Angela Tsai.