NUMERICAL METHODS
Shail Chettri, 22UMPY18
INITIAL-VALUE PROBLEMS for ORDINARY
DIFFERENTIAL EQUATIONS
6th July 2023
INTRODUCTION
The motion of a swinging pendulum under certain simplifying
assumptions is described by the second-order differential equation

d²θ/dt² + (g/L) sin θ = 0,

where L is the length of the pendulum, g ≈ 32.17 ft/s² is the
gravitational acceleration at the Earth's surface, and θ is the
angle the pendulum makes with the vertical. If, in addition, we
specify the position of the pendulum when the motion begins,
θ(t0) = θ0, and its velocity at that point, θ̇(t0) = θ̇0, we have
what is called an initial-value problem.
For small values of θ, the approximation sin θ ≈ θ can be used to
simplify this problem to the linear initial-value problem

d²θ/dt² + (g/L) θ = 0, θ(t0) = θ0, θ̇(t0) = θ̇0.
This problem can be solved by a standard differential equation
technique. For larger values of θ, the assumption that sin θ ≈ θ is
not reasonable, so approximation methods must be used.
Approximation methods to be discussed:
Euler's method
Runge-Kutta method
Runge-Kutta-Fehlberg method
Extrapolation method
Adams Fourth-Order Predictor-Corrector method
Euler’s Method
Euler’s method is a numerical method for solving ordinary
differential equations (ODEs).
It is a first-order method that approximates the solution by
dividing the interval into small steps.
The method is based on Taylor series expansion and is
relatively simple to implement.
Consider the initial value problem:

dy/dt = f(t, y), a ≤ t ≤ b, y(a) = y0.

Let's approximate the solution at discrete points t0, t1, . . . , tn
with step size h.
The idea is to approximate the derivative dy/dt with a difference
quotient.
The difference equation for Euler's method is given by:

yi+1 = yi + h · f(ti, yi)
Algorithm
The algorithm for Euler’s method is as follows:
1 Set the initial condition y0 and the step size h.
2 Initialize t = a and y = y0.
3 Repeat the following steps until t reaches b:
Compute the slope f (t, y).
Update y using Euler's method formula: y = y + h · f(t, y).
Increment t by h: t = t + h.
4 Output the approximated solution at the final time point: yn.
Example 1: Approximating Solution using Euler’s Method
In Example 1, we will use the algorithm for Euler's method to
approximate the solution to the initial value problem:

dy/dt = y − t² + 1, 0 ≤ t ≤ 2, y(0) = 0.5,

at t = 2. Here we will simply illustrate the steps in the technique
when we have h = 0.5.
For the problem, f(t, y) = y − t² + 1, so with h = 0.5:

w0 = y(0) = 0.5,
w1 = w0 + h · f(t0, w0) = 0.5 + 0.5(0.5 − (0.0)² + 1) = 1.25,
w2 = w1 + h · f(t1, w1) = 1.25 + 0.5(1.25 − (0.5)² + 1) = 2.25,
w3 = w2 + h · f(t2, w2) = 2.25 + 0.5(2.25 − (1.0)² + 1) = 3.375,
y(2) ≈ w4 = w3 + h · f(t3, w3) = 3.375 + 0.5(3.375 − (1.5)² + 1) = 4.4375.
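The hand computation above can be reproduced with a short Python sketch (the function name `euler` is mine):

```python
def euler(f, a, b, y0, n):
    """Approximate y' = f(t, y), y(a) = y0 on [a, b] with n Euler steps.

    Returns the list of approximations w0, w1, ..., wn."""
    h = (b - a) / n
    t, w = a, y0
    ws = [w]
    for _ in range(n):
        w = w + h * f(t, w)   # w_{i+1} = w_i + h * f(t_i, w_i)
        t = t + h
        ws.append(w)
    return ws

# IVP from the example: y' = y - t^2 + 1, y(0) = 0.5, h = 0.5
approx = euler(lambda t, y: y - t**2 + 1, 0.0, 2.0, 0.5, 4)
print(approx)  # [0.5, 1.25, 2.25, 3.375, 4.4375]
```

All intermediate values here are exactly representable in binary floating point, so the output matches the hand computation exactly.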
Conclusion
Euler’s method is a simple and intuitive numerical method for
solving ODEs.
It provides an approximation of the solution by dividing the
interval into small steps.
However, it is a first-order method and can introduce
significant errors for certain types of problems.
Other higher-order methods, such as the Runge-Kutta
methods, can provide more accurate results.
Fourth-Order Runge-Kutta Method
The fourth-order Runge-Kutta method is a numerical algorithm
used to approximate the solutions of ordinary differential
equations. It is one of the most widely used numerical methods
due to its high accuracy and simplicity.
Algorithm
The fourth-order Runge-Kutta method is an iterative algorithm
that can be summarized as follows:
1 Given an initial value y0 at x0, choose a step size h.
2 Calculate the intermediate values k1, k2, k3, and k4 using the
following formulas:

k1 = h f(xn, yn)
k2 = h f(xn + h/2, yn + k1/2)
k3 = h f(xn + h/2, yn + k2/2)
k4 = h f(xn + h, yn + k3)

3 Calculate the next value yn+1 using the weighted sum:

yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)

4 Repeat steps 2 and 3 until the desired number of iterations is
reached or the desired accuracy is achieved.
Example
Let's consider the following ordinary differential equation:

dy/dx = f(x, y) = x² + y

with the initial condition y0 = 1 at x0 = 0. We want to
approximate the solution at x = 1 using the fourth-order
Runge-Kutta method.
By applying the algorithm with a suitable step size, we can
iteratively compute the approximate values of y at different x
points.
Let's choose a step size of h = 0.1 and perform the computations.
Example (Contd.)
Using the fourth-order Runge-Kutta method with a step size of
h = 0.1, we can compute the approximate values of y at different
x points. To four decimal places they agree with the exact
solution y = 3eˣ − x² − 2x − 2:

x      y
0.0    1.0000
0.1    1.1055
0.2    1.2242
0.3    1.3596
0.4    1.5155
0.5    1.6962
0.6    1.9064
0.7    2.1513
0.8    2.4366
0.9    2.7688
1.0    3.1548
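A minimal Python sketch (illustrative names) reproduces the computation and checks it against the exact solution:

```python
import math

def rk4(f, x0, y0, h, steps):
    """Classical fourth-order Runge-Kutta for y' = f(x, y)."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

f = lambda x, y: x**2 + y            # the example ODE
y1 = rk4(f, 0.0, 1.0, 0.1, 10)       # approximate y(1)
exact = 3 * math.exp(1) - 5          # y = 3e^x - x^2 - 2x - 2 at x = 1
print(y1, exact)
```

The exact solution follows from standard linear-ODE techniques; with h = 0.1 the RK4 value agrees with it to well beyond four decimal places.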
Runge-Kutta-Fehlberg (RKF)
The Runge-Kutta-Fehlberg (RKF) method is a numerical
method for solving ordinary differential equations (ODEs).
It is an adaptive method that allows for control of the error
tolerance.
The RKF method is based on the classic Runge-Kutta method
but incorporates additional computations to estimate the error
and adjust the step size accordingly.
Algorithm
1 Start with an initial condition y0 and a step size h.
2 For each step, calculate the following values:

k1 = h f(tn, yn)
k2 = h f(tn + h/4, yn + k1/4)
k3 = h f(tn + 3h/8, yn + 3k1/32 + 9k2/32)
k4 = h f(tn + 12h/13, yn + 1932k1/2197 − 7200k2/2197 + 7296k3/2197)
k5 = h f(tn + h, yn + 439k1/216 − 8k2 + 3680k3/513 − 845k4/4104)
k6 = h f(tn + h/2, yn − 8k1/27 + 2k2 − 3544k3/2565 + 1859k4/4104 − 11k5/40)

3 Calculate the fourth-order approximation for the next step:

yn+1 = yn + 25k1/216 + 1408k3/2565 + 2197k4/4104 − k5/5
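One RKF step can be sketched in Python; the fifth-order combination used later for error estimation is included, assuming the standard Fehlberg coefficients:

```python
def rkf45_step(f, t, y, h):
    """One Runge-Kutta-Fehlberg step for y' = f(t, y).

    Returns (y4, y5): the fourth- and fifth-order
    approximations to the solution at t + h."""
    k1 = h * f(t, y)
    k2 = h * f(t + h/4, y + k1/4)
    k3 = h * f(t + 3*h/8, y + 3*k1/32 + 9*k2/32)
    k4 = h * f(t + 12*h/13, y + 1932*k1/2197 - 7200*k2/2197 + 7296*k3/2197)
    k5 = h * f(t + h, y + 439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104)
    k6 = h * f(t + h/2, y - 8*k1/27 + 2*k2 - 3544*k3/2565
               + 1859*k4/4104 - 11*k5/40)
    y4 = y + 25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5
    y5 = y + 16*k1/135 + 6656*k3/12825 + 28561*k4/56430 - 9*k5/50 + 2*k6/55
    return y4, y5
```

As a sanity check, both weight sets sum to 1, so for y' = const the step is exact.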
Error Estimation
The error is estimated using the difference between the fourth-
and fifth-order solutions. The fifth-order approximation is

ỹn+1 = yn + 16k1/135 + 6656k3/12825 + 28561k4/56430 − 9k5/50 + 2k6/55,

and the per-unit-step error estimate is

error = |ỹn+1 − yn+1| / h.

The step size is adjusted based on the error using a scaling
factor and a desired tolerance level.
If the error is within the tolerance, the step is accepted and
the solution is updated. Otherwise, the step is rejected, and a
new step size is computed.
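One common (not the only) choice of scaling factor is q = 0.84(tol/error)^(1/4), clamped to avoid drastic step changes; a sketch under those assumptions:

```python
def adjust_step(h, err, tol, h_min=1e-6, h_max=0.5):
    """Step-size control sketch for RKF45.

    err is the per-unit-step error estimate |y5 - y4| / h,
    tol is the desired tolerance. The 0.84 safety factor and
    the clamping bounds are one common convention."""
    q = 0.84 * (tol / err) ** 0.25 if err > 0 else 4.0
    q = min(4.0, max(0.1, q))          # never change h too abruptly
    return min(h_max, max(h_min, q * h))
```

If err > tol the step is rejected and retried with the smaller h returned here; if err ≤ tol the step is accepted and the (possibly larger) h is used for the next step.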
Conclusion
The Runge-Kutta-Fehlberg method is a powerful numerical
method for solving ordinary differential equations.
It provides high accuracy and adaptivity through error
estimation and step size adjustment.
The RKF method is widely used in various fields, including
physics, engineering, and computer science.
Adams Fourth-Order Predictor-Corrector method
The Adams Fourth-Order Predictor-Corrector method is a
numerical method used to solve ordinary differential equations
(ODEs).
It combines the Adams-Bashforth method (predictor) with the
Adams-Moulton method (corrector) to achieve higher
accuracy and stability.
The method is efficient for smooth, non-stiff problems, since
each step requires only two new evaluations of f.
Adams Fourth-Order Predictor-Corrector Algorithm
1 Use a one-step method (e.g., fourth-order Runge-Kutta) to
compute the starting values y1, y2, y3 from the initial
condition y0.
2 Use the Adams-Bashforth formula to predict y(p)n+1:

y(p)n+1 = yn + (h/24)[55f(tn, yn) − 59f(tn−1, yn−1) + 37f(tn−2, yn−2) − 9f(tn−3, yn−3)]

3 Use the Adams-Moulton formula to correct the predicted value
and obtain a more accurate estimate yn+1:

yn+1 = yn + (h/24)[9f(tn+1, y(p)n+1) + 19f(tn, yn) − 5f(tn−1, yn−1) + f(tn−2, yn−2)]

4 Repeat steps 2 and 3 for each subsequent step until the end of
the interval is reached.
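The steps above can be sketched as follows (`adams_pc4` is an illustrative name; RK4 supplies the starting values, and a single corrector pass is applied per step):

```python
import math

def adams_pc4(f, a, b, y0, n):
    """Adams fourth-order predictor-corrector sketch for
    y' = f(t, y), y(a) = y0, using n steps on [a, b]."""
    h = (b - a) / n
    t = [a + i * h for i in range(n + 1)]
    y = [y0]
    for i in range(3):                       # RK4 starter: y1, y2, y3
        k1 = h * f(t[i], y[i])
        k2 = h * f(t[i] + h/2, y[i] + k1/2)
        k3 = h * f(t[i] + h/2, y[i] + k2/2)
        k4 = h * f(t[i] + h, y[i] + k3)
        y.append(y[i] + (k1 + 2*k2 + 2*k3 + k4) / 6)
    for i in range(3, n):
        fs = [f(t[i - j], y[i - j]) for j in range(4)]   # f_i .. f_{i-3}
        # Adams-Bashforth predictor
        yp = y[i] + h/24 * (55*fs[0] - 59*fs[1] + 37*fs[2] - 9*fs[3])
        # Adams-Moulton corrector (one pass)
        y.append(y[i] + h/24 * (9*f(t[i+1], yp) + 19*fs[0] - 5*fs[1] + fs[2]))
    return y[-1]

# Same IVP as the Euler example: y' = y - t^2 + 1, y(0) = 0.5
approx = adams_pc4(lambda t, y: y - t**2 + 1, 0.0, 2.0, 0.5, 10)
exact = 9 - 0.5 * math.exp(2)    # y(t) = (t+1)^2 - 0.5 e^t at t = 2
print(approx, exact)
```

With h = 0.2 the predictor-corrector value agrees with the exact solution to a few decimal places, far better than Euler's 4.4375 from the earlier example.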
Conclusion
The Adams Fourth-Order Predictor-Corrector method is a
powerful numerical method for solving ODEs.
It combines the predictive power of the Adams-Bashforth
method with the corrective accuracy of the Adams-Moulton
method.
The fourth-order version of the method provides higher
accuracy and stability compared to lower-order methods.
For stiff ODEs, where the solution varies rapidly over a small
time scale, implicit methods with stronger stability properties
are preferred.
Introduction
Numerical methods are used to solve mathematical problems
computationally.
Extrapolation algorithms are a class of numerical methods
used for improving the accuracy and efficiency of
approximations.
The extrapolation algorithm can be applied to a wide range of
problems, including integration, root finding, and differential
equations.
Basic Idea
The basic idea behind the extrapolation algorithm is to
compute a sequence of approximations that converge rapidly
to the desired solution.
It involves extrapolating a sequence of approximations to a
more accurate solution by using a combination of the previous
approximations.
The extrapolation process can be represented using a table or
a recursive formula.
Algorithm
1 Initialize the extrapolation table with the initial
approximations.
2 Compute the extrapolated values using the previous
approximations and a suitable extrapolation formula.
3 Check for convergence by comparing the new extrapolated
value with the previous approximation.
4 If the convergence criteria are met, stop and output the final
approximation. Otherwise, update the table with the new
values and repeat the process.
Example
Let's consider the problem of approximating the value of
π = ∫0^1 4/(1 + x²) dx using Richardson extrapolation on the
trapezoidal rule.
Compute two trapezoidal approximations T(h) and T(h/2).
Since T(h) = π + c1h² + c2h⁴ + · · · , the combination
T1(h) = (4T(h/2) − T(h))/3 eliminates the h² error term.
Repeat the combination on successive columns (with factors
4², 4³, . . .) to eliminate higher-order error terms.
Stop when two successive extrapolated values agree to the
desired level of accuracy.
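The first extrapolation step can be checked with a short Python sketch (`trapezoid` is an illustrative name):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return h * s

f = lambda x: 4 / (1 + x**2)       # integral over [0, 1] equals pi
t4 = trapezoid(f, 0.0, 1.0, 4)     # T(h) with h = 1/4
t8 = trapezoid(f, 0.0, 1.0, 8)     # T(h/2)
r = (4 * t8 - t4) / 3              # Richardson: cancels the h^2 term
print(t4, t8, r)
```

The raw trapezoidal values are accurate to only two or three digits, while the single extrapolated value r is already accurate to several more, illustrating the accelerated convergence.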
Conclusion
The extrapolation algorithm is a powerful technique for
improving the accuracy and efficiency of numerical
approximations.
It can be applied to a wide range of problems in numerical
analysis.
The convergence of the extrapolation algorithm depends on
the choice of extrapolation formula and the problem at hand.
