2. 2
Lecture 1
Introduction to Numerical Methods
What are NUMERICAL METHODS?
Why do we need them?
Reading Assignment: Pages 3-10 of textbook
3. 3
Numerical Methods
Numerical Methods:
Algorithms that are used to obtain numerical
solutions of a mathematical problem.
Why do we need them?
1. No analytical solution exists,
2. An analytical solution is difficult to obtain
or not practical.
4. 4
What do we need?
Basic Needs in the Numerical Methods:
Practical:
Can be computed in a reasonable amount of time.
Accurate:
Good approximation to the true value,
Information about the approximation error
(bounds, error order, …).
5. 5
Outline of the Course
Taylor Theorem
Number Representation
Solution of Nonlinear Equations
Interpolation
Numerical Differentiation
Numerical Integration
Solution of Linear Equations
Optimization
Least Squares Curve Fitting
Solution of Ordinary Differential Equations
Solution of Partial Differential Equations
6. 6
Solution of Nonlinear Equations
Some simple equations can be solved analytically:
  x² + 4x + 3 = 0
  Analytic solution: roots x = (-4 ± √(4² - 4·3)) / 2,  i.e. x = -1 and x = -3
Many other equations have no analytic solution:
  x⁹ - 2x² + 5 = 0
  x = e^(-x)
  (no analytic solution)
8. 8
Methods for Solving Nonlinear Equations
o Bisection Method
o Newton-Raphson Method
o Secant Method
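As a taste of how these root finders work, here is a minimal bisection sketch; the equation x³ - x - 2 = 0 and the bracket [1, 2] are illustrative choices, not taken from the slides:

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: repeatedly halve [a, b] while keeping a sign change."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:
            b = m           # root lies in the left half
        else:
            a, fa = m, fm   # root lies in the right half
    return (a + b) / 2

# Illustrative equation with no simple closed-form root: x**3 - x - 2 = 0
root = bisect(lambda x: x**3 - x - 2, 1.0, 2.0)
```

Bisection is slow but guaranteed to converge whenever the starting interval brackets a sign change; Newton-Raphson and secant trade that guarantee for speed.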
9. 9
Solution of Systems of Linear Equations
We can solve a small system analytically. For example:
  3x₁ + 2x₂ = 5
  x₁ + 2x₂ = 3
Subtracting the second equation from the first gives 2x₁ = 2, so x₁ = 1 and x₂ = 1.
What to do if we have 1000 equations in 1000 unknowns?
10. 10
Cramer’s Rule is Not Practical
Cramer's Rule can be used to solve the system:
  x₁ + 2x₂ = 5
  x₁ + x₂ = 3
as:
  x₁ = det[5 2; 3 1] / det[1 2; 1 1] = 1
  x₂ = det[1 5; 1 3] / det[1 2; 1 1] = 2
But Cramer's Rule is not practical for large problems.
To solve N equations with N unknowns, we need (N+1)(N-1)N! multiplications.
To solve a 30 by 30 system, 2.3×10³⁵ multiplications are needed.
A supercomputer needs more than 10²⁰ years to compute this.
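The multiplication count quoted above is easy to verify; a quick sketch (the 10⁹ multiplications-per-second rate used for the time estimate is an assumed figure):

```python
import math

def cramer_multiplications(n):
    # (N + 1)(N - 1)N! multiplications, the count quoted on the slide
    return (n + 1) * (n - 1) * math.factorial(n)

ops = cramer_multiplications(30)           # ~2.3e35 for a 30-by-30 system
years = ops / 1e9 / (3600 * 24 * 365)      # assuming 1e9 multiplications/second
```

Even at a billion multiplications per second, the job takes on the order of 10¹⁸ years, which is why elimination methods (roughly N³ operations) are used instead.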
12. 12
Methods for Solving Systems of Linear Equations
o Naive Gaussian Elimination
o Gaussian Elimination with Scaled Partial Pivoting
o Algorithm for Tri-diagonal Equations
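A bare-bones sketch of the first method, naive Gaussian elimination; it assumes nonzero pivots (no pivoting), and the 2×2 test system is an illustrative one:

```python
def gauss_solve(A, b):
    """Solve A x = b by naive Gaussian elimination (no pivoting)."""
    n = len(A)
    # Work on a copy in augmented form [A | b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    # Forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Illustrative system: 3x + 2y = 5, x - y = 0  =>  x = y = 1
x = gauss_solve([[3.0, 2.0], [1.0, -1.0]], [5.0, 0.0])
```

Scaled partial pivoting adds row swaps to keep the `factor` values small, which is what makes the method reliable in floating-point arithmetic.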
13. 13
Curve Fitting
Given a set of data:
Select a curve that best fits the data. One
choice is to find the curve that minimizes
the sum of the squares of the errors.
x 0 1 2
y 0.5 10.3 21.3
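For the tabulated data, the least-squares straight line can be computed from the normal equations; a sketch (the helper name `fit_line` is mine):

```python
def fit_line(xs, ys):
    """Least-squares line y = a0 + a1*x from the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a0 = (sy - a1 * sx) / n                          # intercept
    return a0, a1

a0, a1 = fit_line([0, 1, 2], [0.5, 10.3, 21.3])      # data from the slide
```

For this data the fitted line is y ≈ 0.3 + 10.4x; it does not pass through the points exactly, which is the key difference from interpolation on the next slide.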
14. 14
Interpolation
Given a set of data:
Find a polynomial P(x) whose graph
passes through all tabulated points.
xi 0 1 2
yi 0.5 10.3 15.3
P(xᵢ) = yᵢ for every xᵢ in the table.
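The condition P(xᵢ) = yᵢ can be met with the Lagrange form of the interpolating polynomial; a sketch using the table above:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                # Basis factor: 1 at x = xi, 0 at every other node xj
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs, ys = [0, 1, 2], [0.5, 10.3, 15.3]   # table from the slide
```

Evaluating `lagrange(xs, ys, xi)` at each node returns the tabulated yᵢ, which is exactly the requirement stated above.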
15. 15
Methods for Curve Fitting
o Least Squares
o Linear Regression
o Nonlinear Least Squares Problems
o Interpolation
o Newton Polynomial Interpolation
o Lagrange Interpolation
16. Optimization
Topic1 16
These problems involve determining a value or
values of an independent variable that correspond
to a “best” or optimal value of a function.
Optimization involves identifying maxima and
minima.
17. Methods used for optimization
Single- and multi-variable unconstrained optimization:
  Direct methods
  Gradient methods
Constrained optimization:
  Linear programming
  Nonlinear constrained optimization
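One direct (derivative-free) single-variable method is golden-section search; a sketch, with an illustrative quadratic objective of my choosing:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for a minimum of a unimodal f on [a, b]."""
    gr = (math.sqrt(5) - 1) / 2            # inverse golden ratio, ~0.618
    c, d = b - gr * (b - a), a + gr * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                    # minimum lies in [a, d]
            c = b - gr * (b - a)
        else:
            a, c = c, d                    # minimum lies in [c, b]
            d = a + gr * (b - a)
    return (a + b) / 2

x_min = golden_section_min(lambda x: (x - 2)**2 + 1, 0.0, 5.0)
```

Like bisection for root finding, it only compares function values, so it needs no derivatives; gradient methods converge faster when derivatives are available.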
18. 18
Integration
Some functions can be integrated
analytically:
  ∫₁³ x dx = (x²/2) |₁³ = 9/2 - 1/2 = 4
But many functions have no analytical solutions:
  ∫₀ᵃ e^(-x²) dx = ?
20. 20
Methods for Numerical Integration
o Upper and Lower Sums
o Trapezoid Method
o Romberg Method
o Gauss Quadrature
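A sketch of the composite trapezoid method; it reproduces ∫₁³ x dx = 4 exactly (the rule is exact for linear integrands) and approximates ∫₀¹ e^(-x²) dx, which has no elementary antiderivative:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))       # endpoints get half weight
    for i in range(1, n):
        s += f(a + i * h)         # interior points get full weight
    return h * s

exact = trapezoid(lambda x: x, 1.0, 3.0, 10)                    # linear: result is 4
approx = trapezoid(lambda x: math.exp(-x * x), 0.0, 1.0, 1000)  # no closed form
```

Romberg integration and Gauss quadrature refine this basic idea to get more accuracy from fewer function evaluations.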
21. 21
Solution of Ordinary Differential Equations
A solution to the differential equation:
  x″(t) + 3x′(t) + 3x(t) = 0,   x(0) = 1,   x′(0) = 0
is a function x(t) that satisfies the equations.
* Analytical solutions are available for special cases only.
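Since analytical solutions are rare, ODEs are usually solved numerically; a forward-Euler sketch on the illustrative first-order problem x′(t) = -3x(t), x(0) = 1 (my choice, with exact solution e^(-3t)):

```python
import math

def euler(f, t0, x0, t_end, n):
    """Forward Euler for x'(t) = f(t, x), from t0 to t_end in n steps."""
    h = (t_end - t0) / n
    t, x = t0, x0
    for _ in range(n):
        x += h * f(t, x)    # step along the current slope
        t += h
    return x

x1 = euler(lambda t, x: -3 * x, 0.0, 1.0, 1.0, 10000)   # approximates e**(-3)
```

Halving the step size roughly halves the error (Euler is first order), which motivates the higher-order Runge-Kutta methods covered later in a typical course.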
23. 23
Solution of Partial Differential Equations
Partial Differential Equations are more
difficult to solve than ordinary differential
equations:
  ∂²u/∂t² - ∂²u/∂x² = 0
  u(0, t) = 0,   u(1, t) = 0
  u(x, 0) = sin(πx)
25. 25
Summary
Numerical Methods:
Algorithms that are used to obtain numerical
solutions of a mathematical problem.
We need them when no analytical solution
exists or it is difficult to obtain.
Topics Covered in the Course:
Solution of Nonlinear Equations
Solution of Linear Equations
Curve Fitting
  Least Squares
  Interpolation
Numerical Integration
Numerical Differentiation
Solution of Ordinary Differential Equations
Solution of Partial Differential Equations
26. 26
Lecture 2
Approximations and Round-Off Errors
Number Representation
Normalized Floating Point Representation
Significant Digits
Accuracy and Precision
Rounding and Chopping
Reading Assignment: Chapter 3
27. 27
Significant Digits
The significant digits of a number are those that
can be used with confidence. They correspond to
the number of certain digits plus one estimated
digit.
For example, a speedometer reading would consist of
three significant figures (48.5), while an odometer
would yield a seven-significant-figure reading of
87,324.45.
29. 29
Accuracy and Precision
Accuracy is related to the closeness to the true
value.
Precision is related to the closeness to other
estimated values.
31. 31
ERROR DEFINITIONS
Numerical errors arise from the use of
approximations to represent exact mathematical
operations and quantities. These include truncation
errors, which result when approximations are used
to represent exact mathematical procedures, and
round-off errors, which result when numbers having
limited significant figures are used to represent
exact numbers.
32. 32
Error Definitions – True Error
Can be computed if the true value is known:
  True Error:
    E_t = true value - approximation
  Absolute Percent Relative Error:
    ε_t = |(true value - approximation) / true value| × 100%
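A quick sketch of these definitions, using the classic approximation 22/7 for π (an illustrative choice, not from the slides):

```python
import math

true_value = math.pi
approximation = 22 / 7

E_t = true_value - approximation        # true error
eps_t = abs(E_t / true_value) * 100     # absolute percent relative error
```

Here E_t ≈ -0.00126 and ε_t ≈ 0.04%, so 22/7 agrees with π to about three significant figures.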
34. 34
Error Definitions – Estimated Error
However, in real-world applications, we will
obviously not know the true answer a priori. For
these situations, an alternative is to normalize
the error using the best available estimate of
the true value, that is, to the
approximation itself.
35. 35
Error Definitions – Estimated Error
When the true value is not known:
  Estimated Absolute Error:
    E_a = current estimate - previous estimate
  Estimated Absolute Percent Relative Error:
    ε_a = |(current estimate - previous estimate) / current estimate| × 100%
36. 36
Error Definitions – Estimated Error
One of the challenges of numerical methods is to determine
error estimates in the absence of knowledge regarding the true
value. For example, certain numerical methods use an iterative
approach to compute answers. In such an approach, a present
approximation is made on the basis of a previous
approximation. This process is performed repeatedly, or
iteratively, to successively compute (we hope) better and
better approximations.
37. 37
Error Definitions – Estimated Error
For such cases, the computation is repeated until
  |ε_a| < ε_s
If this relationship holds, our result is assumed to be within the
prespecified acceptable level ε_s.
It is also convenient to relate these errors to the number of
significant figures in the approximation. It can be shown that if
the following criterion is met, we can be assured that the result
is correct to at least n significant figures:
  ε_s = (0.5 × 10^(2-n)) %
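The repeat-until-tolerance loop can be sketched with an iterative Maclaurin evaluation of e^0.5; the function name and the choice of n = 3 significant figures are illustrative:

```python
import math

def exp_series(x, n_sig=3):
    """Add Maclaurin terms of e**x until |eps_a| < eps_s = 0.5 * 10**(2 - n_sig) percent."""
    eps_s = 0.5 * 10 ** (2 - n_sig)     # stopping tolerance, in percent
    total, term, k = 1.0, 1.0, 0        # the k = 0 term is 1
    while True:
        k += 1
        term *= x / k                   # next term: x**k / k!
        total += term
        eps_a = abs(term / total) * 100 # estimated percent relative error
        if eps_a < eps_s:
            return total, k

approx, n_terms = exp_series(0.5, n_sig=3)
```

The loop stops after a handful of terms, and the result indeed matches e^0.5 ≈ 1.64872 to better than the requested three significant figures.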
40. 40
ROUND-OFF ERRORS
Round-off errors originate from the fact that
computers retain only a fixed number of
significant figures during a calculation.
Numbers such as π cannot be expressed by a
fixed number of significant figures. Therefore,
they cannot be represented exactly by the
computer.
41. 41
Computer Representation of Numbers
You are familiar with the decimal system:
Decimal System: Base = 10 , Digits (0,1,…,9)
Standard Representations:
  312.45 = +(3×10² + 1×10¹ + 2×10⁰ + 4×10⁻¹ + 5×10⁻²)
  (sign; integral part: 312; fraction part: 45)
42. 42
How the (a) decimal (base-10) and the (b) binary (base-2) systems
work. In (b), the binary number 10101101 is equivalent to the
decimal number 173.
43. 43
The representation of the decimal integer -173 on a 16-bit computer
using the signed magnitude method.
45. 45
Normalized Floating Point Representation
Normalized Floating Point Representation:
  ±0.d₁d₂d₃d₄ × 10ⁿ,   d₁ ≠ 0,   n: signed exponent
  (sign, mantissa, signed exponent)
Scientific Notation: exactly one non-zero digit appears
before the decimal point.
Advantage: efficient in representing very small or very
large numbers. The normalized floating-point
representation of -5 is -0.5 × 10¹.
46. 46
Binary System
Binary System: Base = 2, Digits {0,1}
  ±1.f₁f₂f₃f₄ × 2ⁿ
  (sign, mantissa, signed exponent)
Example:
  (1.101)₂ = (1×2⁰ + 1×2⁻¹ + 0×2⁻² + 1×2⁻³)₁₀ = (1.625)₁₀
48. 48
Convert decimal 17.15 to IEEE single format:
Convert decimal 17 to binary: 10001.
Convert decimal 0.15 to the repeating binary fraction 0.0010011001…
Combine integer and fraction to obtain binary 10001.0010011001…
Normalize the binary number to obtain 1.00010010011001… × 2⁴.
Thus M = 00010010011001100110011 (the 23 stored fraction bits) and
E = e + 127 = 4 + 127 = 131 = 10000011.
The number is positive, so S = 0. Align the values for M, E, and
S in the correct fields:
0 10000011 00010010011001100110011
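The hand conversion can be cross-checked against the machine's own IEEE single encoding; a sketch using Python's `struct` module:

```python
import struct

def float32_fields(x):
    """Return the (S, E, M) bit fields of the IEEE single-precision encoding of x."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))   # reinterpret 4 bytes as uint32
    sign = (bits >> 31) & 0x1
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return str(sign), format(exponent, "08b"), format(mantissa, "023b")

S, E, M = float32_fields(17.15)
```

The three fields match the slide's result, confirming that 17.15 is stored only approximately: the repeating fraction is cut off at 23 bits.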
49. 49
Floating-point representation allows both fractions and very
large numbers to be expressed on the computer. However, it
has some disadvantages. For example, floating-point numbers
take up more room and take longer to process than integer
numbers. More significantly, however, their use introduces a
source of error because the mantissa holds only a finite
number of significant figures. Thus, a round-off error is
introduced.
50. 50
Aspects of floating-point representation that have significance
regarding computer round-off errors:
1. There Is a Limited Range of Quantities That May Be
Represented. There are large positive and negative
numbers that cannot be represented; attempts to employ
numbers outside the acceptable range result in what is
called an overflow error. Very small numbers cannot be
represented either, as illustrated by the underflow "hole"
between zero and the first positive number.
2. There Are Only a Finite Number of Quantities That
Can Be Represented within the Range. Obviously,
irrational numbers cannot be represented exactly, and
approximating them introduces errors. The actual
approximation is accomplished in either of two ways:
chopping or rounding.
51. 51
Rounding and Chopping
Chopping: throw away all extra digits.
Suppose that the value of π = 3.14159265358…
is to be stored on a base-10 number system
carrying seven significant figures.
Chopping off the eighth and higher digits gives
  π ≈ 3.141592
  E_t = 0.00000065…
52. 52
Rounding and Chopping
Rounding: replace the number by the nearest
machine number.
Here the last retained digit is rounded up to yield
  π ≈ 3.141593
Such rounding reduces the error to
  E_t = -0.00000035…
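Chopping and rounding to seven significant figures can be sketched in code; the helper names are mine:

```python
import math

def chop(x, sig):
    """Keep the first `sig` significant digits and discard the rest."""
    shift = sig - 1 - math.floor(math.log10(abs(x)))
    return math.floor(x * 10**shift) / 10**shift

def round_sig(x, sig):
    """Round to `sig` significant digits (nearest value)."""
    shift = sig - 1 - math.floor(math.log10(abs(x)))
    return round(x * 10**shift) / 10**shift

pi_chop = chop(math.pi, 7)        # 3.141592
pi_round = round_sig(math.pi, 7)  # 3.141593
err_chop = math.pi - pi_chop      # ~0.00000065
err_round = math.pi - pi_round    # ~-0.00000035
```

Rounding roughly halves the worst-case error of chopping, which is why it is the default in practice.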
53. 53
Significant Digits
Significant digits are those digits that can be
used with confidence.
Single precision: 7 significant digits,
  1.175494… × 10⁻³⁸ to 3.402823… × 10³⁸
Double precision: 15 significant digits,
  2.2250738… × 10⁻³⁰⁸ to 1.7976931… × 10³⁰⁸
55. 55
Motivation
We can easily compute expressions built from additions,
subtractions, multiplications, and divisions (e.g., polynomials).
But how do you compute sin(0.6) or √4.1?
Can we use the definition of the sine (the ratio b/a of two
sides of a right triangle with angle 0.6) to compute sin(0.6)?
Is this a practical way?
56. 56
Remark
In this course, all angles are assumed to
be in radians unless stated otherwise.
57. 57
THE TAYLOR SERIES
Taylor's theorem states that any smooth function
can be approximated by a polynomial.
The Taylor series provides a means to express this
idea mathematically in a form that leads to
practical results.
59. 59
Maclaurin Series
The Maclaurin series is a special case of the Taylor
series with the center of expansion a = 0.
The Maclaurin series expansion of f(x):
  f(x) = f(0) + f′(0) x + [f″(0)/2!] x² + [f‴(0)/3!] x³ + …
If the series converges, we can write:
  f(x) = Σ_{k=0}^{∞} [f⁽ᵏ⁾(0)/k!] xᵏ
60. 60
Maclaurin Series – Example 1
Obtain the Maclaurin series expansion of f(x) = eˣ:
  f(x) = eˣ,    f(0) = 1
  f′(x) = eˣ,   f′(0) = 1
  f″(x) = eˣ,   f″(0) = 1
  f⁽ᵏ⁾(x) = eˣ,  f⁽ᵏ⁾(0) = 1 for all k
  eˣ = 1 + x + x²/2! + x³/3! + … = Σ_{k=0}^{∞} xᵏ/k!
The series converges for |x| < ∞.
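The partial sums of this series can be checked numerically; a short sketch:

```python
import math

def maclaurin_exp(x, n_terms):
    """Partial sum of the Maclaurin series of e**x."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)     # next term: x**(k+1) / (k+1)!
    return total

approx = maclaurin_exp(1.0, 10)   # e, using the first 10 terms
```

Ten terms already give e to about six decimal places, consistent with the fast growth of k! in the denominators.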
62. 62
Maclaurin Series – Example 2
Obtain the Maclaurin series expansion of f(x) = sin(x):
  f(x) = sin(x),    f(0) = 0
  f′(x) = cos(x),   f′(0) = 1
  f″(x) = -sin(x),  f″(0) = 0
  f‴(x) = -cos(x),  f‴(0) = -1
  sin(x) = x - x³/3! + x⁵/5! - x⁷/7! + … = Σ_{k=0}^{∞} (-1)ᵏ x^(2k+1)/(2k+1)!
The series converges for |x| < ∞.
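A matching sketch for the sine series; the evaluation point 0.6 echoes the earlier motivation slide:

```python
import math

def maclaurin_sin(x, n_terms):
    """Partial sum of sin(x) = x - x**3/3! + x**5/5! - ..."""
    total, term = 0.0, x
    for k in range(n_terms):
        total += term
        # Next term: multiply by -x**2 / ((2k+2)(2k+3))
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

approx = maclaurin_sin(0.6, 5)   # sin(0.6) from the first 5 terms
```

Five terms match sin(0.6) to roughly ten decimal places, a far more practical route than the triangle definition.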
64. 64
Maclaurin Series – Example 3
Obtain the Maclaurin series expansion of f(x) = cos(x):
  f(x) = cos(x),    f(0) = 1
  f′(x) = -sin(x),  f′(0) = 0
  f″(x) = -cos(x),  f″(0) = -1
  f‴(x) = sin(x),   f‴(0) = 0
  cos(x) = 1 - x²/2! + x⁴/4! - x⁶/6! + … = Σ_{k=0}^{∞} (-1)ᵏ x^(2k)/(2k)!
The series converges for |x| < ∞.
65. 65
Maclaurin Series – Example 4
Obtain the Maclaurin series expansion of f(x) = 1/(1 - x):
  f(x) = 1/(1 - x),    f(0) = 1
  f′(x) = 1/(1 - x)²,  f′(0) = 1
  f″(x) = 2/(1 - x)³,  f″(0) = 2
  f‴(x) = 6/(1 - x)⁴,  f‴(0) = 6
Maclaurin series expansion:
  1/(1 - x) = 1 + x + x² + x³ + …
The series converges for |x| < 1.
66. 66
Taylor series also provides a means to predict a
function value at one point in terms of the function
value and its derivatives at another point.
70. 70
Example 4 - Remarks
Can we apply the series for x ≥ 1?
How many terms are needed to get a good
approximation?
These questions will be answered using
Taylor's Theorem.
71. 71
Taylor Series – Example 5
Obtain the Taylor series expansion of f(x) = 1/x at a = 1:
  f(x) = 1/x,      f(1) = 1
  f′(x) = -1/x²,   f′(1) = -1
  f″(x) = 2/x³,    f″(1) = 2
  f‴(x) = -6/x⁴,   f‴(1) = -6
Taylor series expansion:
  1/x = 1 - (x - 1) + (x - 1)² - (x - 1)³ + …
72. 72
Taylor Series – Example 6
Obtain the Taylor series expansion of f(x) = ln(x) at a = 1:
  f(x) = ln(x),    f(1) = 0
  f′(x) = 1/x,     f′(1) = 1
  f″(x) = -1/x²,   f″(1) = -1
  f‴(x) = 2/x³,    f‴(1) = 2
Taylor series expansion:
  ln(x) = (x - 1) - (x - 1)²/2 + (x - 1)³/3 - …
73. 73
Convergence of Taylor Series
The Taylor series converges fast (few terms
are needed) when x is near the point of
expansion. If |x-a| is large then more terms
are needed to get a good approximation.
75. 75
Error Term
Suppose that we truncate the Taylor series expansion after the
zero-order term to yield
  f(x_{i+1}) ≅ f(x_i)
The remainder, or error, of this prediction consists of the
infinite series of terms that were truncated:
  R₀ = f′(x_i) h + [f″(x_i)/2!] h² + [f‴(x_i)/3!] h³ + …
where h = x_{i+1} - x_i.
76. 76
Error Term
It is obviously inconvenient to deal with the remainder in this
infinite-series format. One simplification might be to truncate
the remainder itself, as in
  R₀ ≅ f′(x_i) h
77. 77
As in Fig. 4.3, the derivative mean-value theorem states
that if a function f(x) and its first derivative are continuous
over an interval from x_i to x_{i+1}, then there exists at least one
point ξ on the function that has a slope f′(ξ) parallel to the
line joining f(x_i) and f(x_{i+1}).
By invoking this theorem, it is simple to realize that
  R₀ = f′(ξ) h
The higher-order versions are merely a logical extension of
the above equation. The first-order version is
  R₁ = [f″(ξ)/2!] h²
80. 80
Error Term - Example
How large is the error if we replace f(x) = eˣ by the first 4 terms
(n = 3) of its Taylor series expansion at a = 0 when x = 0.2?
  eˣ ≈ 1 + x + x²/2! + x³/3!
  R_n = [f⁽ⁿ⁺¹⁾(ξ)/(n+1)!] (x - a)⁽ⁿ⁺¹⁾,   ξ between 0 and 0.2
  R₃ = [e^ξ/4!] (0.2)⁴ ≤ [e^0.2/4!] (0.2)⁴ ≈ 8.14268 × 10⁻⁵
  (since e^ξ ≤ e^0.2 for ξ ≤ 0.2)
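A numerical check that the actual truncation error stays below the bound e^0.2 (0.2)⁴/4!; a sketch:

```python
import math

x = 0.2
partial = 1 + x + x**2 / math.factorial(2) + x**3 / math.factorial(3)  # first 4 terms, n = 3
actual_error = math.exp(x) - partial
bound = math.exp(0.2) * x**4 / math.factorial(4)   # e^xi <= e^0.2 for xi in [0, 0.2]
```

The actual error (~6.94×10⁻⁵) is indeed below the bound (~8.14×10⁻⁵), illustrating that the remainder formula gives a safe, slightly pessimistic estimate.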
81. 81
Alternative Form of Taylor's Theorem
Let f(x) have derivatives of orders 1, 2, …, (n+1) on an interval
containing x and x + h. Then:
  f(x + h) = Σ_{k=0}^{n} [f⁽ᵏ⁾(x)/k!] hᵏ + R_n   (h: step size)
  R_n = [f⁽ⁿ⁺¹⁾(ξ)/(n+1)!] h⁽ⁿ⁺¹⁾,   where ξ is between x and x + h
83. 83
Taylor's Theorem – Alternative Forms
  f(x) = Σ_{k=0}^{n} [f⁽ᵏ⁾(a)/k!] (x - a)ᵏ + R_n,
  R_n = [f⁽ⁿ⁺¹⁾(ξ)/(n+1)!] (x - a)⁽ⁿ⁺¹⁾,   where ξ is between a and x.

  f(x + h) = Σ_{k=0}^{n} [f⁽ᵏ⁾(x)/k!] hᵏ + R_n,
  R_n = [f⁽ⁿ⁺¹⁾(ξ)/(n+1)!] h⁽ⁿ⁺¹⁾,   where ξ is between x and x + h.
85. 85
Example 7 – Taylor Series
Obtain the Taylor series expansion of f(x) = e^(2x) at a = 0.5:
  f(x) = e^(2x),      f(0.5) = e
  f′(x) = 2 e^(2x),   f′(0.5) = 2e
  f″(x) = 4 e^(2x),   f″(0.5) = 4e
  f⁽ᵏ⁾(x) = 2ᵏ e^(2x),  f⁽ᵏ⁾(0.5) = 2ᵏ e
  e^(2x) = e + 2e(x - 0.5) + [4e/2!](x - 0.5)² + … = Σ_{k=0}^{∞} [2ᵏ e/k!] (x - 0.5)ᵏ
86. 86
Example 7 – Error Term
  f(x) = e^(2x),   f⁽ⁿ⁺¹⁾(ξ) = 2⁽ⁿ⁺¹⁾ e^(2ξ)
  Error = R_n = [f⁽ⁿ⁺¹⁾(ξ)/(n+1)!] (x - 0.5)⁽ⁿ⁺¹⁾
For x in [0.5, 1] (so ξ ≤ 1 and |x - 0.5| ≤ 0.5):
  |Error| ≤ 2⁽ⁿ⁺¹⁾ e² (1 - 0.5)⁽ⁿ⁺¹⁾ / (n+1)! = e²/(n+1)!
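The e²/(n+1)! bound can be compared with the actual error at the far end of the interval, x = 1; a sketch with n = 3:

```python
import math

def taylor_e2x(x, n, a=0.5):
    """Partial sum of e**(2x) about a = 0.5: sum of 2**k * e / k! * (x - a)**k."""
    return sum(2**k * math.e / math.factorial(k) * (x - a)**k for k in range(n + 1))

n = 3
actual = abs(math.exp(2.0) - taylor_e2x(1.0, n))   # error at the worst point x = 1
bound = math.e**2 / math.factorial(n + 1)          # e**2 / (n+1)!
```

With n = 3 the actual error is about 0.14 against a bound of about 0.31; increasing n makes both shrink factorially, as the theorem predicts.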