4. A Mathematical Property
• Well-known Mathematical Property:
• If a function f(x) is continuous on the interval [a..b] and sign of f(a) ≠ sign of
f(b), then
• there is a value c ∈ [a..b] such that f(c) = 0, i.e., there is
a root c in the interval [a..b]
6. The Bisection Method
• The Bisection Method is a successive approximation method
that narrows down an interval that contains a root of the
function f(x)
• The Bisection Method is given an initial interval [a..b] that
contains a root (We can use the property sign of f(a) ≠ sign of
f(b) to find such an initial interval)
• The Bisection Method will cut the interval into 2 halves and
check which half interval contains a root of the function
• The Bisection Method keeps cutting the interval in half until
the resulting interval is extremely small
The root is then approximately equal to any value in the final
(very small) interval.
7. The Bisection Method (cont.)
• Example:
• Suppose the interval [a..b] is as follows:
8. The Bisection Method (cont.)
• We cut the interval [a..b] in the middle: m = (a+b)/2
9. The Bisection Method (cont.)
• Because sign of f(m) ≠ sign of f(a), we proceed with the
search in the new (smaller) interval [a..m]:
10. The Bisection Method (cont.)
We can use this statement to change to the new interval:
b = m;
11. The Bisection Method
• In the above example, we have changed the end point b to
obtain a smaller interval that still contains a root
In other cases, we may need to change the end point a to
obtain a smaller interval that still contains a root
12. The Bisection Method (cont.)
• Here is an example where you have to change the end
point a:
• Initial interval [a..b]:
13. The Bisection Method (cont.)
• After cutting the interval in half, the root is contained in the
right-half, so we have to change the end point a:
14. The Bisection Method
• Rough description (pseudo code) of the Bisection Method:
Given: interval [a..b] such that: sign of f(a) ≠ sign of f(b)
repeat (until the interval [a..b] is "very small")
{
    m = (a + b) / 2;   // m = midpoint of interval [a..b]

    if ( sign of f(m) ≠ sign of f(b) )
    {
        use interval [m..b] in the next iteration
        (i.e.: replace a with m)
    }
    else
    {
        use interval [a..m] in the next iteration
        (i.e.: replace b with m)
    }
}
Approximate root = (a+b)/2; (any point in [a..b] will do
because the interval [a..b] is very small)
17. The Bisection Method
• Example execution:
• We will use a simple function to illustrate the execution of the Bisection
Method
• Function used: f(x) = x^2 − 3
• Roots: √3 = 1.7320508... and −√3 = −1.7320508...
18. The Bisection Method (cont.)
• We will use the starting interval [0..4] since:
The interval [0..4] contains a root because: sign of f(0) ≠
sign of f(4)
• f(0) = 0^2 − 3 = −3
• f(4) = 4^2 − 3 = 13
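The pseudocode on slide 14 and this example can be combined into a short Python sketch (the function name bisection and the tolerance are my own choices, not from the slides):

```python
def bisection(f, a, b, tol=1e-10):
    """Find a root of f in [a..b], given sign of f(a) != sign of f(b)."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) > tol:              # repeat until the interval is "very small"
        m = (a + b) / 2               # midpoint of [a..b]
        if f(m) * f(b) < 0:           # sign of f(m) != sign of f(b)
            a = m                     # use [m..b] in the next iteration
        else:
            b = m                     # use [a..m] in the next iteration
    return (a + b) / 2                # any point in the tiny interval will do

print(bisection(lambda x: x**2 - 3, 0, 4))   # ≈ 1.7320508 = √3
```

Each pass halves the interval, so about 35 iterations shrink [0..4] below the 1e-10 tolerance.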
20. Regula-Falsi Method
Type of Algorithm (Equation Solver)
The Regula-Falsi Method (sometimes called the False Position Method) is a method used
to find a numerical estimate of a root of an equation.
This method attempts to solve an equation of the form f(x)=0. (This is very common in
most numerical analysis applications.) Any equation can be written in this form.
Algorithm Requirements
This algorithm requires a function f(x) and two points a and b for which f(x) is positive for
one of the values and negative for the other. We can write this condition as f(a)f(b)<0.
If the function f(x) is continuous on the interval [a,b] with f(a)f(b)<0, the algorithm will
eventually converge to a solution.
This algorithm cannot be used to find a tangential root, that is, a root where the function is
tangent to the x-axis and is either positive or negative on both sides of the root. For example,
f(x) = (x − 3)^2 has a tangential root at x = 3.
21. Regula-Falsi Algorithm
The idea for the Regula-Falsi method is to connect the points (a,f(a)) and (b,f(b)) with a
straight line.
Since linear equations are the simplest equations to solve, we find the regula-falsi point
(x_rfp), which is the solution of the linear equation connecting the endpoints.
Look at the sign of f(x_rfp):
If f(x_rfp) = 0 then end the algorithm (x_rfp is the root)
else if sign(f(x_rfp)) = sign(f(a)) then set a = x_rfp
else set b = x_rfp
[Figure: the line through (a, f(a)) and (b, f(b)) crosses the x-axis at x_rfp,
close to the actual root of f(x).]

Equation of the line:

    y = f(a) + [(f(b) − f(a)) / (b − a)] (x − a)

Solving for x_rfp (set y = 0):

    0 = f(a) + [(f(b) − f(a)) / (b − a)] (x_rfp − a)

    x_rfp = a − f(a)(b − a) / (f(b) − f(a))
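The algorithm on slide 21 can be sketched in Python (the names, tolerance, and iteration cap are assumptions; stopping conditions are discussed on slide 23):

```python
def regula_falsi(f, a, b, eps=1e-10, max_iter=100):
    """Regula-Falsi: f must be continuous on [a, b] with f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("need f(a)*f(b) < 0")
    x_old = a
    for _ in range(max_iter):
        # x-intercept of the line through (a, f(a)) and (b, f(b))
        x_rfp = a - f(a) * (b - a) / (f(b) - f(a))
        if f(x_rfp) == 0 or abs(x_rfp - x_old) < eps:
            return x_rfp
        if f(x_rfp) * f(a) > 0:       # same sign as f(a): replace a
            a = x_rfp
        else:                         # same sign as f(b): replace b
            b = x_rfp
        x_old = x_rfp
    return x_rfp
```

Unlike bisection, only the endpoint on the same side of the root moves, so one endpoint may stay fixed for the whole run.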
22. Example
Let's look for a solution to the equation x^3 − 2x − 3 = 0.
We consider the function f(x) = x^3 − 2x − 3.
On the interval [0,2] the function is negative at 0 and positive at 2. This means that a = 0
and b = 2 (i.e. f(0)f(2) = (−3)(1) = −3 < 0, so we can apply the algorithm).
    x_rfp = 0 − f(0)(2 − 0) / (f(2) − f(0)) = −(−3)(2) / (1 − (−3)) = 6/4 = 3/2

    f(x_rfp) = f(3/2) = (3/2)^3 − 2(3/2) − 3 = −21/8

This is negative, so we make a = 3/2, keep b the same, and apply the same
step to the interval [3/2, 2]:

    x_rfp = 3/2 − f(3/2)(2 − 3/2) / (f(2) − f(3/2))
          = 3/2 + (21/16) / (29/8) = 3/2 + 21/58 = 54/29

    f(x_rfp) = f(54/29) ≈ −0.267785

This is negative, so we make a = 54/29, keep b the same, and apply the same
step to the interval [54/29, 2].
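The two iterations above can be reproduced exactly with Python's fractions module (a check of the hand computation, not part of the original slides):

```python
from fractions import Fraction

def f(x):
    return x**3 - 2*x - 3

def rfp(a, b):
    # regula-falsi point: x-intercept of the chord from (a, f(a)) to (b, f(b))
    return a - f(a) * (b - a) / (f(b) - f(a))

a, b = Fraction(0), Fraction(2)
x1 = rfp(a, b)
print(x1, f(x1))          # 3/2 -21/8   (negative, so a becomes 3/2)
x2 = rfp(x1, b)
print(x2, float(f(x2)))   # 54/29, ≈ -0.267785  (negative, so a becomes 54/29)
```

Exact rational arithmetic makes it easy to confirm the intermediate values 3/2 and 54/29.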
23. Stopping Conditions
Aside from luckily hitting the root exactly, the stopping condition is usually fixed to be a
certain number of iterations, or for the standard Cauchy error in computing the
Regula-Falsi point (x_rfp), that is, the change between successive iterates, to be no more
than a prescribed amount (usually denoted ε).
24. Interpolation
• Estimation of intermediate values between precise
data points. The most common method is polynomial interpolation.
• Although there is one and only one nth-order
polynomial that fits n+1 points, there are a variety of
mathematical formats in which this polynomial can be
expressed:
– The Newton polynomial
– The Lagrange polynomial
    f(x) = a0 + a1 x + a2 x^2 + … + an x^n
26. Newton’s Divided-Difference Interpolating
Polynomials
Linear Interpolation
• Is the simplest form of interpolation, connecting two data
points with a straight line.
• f1(x) designates that this is a first-order interpolating
polynomial.
    f1(x) = f(x0) + [(f(x1) − f(x0)) / (x1 − x0)] (x − x0)

This is the linear-interpolation formula. The slope
(f(x1) − f(x0)) / (x1 − x0) is a finite divided difference
approximation to the 1st derivative.
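The linear-interpolation formula translates directly to code (a minimal sketch; the example numbers, estimating ln(2) from ln(1) and ln(6), are my own and not from the slides):

```python
def linear_interp(x, x0, f0, x1, f1):
    """First-order interpolating polynomial f1(x) through (x0, f0) and (x1, f1)."""
    return f0 + (f1 - f0) / (x1 - x0) * (x - x0)

# estimate ln(2) from ln(1) = 0 and ln(6) = 1.791759 (true value: 0.693147);
# the wide interval makes the linear estimate poor
print(linear_interp(2, 1, 0.0, 6, 1.791759))
```

A narrower bracketing interval would give a much better estimate, which motivates adding curvature with more points.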
28. Quadratic Interpolation
• If three data points are available, the estimate is
improved by introducing some curvature into the line
connecting the points.
• A simple procedure can be used to determine the
values of the coefficients.
    f2(x) = b0 + b1 (x − x0) + b2 (x − x0)(x − x1)

where:

    b0 = f(x0)

    b1 = (f(x1) − f(x0)) / (x1 − x0)

    b2 = [(f(x2) − f(x1)) / (x2 − x1) − (f(x1) − f(x0)) / (x1 − x0)] / (x2 − x0)
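The coefficients b0, b1, b2 can be computed and the quadratic evaluated as follows (a sketch; the function name is an assumption):

```python
def quadratic_interp(x, xs, fs):
    """Second-order Newton polynomial f2(x) through three points (xs[i], fs[i])."""
    x0, x1, x2 = xs
    b0 = fs[0]                                            # b0 = f(x0)
    b1 = (fs[1] - fs[0]) / (x1 - x0)                      # first divided difference
    b2 = ((fs[2] - fs[1]) / (x2 - x1) - b1) / (x2 - x0)   # second divided difference
    return b0 + b1 * (x - x0) + b2 * (x - x0) * (x - x1)
```

Through the three points (0, 0), (1, 1), (3, 9) taken from y = x^2, the quadratic reproduces the parabola exactly, so quadratic_interp(2, [0, 1, 3], [0, 1, 9]) returns 4.0.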
29. General Form of Newton’s Interpolating Polynomials
    fn(x) = f(x0) + (x − x0) f[x1, x0] + (x − x0)(x − x1) f[x2, x1, x0]
            + … + (x − x0)(x − x1)…(x − xn−1) f[xn, xn−1, …, x0]

where:

    b0 = f(x0)
    b1 = f[x1, x0]
    b2 = f[x2, x1, x0]
    …
    bn = f[xn, xn−1, …, x0]

The bracketed function evaluations are finite divided differences,
defined recursively:

    f[xi, xj] = (f(xi) − f(xj)) / (xi − xj)

    f[xi, xj, xk] = (f[xi, xj] − f[xj, xk]) / (xi − xk)

    f[xn, xn−1, …, x0] = (f[xn, xn−1, …, x1] − f[xn−1, xn−2, …, x0]) / (xn − x0)
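The recursive definition is usually implemented as a divided-difference table; here is a compact Python sketch (function names assumed):

```python
def divided_differences(xs, fs):
    """Return [b0, b1, ..., bn] where bk = f[xk, ..., x0] (top edge of the table)."""
    coef = list(fs)
    n = len(xs)
    for j in range(1, n):                      # j-th order differences
        for i in range(n - 1, j - 1, -1):      # update in place, right to left
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_poly(x, xs, coef):
    """Evaluate fn(x) = b0 + b1 (x - x0) + b2 (x - x0)(x - x1) + ..."""
    result, prod = 0.0, 1.0
    for b, xi in zip(coef, xs):
        result += b * prod
        prod *= (x - xi)
    return result
```

Adding a new data point only appends one more coefficient; the existing table entries are reused.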
30. Errors of Newton’s Interpolating Polynomials
• Structure of interpolating polynomials is similar to the Taylor
series expansion in the sense that finite divided differences are
added sequentially to capture the higher order derivatives.
• For an nth-order interpolating polynomial, an analogous
relationship for the error is:
• For non-differentiable functions, if an additional point f(xn+1) is
available, an alternative formula can be used that does not
require prior knowledge of the function:
    Rn = [f^(n+1)(ξ) / (n + 1)!] (x − x0)(x − x1)…(x − xn)

where ξ is somewhere in the interval containing the unknown x and the data.

Alternative formula (using the additional point xn+1):

    Rn ≈ f[xn+1, xn, …, x1, x0] (x − x0)(x − x1)…(x − xn)
31. Lagrange Interpolating Polynomials
• The Lagrange interpolating polynomial is simply a
reformulation of Newton’s polynomial that
avoids the computation of divided differences:
    fn(x) = Σ_{i=0..n} Li(x) f(xi)

where:

    Li(x) = Π_{j=0..n, j≠i} (x − xj) / (xi − xj)
32. Lagrange Interpolating Polynomials (cont.)
• First-order (n = 1) version:

    f1(x) = [(x − x1) / (x0 − x1)] f(x0) + [(x − x0) / (x1 − x0)] f(x1)

• Second-order (n = 2) version:

    f2(x) = [(x − x1)(x − x2) / ((x0 − x1)(x0 − x2))] f(x0)
          + [(x − x0)(x − x2) / ((x1 − x0)(x1 − x2))] f(x1)
          + [(x − x0)(x − x1) / ((x2 − x0)(x2 − x1))] f(x2)

• As with Newton’s method, the Lagrange version has an
estimated error of:

    Rn = f[x, xn, xn−1, …, x0] Π_{i=0..n} (x − xi)
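The sum-of-products formula on slide 31 is easy to implement directly (a sketch; the name lagrange is an assumption):

```python
def lagrange(x, xs, fs):
    """fn(x) = sum_i Li(x) f(xi), with Li(x) = prod_{j != i} (x - xj)/(xi - xj)."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        L = 1.0                           # build Li(x)
        for j, xj in enumerate(xs):
            if j != i:
                L *= (x - xj) / (xi - xj)
        total += L * fi
    return total
```

It produces the same values as the Newton form, since both represent the unique interpolating polynomial through the given points.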
34. Coefficients of an Interpolating
Polynomial
• Although both the Newton and Lagrange
polynomials are well suited for determining
intermediate values between points, they do not
provide a polynomial in conventional form:
• Since n+1 data points are required to determine n+1
coefficients, a system of simultaneous linear equations
can be used to calculate the coefficients a0, a1, …, an.
    f(x) = a0 + a1 x + a2 x^2 + … + an x^n
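The coefficients can be obtained by solving this (Vandermonde) linear system directly; a small pure-Python sketch using Gaussian elimination (this system is numerically ill-conditioned for large n, which is one reason the Newton and Lagrange forms are preferred in practice):

```python
def poly_coefficients(xs, fs):
    """Solve sum_j aj * xi^j = f(xi) for [a0, ..., an] by Gaussian elimination."""
    n = len(xs)
    A = [[xi**j for j in range(n)] + [fi] for xi, fi in zip(xs, fs)]  # [V | f]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))  # partial pivoting
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= factor * A[col][c]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):      # back substitution
        a[r] = (A[r][n] - sum(A[r][c] * a[c] for c in range(r + 1, n))) / A[r][r]
    return a
```

For the points (0, 1), (1, 3), (2, 7), which lie on y = x^2 + x + 1, this returns coefficients close to [1.0, 1.0, 1.0].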
37. Spline Interpolation
• There are cases where polynomials can lead to
erroneous results because of round off error
and overshoot.
• Alternative approach is to apply lower-order
polynomials to subsets of data points. Such
connecting polynomials are called spline
functions.
42. NEWTON FORWARD INTERPOLATION ON EQUISPACED POINTS
• Lagrange Interpolation has a number of disadvantages:
• The amount of computation required is large
• Interpolation for additional values of x requires the same amount of effort
as the first value (i.e. no part of the previous calculation can be reused)
• When the number of interpolation points is changed
(increased/decreased), the results of the previous computations cannot
be used
• Error estimation is difficult (or at least may not be convenient)
• Use Newton Interpolation instead, which is based on developing
difference tables for a given set of data points
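A difference table for equispaced points, and the Newton forward formula that reads its top edge, can be sketched as follows (names and the sample data are assumptions):

```python
def forward_difference_table(fs):
    """Row k of the table holds the k-th forward differences of the data."""
    table = [list(fs)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def newton_forward(x, x0, h, fs):
    """Newton forward interpolation at x for equispaced points x0, x0 + h, ..."""
    diffs = forward_difference_table(fs)
    s = (x - x0) / h
    result, term = 0.0, 1.0
    for k in range(len(fs)):
        result += diffs[k][0] * term      # k-th term: C(s, k) * (k-th difference)
        term *= (s - k) / (k + 1)
    return result
```

Adding a data point only appends a new diagonal to the table, so earlier work is reused, unlike the Lagrange form.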
44. Newton’s Divided Difference Polynomial Method
To illustrate this method, linear and quadratic interpolation are presented first.
Then, the general form of Newton’s divided difference polynomial method is
presented. To illustrate the general form, cubic interpolation is shown in Figure