UNIT IV
The Bisection Method
Introduction
• Root of a function:
• A root of a function f(x) is a value a such that:
• f(a) = 0
Introduction (cont.)
• Example:
Function: f(x) = x² − 4
Roots: x = −2, x = 2
Because:
f(−2) = (−2)² − 4 = 4 − 4 = 0
f(2) = (2)² − 4 = 4 − 4 = 0
A Mathematical Property
• Well-known Mathematical Property:
• If a function f(x) is continuous on the interval [a..b] and sign of f(a) ≠ sign of
f(b), then
• There is a value c ∈ [a..b] such that: f(c) = 0 I.e., there is
a root c in the interval [a..b]
A Mathematical Property (cont.)
• Example:
The Bisection Method
• The Bisection Method is a successive approximation method
that narrows down an interval that contains a root of the
function f(x)
• The Bisection Method is given an initial interval [a..b] that
contains a root (We can use the property sign of f(a) ≠ sign of
f(b) to find such an initial interval)
• The Bisection Method will cut the interval into 2 halves and
check which half interval contains a root of the function
• The Bisection Method will keep cutting the interval in half until
the resulting interval is extremely small
The root is then approximately equal to any value in the final
(very small) interval.
The Bisection Method (cont.)
• Example:
• Suppose the interval [a..b] is as follows:
The Bisection Method (cont.)
• We cut the interval [a..b] in the middle: m = (a+b)/2
The Bisection Method (cont.)
• Because sign of f(m) ≠ sign of f(a), the root lies in the left half, so we proceed with the
search in the new, smaller interval [a..m]:
The Bisection Method (cont.)
We can use this statement to change to the new interval:
b = m;
The Bisection Method
• In the above example, we have changed the end point b to
obtain a smaller interval that still contains a root
In other cases, we may need to change the end point a to
obtain a smaller interval that still contains a root
The Bisection Method (cont.)
• Here is an example where you have to change the end
point a:
• Initial interval [a..b]:
The Bisection Method (cont.)
• After cutting the interval in half, the root is contained in the
right-half, so we have to change the end point a:
The Bisection Method
• Rough description (pseudo code) of the Bisection Method:
Given: interval [a..b] such that: sign of f(a) ≠ sign of f(b)

repeat (until the interval [a..b] is "very small")
{
    m = (a + b) / 2;      // m = midpoint of interval [a..b]

    if ( sign of f(m) ≠ sign of f(b) )
    {
        use interval [m..b] in the next iteration
        (i.e.: replace a with m)
    }
    else
    {
        use interval [a..m] in the next iteration
        (i.e.: replace b with m)
    }
}

Approximate root = (a + b) / 2;   (any point in the final interval [a..b] will do,
                                   because the interval [a..b] is very small)
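The pseudocode above translates almost line for line into a short program. Below is a minimal Python sketch; the function name bisect_root, the tolerance tol, and the iteration cap max_iter are illustrative choices, not part of the original slides.

def bisect_root(f, a, b, tol=1e-10, max_iter=200):
    # Approximate a root of f in [a, b], assuming sign of f(a) != sign of f(b).
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0        # midpoint of the current interval
        if f(m) * f(b) < 0:      # sign of f(m) != sign of f(b): root lies in [m..b]
            a = m                # replace a with m
        else:                    # otherwise the root lies in [a..m]
            b = m                # replace b with m
        if (b - a) < tol:        # interval is "very small": stop
            break
    return (a + b) / 2.0         # any point of the tiny interval approximates the root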
The Bisection Method
• Structure Diagram of the Bisection Algorithm:
The Bisection Method
• Example execution:
• We will use a simple function to illustrate the execution of the Bisection
Method
• Function used: f(x) = x² − 3
Roots: √3 = 1.7320508... and −√3 = −1.7320508...
The Bisection Method (cont.)
• We will use the starting interval [0..4], since it contains a root: sign of f(0) ≠ sign of f(4)
• f(0) = 0² − 3 = −3
• f(4) = 4² − 3 = 13
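Continuing the sketch above (and assuming the illustrative bisect_root helper defined earlier), this example can be run directly:

root = bisect_root(lambda x: x * x - 3, 0.0, 4.0)
print(root)   # 1.7320508..., i.e. the positive root √3 of f(x) = x² − 3 in [0..4]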
Regula-Falsi Method
Type of Algorithm (Equation Solver)
The Regula-Falsi Method (sometimes called the False Position Method) is a method used
to find a numerical estimate of a solution (root) of an equation.
This method attempts to solve an equation of the form f(x)=0. (This is very common in
most numerical analysis applications.) Any equation can be written in this form.
Algorithm Requirements
This algorithm requires a function f(x) and two points a and b for which f(x) is positive for
one of the values and negative for the other. We can write this condition as f(a)f(b)<0.
If the function f(x) is continuous on the interval [a,b] with f(a)f(b)<0, the algorithm will
eventually converge to a solution.
This algorithm cannot be used to find a tangential root, that is, a root where the function is
tangent to the x-axis and keeps the same sign (either positive or negative) on both sides of
the root. For example, f(x) = (x − 3)² has a tangential root at x = 3.
Regula-Falsi Algorithm
The idea for the Regula-Falsi method is to connect the points (a,f(a)) and (b,f(b)) with a
straight line.
Since linear equations are the simplest equations to solve, we find the regula-falsi point
(xrfp), which is the solution of the linear equation connecting the endpoints, i.e. the point
where that line crosses the x-axis.
Look at the sign of f(xrfp):
If sign(f(xrfp)) = 0 then end algorithm
else If sign(f(xrfp)) = sign(f(a)) then set a = xrfp
else set b = xrfp
[Figure: the straight line joining (a, f(a)) and (b, f(b)) crosses the x-axis at xrfp, close to the actual root of f(x).]
Equation of the line:

y = f(a) + [ ( f(b) − f(a) ) / ( b − a ) ] (x − a)

Solving for xrfp (set y = 0):

0 = f(a) + [ ( f(b) − f(a) ) / ( b − a ) ] (xrfp − a)

xrfp − a = − f(a)(b − a) / ( f(b) − f(a) )

xrfp = a − f(a)(b − a) / ( f(b) − f(a) )
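The formula just derived is the heart of the method. The following rough Python sketch puts it together with the sign rules above; the name regula_falsi and the stopping parameters tol and max_iter are illustrative, not from the slides.

def regula_falsi(f, a, b, tol=1e-10, max_iter=200):
    # Approximate a root of f in [a, b], assuming f(a) * f(b) < 0.
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x_old = a
    for _ in range(max_iter):
        x_rfp = a - f(a) * (b - a) / (f(b) - f(a))   # x-intercept of the line through (a, f(a)) and (b, f(b))
        fx = f(x_rfp)
        if fx == 0 or abs(x_rfp - x_old) < tol:      # lucky hit, or x_rfp has stopped changing
            return x_rfp
        if fx * f(a) > 0:                            # sign(f(x_rfp)) = sign(f(a)): set a = x_rfp
            a = x_rfp
        else:                                        # otherwise: set b = x_rfp
            b = x_rfp
        x_old = x_rfp
    return x_rfp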
Example
Let's look for a solution to the equation x³ − 2x − 3 = 0.
We consider the function f(x) = x³ − 2x − 3.
On the interval [0, 2] the function is negative at 0 and positive at 2. This means that a = 0
and b = 2 (i.e. f(0)·f(2) = (−3)(1) = −3 < 0, so we can apply the algorithm).

xrfp = 0 − f(0)(2 − 0) / ( f(2) − f(0) )
     = 0 − (−3)(2) / ( 1 − (−3) )
     = 6/4
     = 3/2

f(xrfp) = f(3/2) = (3/2)³ − 2(3/2) − 3 = 27/8 − 6 = −21/8
This is negative, so we set a = 3/2, keep b the same, and apply the same procedure to the
interval [3/2, 2].

xrfp = 3/2 − f(3/2)(2 − 3/2) / ( f(2) − f(3/2) )
     = 3/2 − (−21/8)(1/2) / ( 1 − (−21/8) )
     = 3/2 + (21/16) / (29/8)
     = 3/2 + 21/58
     = 54/29

f(xrfp) = f(54/29) ≈ −0.267785
This is negative, so we set a = 54/29, keep b the same, and apply the same procedure to the
interval [54/29, 2].
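As a quick check, the two iterations above can be reproduced by applying the xrfp formula directly (a small illustrative Python snippet):

f = lambda x: x**3 - 2*x - 3
a, b = 0.0, 2.0
x1 = a - f(a) * (b - a) / (f(b) - f(a))   # first regula-falsi point: 3/2
a = x1                                    # f(3/2) < 0, so a moves to 3/2
x2 = a - f(a) * (b - a) / (f(b) - f(a))   # second regula-falsi point: 54/29
print(x1, x2, f(x2))                      # 1.5  1.8620689...  -0.26778...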
Stopping Conditions
Aside from lucking out and actually hitting the root, the stopping condition is usually fixed
to be a certain number of iterations, or for the standard Cauchy error in computing the
regula-falsi point (xrfp), i.e. the change between successive values of xrfp, to be no larger
than a prescribed amount (usually denoted ε).
Interpolation
• Estimation of intermediate values between precise
data points. The most common method is polynomial interpolation:
• Although there is one and only one nth-order
polynomial that fits n+1 points, there are a variety of
mathematical formats in which this polynomial can be
expressed:
– The Newton polynomial
– The Lagrange polynomial
f(x) = a0 + a1 x + a2 x² + ... + an x^n
Newton’s Divided-Difference Interpolating
Polynomials
Linear Interpolation
• The simplest form of interpolation, connecting two data
points with a straight line.
• f1(x) designates that this is a first-order interpolating
polynomial.
f1(x) = f(x0) + [ ( f(x1) − f(x0) ) / ( x1 − x0 ) ] (x − x0)      (the linear-interpolation formula)

The term ( f(x1) − f(x0) ) / ( x1 − x0 ) is the slope of the line: a finite-divided-difference
approximation to the 1st derivative.
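As a small illustration of the formula above, here is a compact Python version of first-order interpolation; the function name and the sample points are my own, chosen only for illustration.

def linear_interp(x, x0, fx0, x1, fx1):
    # f1(x) = f(x0) + slope * (x - x0), where slope = (f(x1) - f(x0)) / (x1 - x0)
    return fx0 + (fx1 - fx0) / (x1 - x0) * (x - x0)

# interpolating f(x) = x² between the points (1, 1) and (3, 9):
print(linear_interp(2.0, 1.0, 1.0, 3.0, 9.0))   # 5.0, while the true value is 2² = 4.0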
Quadratic Interpolation
• If three data points are available, the estimate is
improved by introducing some curvature into the line
connecting the points.
• A simple procedure can be used to determine the
values of the coefficients.
f2(x) = b0 + b1 (x − x0) + b2 (x − x0)(x − x1)

where

b0 = f(x0)
b1 = ( f(x1) − f(x0) ) / ( x1 − x0 )
b2 = [ ( f(x2) − f(x1) ) / ( x2 − x1 ) − ( f(x1) − f(x0) ) / ( x1 − x0 ) ] / ( x2 − x0 )
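To make the coefficient formulas concrete, here is a tiny Python sketch using three sample points of f(x) = x² (the data values are illustrative only):

x0, x1, x2 = 1.0, 2.0, 4.0
fx0, fx1, fx2 = 1.0, 4.0, 16.0
b0 = fx0
b1 = (fx1 - fx0) / (x1 - x0)                      # 3.0
b2 = ((fx2 - fx1) / (x2 - x1) - b1) / (x2 - x0)   # 1.0
p = lambda x: b0 + b1 * (x - x0) + b2 * (x - x0) * (x - x1)
print(p(3.0))   # 9.0: the quadratic reproduces x² exactly for these points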
General Form of Newton’s Interpolating Polynomials

fn(x) = f(x0) + (x − x0) f[x1, x0] + (x − x0)(x − x1) f[x2, x1, x0]
        + ... + (x − x0)(x − x1) ... (x − xn-1) f[xn, xn-1, ..., x0]

where the coefficients are

b0 = f(x0)
b1 = f[x1, x0]
b2 = f[x2, x1, x0]
...
bn = f[xn, xn-1, ..., x0]

Bracketed function evaluations are finite divided differences, defined recursively as

f[xi, xj] = ( f(xi) − f(xj) ) / ( xi − xj )
f[xi, xj, xk] = ( f[xi, xj] − f[xj, xk] ) / ( xi − xk )
f[xn, xn-1, ..., x1, x0] = ( f[xn, xn-1, ..., x1] − f[xn-1, xn-2, ..., x0] ) / ( xn − x0 )
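A compact Python sketch of the general form follows; the function names divided_differences and newton_eval are illustrative, not from the slides.

def divided_differences(xs, ys):
    # Return [b0, b1, ..., bn], i.e. f(x0), f[x1, x0], f[x2, x1, x0], ...
    n = len(xs)
    table = list(ys)                      # zeroth-order differences: f(xi)
    coeffs = [table[0]]
    for order in range(1, n):
        for i in range(n - order):
            # table[i] becomes the divided difference of this order starting at xi
            table[i] = (table[i + 1] - table[i]) / (xs[i + order] - xs[i])
        coeffs.append(table[0])
    return coeffs

def newton_eval(xs, coeffs, x):
    # Evaluate fn(x) = b0 + b1 (x - x0) + b2 (x - x0)(x - x1) + ...
    result, product = coeffs[0], 1.0
    for i in range(1, len(coeffs)):
        product *= (x - xs[i - 1])
        result += coeffs[i] * product
    return result

xs, ys = [1.0, 2.0, 4.0], [1.0, 4.0, 16.0]                  # samples of f(x) = x²
print(newton_eval(xs, divided_differences(xs, ys), 3.0))    # 9.0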
Errors of Newton’s Interpolating Polynomials
• Structure of interpolating polynomials is similar to the Taylor
series expansion in the sense that finite divided differences are
added sequentially to capture the higher order derivatives.
• For an nth-order interpolating polynomial, an analogous
relationship for the error is:
• For non-differentiable functions, if an additional point f(xn+1) is
available, an alternative formula can be used that does not
require prior knowledge of the function:
Rn = [ f^(n+1)(ξ) / (n+1)! ] · (x − x0)(x − x1) ... (x − xn)

where ξ lies somewhere in the interval containing the unknown x and the data.

Alternative (derivative-free) form:

Rn = f[xn+1, xn, ..., x0] · (x − x0)(x − x1) ... (x − xn)
Lagrange Interpolating Polynomials
• The Lagrange interpolating polynomial is simply a
reformulation of the Newton polynomial that
avoids the computation of divided differences:

fn(x) = Σ (i = 0 to n) Li(x) f(xi)

where

Li(x) = Π (j = 0 to n, j ≠ i) (x − xj) / (xi − xj)

First-order (two-point) version:

f1(x) = [ (x − x1) / (x0 − x1) ] f(x0) + [ (x − x0) / (x1 − x0) ] f(x1)

Second-order (three-point) version:

f2(x) = [ (x − x1)(x − x2) / ( (x0 − x1)(x0 − x2) ) ] f(x0)
      + [ (x − x0)(x − x2) / ( (x1 − x0)(x1 − x2) ) ] f(x1)
      + [ (x − x0)(x − x1) / ( (x2 − x0)(x2 − x1) ) ] f(x2)

• As with Newton’s method, the Lagrange version has an
estimated error of:

Rn = f[x, xn, xn-1, ..., x0] · (x − x0)(x − x1) ... (x − xn)
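For comparison, here is a direct Python sketch of the Lagrange form above; the function name lagrange_eval and the sample data are illustrative.

def lagrange_eval(xs, ys, x):
    # fn(x) = sum over i of Li(x) * f(xi), with Li(x) the product over j != i of (x - xj) / (xi - xj)
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        total += Li * yi
    return total

print(lagrange_eval([1.0, 2.0, 4.0], [1.0, 4.0, 16.0], 3.0))   # 9.0, matching the Newton form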
Coefficients of an Interpolating
Polynomial
• Although both the Newton and Lagrange
polynomials are well suited for determining
intermediate values between points, they do not
provide a polynomial in conventional form:
• Since n+1 data points are required to determine n+1
coefficients, simultaneous linear systems of equations
can be used to calculate “a”s.
f(x) = a0 + a1 x + a2 x² + ... + an x^n

f(x0) = a0 + a1 x0 + a2 x0² + ... + an x0^n
f(x1) = a0 + a1 x1 + a2 x1² + ... + an x1^n
...
f(xn) = a0 + a1 xn + a2 xn² + ... + an xn^n
Where “x”s are the knowns and “a”s are the unknowns.
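In practice this system can be assembled and solved with a linear-algebra routine. The sketch below uses numpy; the use of numpy and the sample data are assumptions for illustration, not part of the slides.

import numpy as np

xs = np.array([1.0, 2.0, 4.0])        # samples of f(x) = x²
ys = np.array([1.0, 4.0, 16.0])
V = np.vander(xs, increasing=True)    # rows [1, xi, xi²]: the coefficient matrix of the system above
a = np.linalg.solve(V, ys)            # solve for [a0, a1, a2]
print(a)                              # approximately [0, 0, 1], i.e. f(x) = x²

For larger n this coefficient matrix (a Vandermonde matrix) tends to be ill-conditioned, which is one reason the Newton and Lagrange forms are usually preferred for evaluation.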
Spline Interpolation
• There are cases where polynomials can lead to
erroneous results because of round-off error
and overshoot.
• An alternative approach is to apply lower-order
polynomials to subsets of the data points. Such
connecting polynomials are called spline
functions.
NEWTON FORWARD INTERPOLATION ON EQUISPACED POINTS
• Lagrange interpolation has a number of disadvantages:
• The amount of computation required is large
• Interpolation for additional values of x requires the same amount of effort
as the first value (i.e. no part of the previous calculation can be reused)
• When the number of interpolation points is changed
(increased/decreased), the results of the previous computations cannot
be reused
• Error estimation is difficult (or at least may not be convenient)
• Use Newton interpolation instead, which is based on developing a
difference table for a given set of data points; a small sketch of such a
table follows
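To make the idea concrete, here is a minimal Python sketch that builds a forward-difference table for equispaced points; the function name and the sample values are illustrative only.

def forward_difference_table(ys):
    # Each new row holds the differences of the previous one: y, Δy, Δ²y, ...
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

for row in forward_difference_table([1.0, 4.0, 9.0, 16.0]):   # f(x) = x² at x = 1, 2, 3, 4
    print(row)
# [1.0, 4.0, 9.0, 16.0]
# [3.0, 5.0, 7.0]
# [2.0, 2.0]
# [0.0]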
Newton’s Divided Difference Polynomial Method
To illustrate this method, linear and quadratic interpolation are presented first. Then, the
general form of Newton's divided difference polynomial method is presented. To illustrate
the general form, cubic interpolation is shown in the figure.