The document discusses Lagrange interpolation and divided differences. It explains that the Lagrange interpolation polynomial can be written in terms of divided differences, where the coefficients are divided differences of the function values. Divided differences are defined recursively, and a pattern is identified to write them in terms of the function values at nodes. An example divided difference table is given for a set of data points.
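The recursive pattern for divided differences can be sketched in a few lines of Python (the function name and the sample data are illustrative, not from the slides): each row of the table is built from differences of the row above, divided by the spread of the corresponding nodes.

```python
def divided_differences(xs, ys):
    """Return the full divided-difference table: row 0 holds f[x_i],
    row k holds all k-th order divided differences."""
    n = len(xs)
    table = [list(ys)]
    for k in range(1, n):
        prev = table[-1]
        row = [(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
               for i in range(n - k)]
        table.append(row)
    return table

# Example: f(x) = x^2 sampled at 0, 1, 3; the second divided difference is 1
table = divided_differences([0.0, 1.0, 3.0], [0.0, 1.0, 9.0])
```

The top row of the table, read left to right, gives exactly the coefficients of the Newton form of the interpolating polynomial.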
This document discusses Newton's forward and backward difference interpolation formulas for equally spaced data points. It provides the formulations for calculating the forward and backward differences up to the kth order. For equally spaced points, the forward difference formula approximates a function f(x) using its kth forward difference at the initial point x0. Similarly, the backward difference formula approximates f(x) using its kth backward difference at x0. The document includes an example problem of using these formulas to estimate the Bessel function and exercises involving interpolation of the gamma function and exponential function.
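For equally spaced points the kth forward difference at x0 is obtained by repeatedly differencing the table of function values; a minimal Python sketch (names and data are illustrative):

```python
def forward_differences(ys):
    """Successive forward differences Δ^k f at the first node,
    for equally spaced data."""
    diffs = [ys[0]]          # Δ^0 f0 is just f0
    row = list(ys)
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[0])  # leading entry of each row is Δ^k f0
    return diffs

# f(x) = x^2 at x = 0, 1, 2, 3: Δf0 = 1, Δ²f0 = 2, Δ³f0 = 0
d = forward_differences([0, 1, 4, 9])
```

That the third difference of a quadratic vanishes illustrates why an nth-degree polynomial is determined by differences up to order n.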
This document discusses numerical integration and interpolation formulas. It begins by explaining the general formula for numerical integration using equidistant values of a function f(x) between bounds a and b. It then derives Trapezoidal, Simpson's, and Weddle's rules by putting different values for n in the general formula. The document also discusses Newton's forward and backward interpolation formulas, Lagrange interpolation formula, and provides examples of their application. It concludes by comparing Lagrange and Newton interpolation and discussing uses of interpolation in computer science and engineering fields.
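The trapezoidal and Simpson rules mentioned here can be sketched directly from their standard weight patterns (a minimal Python illustration; the test function is an assumption, not from the slides):

```python
def trapezoidal(f, a, b, n):
    """Trapezoidal rule on n equal subintervals of [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Simpson's 1/3 rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return h * s / 3

# Simpson's rule is exact for cubics: ∫_0^1 x^3 dx = 1/4
approx = simpson(lambda x: x ** 3, 0.0, 1.0, 4)
```

Weddle's rule follows the same template with a different weight pattern over blocks of six subintervals.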
Stirling's interpolation formula (not to be confused with Stirling's approximation of factorials) is derived as the average of the Gauss forward and backward central difference interpolation formulae. It is most accurate when -1/4 < p < 1/4, where p = (x - x0)/h. Its leading terms are f(x0 + ph) ≈ f0 + p(Δf0 + Δf-1)/2 + (p^2/2!)Δ^2 f-1 + ..., where Δ^k denotes the kth forward difference.
Gauss Forward And Backward Central Difference Interpolation Formula, by Deep Dalsania
This PPT covers the topic Gauss Forward and Backward Central Difference Interpolation Formula from the subject Numerical and Statistical Methods for Computer Engineering.
The document introduces Euler's method for numerically solving ordinary differential equations. It provides the formulation of Euler's method as a recurrence relation and gives examples of applying the method to solve various initial value problems by discretizing the interval and time steps. Euler's method approximates the slope of the tangent line at each step to iteratively calculate subsequent y-values.
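The recurrence described here can be sketched in a few lines of Python (the test problem y' = y, y(0) = 1 is an illustrative assumption, not from the slides):

```python
def euler(f, x0, y0, h, steps):
    """Euler's method for y' = f(x, y): y_{n+1} = y_n + h * f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)  # slope of the tangent line at (x, y)
        x += h
    return y

# y' = y, y(0) = 1: ten steps of h = 0.1 give (1.1)^10 ≈ 2.5937,
# a rough approximation to e ≈ 2.7183 at x = 1
y1 = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

The visible gap between 2.59 and e shows the first-order accuracy of the method: halving h roughly halves the error.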
Newton divided difference interpolation, by VISHAL DONGA
This document presents Newton's divided difference polynomial method of interpolation. It defines interpolation as finding the value of 'y' at an unspecified value of 'x' given a set of (x,y) data points. Newton's method uses divided differences to determine the coefficients of a polynomial that can be used to interpolate and estimate y-values between the given data points. The document includes an example of applying Newton's method to find the interpolating polynomial and estimate an unknown y-value for a given set of 5 (x,y) data points.
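Once the divided-difference coefficients are known, the Newton-form polynomial can be evaluated efficiently by Horner-style nesting; a minimal Python sketch (function name and data are illustrative):

```python
def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form
    f[x0] + f[x0,x1](x - x0) + f[x0,x1,x2](x - x0)(x - x1) + ...
    by nested multiplication, innermost coefficient first."""
    result = coeffs[-1]
    for c, xi in zip(coeffs[-2::-1], xs[-2::-1]):
        result = result * (x - xi) + c
    return result

# Coefficients 0, 1, 1 on nodes 0, 1, 3 give 0 + 1·(x-0) + 1·(x-0)(x-1) = x^2,
# so the interpolant reproduces f(2) = 4
value = newton_eval([0.0, 1.0, 3.0], [0.0, 1.0, 1.0], 2.0)
```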
This document discusses topics in partial differentiation including:
1) The geometrical meaning of partial derivatives as the slope of the tangent line to a surface.
2) Finding the equation of the tangent plane and normal line to a surface.
3) Taylor's theorem and Maclaurin's theorem for functions with two variables, which can be used to approximate functions and calculate errors.
The document discusses Lagrange interpolation, which involves constructing a polynomial that passes through a set of known data points. Specifically, it describes:
- The interpolation problem of predicting an unknown value (fI) at a point (xI) given known values (fi) at nodes (xi)
- How Lagrange interpolation polynomials are defined using basis polynomials (Ln,k) such that each basis polynomial is 1 at its node and 0 at other nodes
- An example of constructing a 3rd degree Lagrange interpolation polynomial to interpolate an unknown value f(3) using 4 known data points
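The basis-polynomial construction above can be sketched directly in Python (a minimal illustration; the four sample nodes are an assumption, not the deck's data set):

```python
def lagrange_interpolate(xs, fs, x):
    """Evaluate sum_k f_k * L_{n,k}(x), where each basis polynomial
    L_{n,k} is 1 at its own node x_k and 0 at every other node."""
    total = 0.0
    for k, (xk, fk) in enumerate(zip(xs, fs)):
        lk = 1.0
        for j, xj in enumerate(xs):
            if j != k:
                lk *= (x - xj) / (xk - xj)
        total += fk * lk
    return total

# Four nodes of f(x) = x^2; the degree-3 interpolant reproduces f(3) = 9
fI = lagrange_interpolate([0.0, 1.0, 2.0, 4.0], [0.0, 1.0, 4.0, 16.0], 3.0)
```

Because the interpolant through n + 1 points is unique, it recovers any polynomial of degree at most n exactly, which is why f(3) = 9 comes out to machine precision here.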
The document provides an example of using the substitution method to evaluate the indefinite integral ∫(x^2 + 3)^3 · 4x dx. It introduces the substitution u = x^2 + 3 (so du = 2x dx), which allows the integral to be rewritten as ∫2u^3 du and then evaluated as u^4/2 = (x^2 + 3)^4/2 + C. The solution is compared to directly integrating the expanded polynomial. The document outlines the theory and notation of substitution for indefinite integrals.
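The substitution result can be sanity-checked numerically: the derivative of F(x) = (x^2 + 3)^4 / 2 should reproduce the integrand 4x(x^2 + 3)^3. A minimal sketch using a central difference (the sample point 1.3 is arbitrary):

```python
def integrand(x):
    return 4 * x * (x ** 2 + 3) ** 3

def antiderivative(x):
    return (x ** 2 + 3) ** 4 / 2

# Central-difference estimate of F'(x) at a sample point
h = 1e-6
x = 1.3
numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
```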
This document provides information on several multivariable calculus topics:
1) Finding maxima and minima of functions of two variables using partial derivatives and the second derivative test.
2) Finding the tangent plane and normal line to a surface.
3) Taylor series expansions for functions of two variables.
4) Standard expansions for common functions like e^x, cosh(x), and tanh(x) using Maclaurin series.
5) Linearizing functions around a point using the tangent plane approximation.
6) Lagrange's method of undetermined multipliers for finding extrema with constraints.
SERIES SOLUTION OF ORDINARY DIFFERENTIAL EQUATION, by Kavin Raval
This document discusses methods for solving ordinary differential equations (ODEs) using power series solutions and the Frobenius method. The power series method assumes solutions of the form of a power series centered at an ordinary point. The Frobenius method extends this to regular singular points by assuming solutions of the form of a power series multiplied by (x-x0)^r, where r is determined from the indicial equation. The document outlines the steps for both methods, which involve substituting the assumed series into the ODE and equating coefficients of like powers of (x-x0).
Computer Oriented Numerical Analysis
What is interpolation?
Many times, data is given only at discrete points such as (x0, y0), (x1, y1), ..., (xn, yn).
How, then, does one find the value of y at any other value of x?
A continuous function f(x) may be used to represent the data values, with f(x) passing through the given points (Figure 1). The value of y at any other value of x can then be read from f(x).
This is called interpolation.
Newton’s Divided Difference Formula:
To illustrate this method, linear and quadratic interpolation is presented first.
Then, the general form of Newton’s divided difference polynomial method is presented.
The document discusses partial differentiation and its applications. It covers functions of two variables, first and second partial derivatives, and applications including the Cobb-Douglas production function and finding marginal productivity from a production function. Examples are provided to demonstrate calculating partial derivatives of various functions and applying partial derivatives in contexts like production analysis.
This document discusses the fixed point iteration method for solving nonlinear equations numerically. It begins with an overview of the method, explaining that it involves rewriting equations in the form x = g(x) and then iteratively calculating x_{n+1} = g(x_n) until convergence. The document then provides an example of using the method to solve the equation x^3 + x^2 - 1 = 0. It shows rewriting the equation, choosing an initial guess, iteratively calculating the next value of x, and checking for convergence. The document concludes by explaining how to implement the fixed point iteration method numerically using loops in code.
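The iteration loop described here can be sketched in Python. For x^3 + x^2 - 1 = 0 one valid rearrangement is x = 1/sqrt(x + 1) (an illustrative choice; the slides' own rearrangement may differ), which contracts near the root and so converges:

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# x^3 + x^2 - 1 = 0  <=>  x^2 (x + 1) = 1  <=>  x = 1 / sqrt(x + 1)
root = fixed_point(lambda x: 1.0 / math.sqrt(x + 1.0), 0.75)
```

Convergence depends on |g'(x)| < 1 near the root; a rearrangement that fails this test (e.g. x = 1 - x^3 - x^2 + x) can diverge from the same starting point.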
The document provides an introduction to partial differential equations (PDEs). Some key points:
- PDEs involve functions of two or more independent variables, and arise in physics/engineering problems.
- PDEs contain partial derivatives with respect to two or more independent variables. Examples of common PDEs are given, including the Laplace, wave, and heat equations.
- The order of a PDE is defined as the order of the highest derivative. Methods for solving PDEs through direct integration and using Lagrange's method are briefly outlined.
This document discusses several numerical analysis methods for finding roots of equations or solving systems of equations. It describes the bisection method for finding roots of continuous functions, the method of false positions for approximating roots between two values with opposite signs of a function, Gauss elimination for transforming a system of equations into triangular form, Gauss-Jordan method which further eliminates variables in equations below, and iterative methods which find solutions through successive approximations rather than direct computation.
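Of the methods listed, Gauss elimination is the most mechanical to sketch: eliminate below the diagonal column by column, then back-substitute. A minimal Python illustration with partial pivoting added for stability (the pivoting and the sample system are assumptions, not necessarily what the slides show):

```python
def gauss_eliminate(A, b):
    """Solve A x = b by forward elimination with partial pivoting,
    then back substitution."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        # swap in the row with the largest pivot in this column
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        known = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - known) / M[r][r]
    return x

# 2x + y = 5 and x + 3y = 10 give x = 1, y = 3
sol = gauss_eliminate([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```

The Gauss-Jordan variant continues the elimination above the diagonal as well, making back substitution unnecessary.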
The document discusses Runge-Kutta methods, a family of numerical methods for solving differential equations. It provides examples of applying the second-order and fourth-order Runge-Kutta methods to solve differential equations. The second-order method uses slope estimates at the start and middle of each interval to compute the next value, while the classical fourth-order method combines four slope estimates (one at the start, two at the midpoint, and one at the end of each interval) for a more accurate result. The document also illustrates Heun's method and the second- and fourth-order Runge-Kutta methods through worked examples.
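A single step of the classical fourth-order method can be sketched directly from its standard weights (a minimal Python illustration; the test problem y' = y is an assumption, not from the slides):

```python
def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)                        # slope at the start
    k2 = f(x + h / 2, y + h * k1 / 2)   # first midpoint slope
    k3 = f(x + h / 2, y + h * k2 / 2)   # second midpoint slope
    k4 = f(x + h, y + h * k3)           # slope at the end
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# y' = y, y(0) = 1: ten steps of h = 0.1 approximate e ≈ 2.71828 at x = 1
y = 1.0
for i in range(10):
    y = rk4_step(lambda x, y: y, i * 0.1, y, 0.1)
```

Compare this with Euler's method on the same problem: RK4 lands within about 1e-6 of e with the same step size where Euler is off by roughly 0.12.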
This document discusses Joseph-Louis Lagrange and interpolation. It provides:
1) A brief biography of Joseph-Louis Lagrange, an Italian-born mathematician who made significant contributions to calculus and probability.
2) A definition of interpolation as producing a function that matches given data points exactly and can be used to approximate values between points.
3) An explanation of Lagrange's interpolation formula for finding a polynomial that fits a set of data points, including an example of applying the formula.
This document defines and provides examples of metric spaces. It begins by introducing metrics as distance functions that satisfy certain properties like non-negativity and the triangle inequality. Examples of metric spaces given include the real numbers under the usual distance, the complex numbers, and the plane under various distance metrics like the Euclidean, taxi cab, and maximum metrics. It is noted that some functions like the minimum function are not valid metrics as they fail to satisfy all the required properties.
1) The document discusses derivatives as rates of change, using the example of a stone thrown straight up.
2) It is found that the stone will stay in the air for 6 seconds, reaching its maximum height of 144 feet after 3 seconds.
3) The derivative of the height function D(t) represents the instantaneous rate of change of height, or speed, at each time t. This rate varies throughout the stone's trajectory.
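The stated figures are consistent with a height function D(t) = 96t - 16t^2, i.e. an initial upward velocity of 96 ft/s (this initial velocity is an assumption inferred from the numbers, not stated in the summary); a quick check:

```python
def height(t):
    """Height in feet, assuming D(t) = 96t - 16t^2."""
    return 96 * t - 16 * t ** 2

def speed(t):
    """Derivative D'(t) = 96 - 32t: instantaneous rate of change of height."""
    return 96 - 32 * t

peak_height = height(3)    # 144 feet: speed(3) = 0, so t = 3 is the maximum
landing = height(6)        # 0 feet: the stone is back on the ground at t = 6
```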
The document discusses the bisection method for finding roots of equations. It begins by outlining the basis of the bisection method, which is that if a continuous function changes sign between two points, there is a root between those points. It then provides the step-by-step algorithm for implementing the bisection method to iteratively find a root. An example application to finding the resistance of a thermistor at a given temperature is also included. The document concludes by discussing the advantages and drawbacks of the bisection method.
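The step-by-step algorithm reduces to a short loop: keep the half of the bracket where the sign change survives. A minimal Python sketch (the sample equation x^2 - 2 = 0 is illustrative, not the thermistor example from the slides):

```python
def bisect(f, lo, hi, tol=1e-10):
    """Bisection: f must change sign on [lo, hi]."""
    assert f(lo) * f(hi) < 0, "no sign change on the bracket"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # root lies in the left half
            hi = mid
        else:                     # root lies in the right half
            lo = mid
    return (lo + hi) / 2

# Root of x^2 - 2 between 1 and 2 is sqrt(2) ≈ 1.41421356
r = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Each iteration halves the bracket, so the method is slow but guaranteed, which matches the advantages and drawbacks the document discusses.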
The document discusses linear partial differential equations (PDEs) with constant coefficients. It defines such PDEs and provides examples. It describes how to find the general solution of homogeneous linear PDEs with constant coefficients by finding the roots of the auxiliary equation. The general solution consists of the complementary function plus a particular integral. Methods for finding the particular integral when the right side consists of powers of x and y are also presented.
This document discusses various methods of interpolation and numerical differentiation using divided differences and Newton's formulas. It introduces Lagrange interpolation for both equal and unequal intervals. Inverse interpolation and Newton's divided difference interpolation are also covered. Forward and backward difference formulas are presented for interpolation with equal intervals. Numerical differentiation can be performed by taking derivatives of the interpolation polynomial or using forward difference formulas to estimate derivatives at the data points.
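The simplest of the differentiation formulas mentioned, the first-order forward difference, can be sketched in two lines (the test function sin(x) and step size are illustrative assumptions):

```python
import math

def forward_derivative(f, x, h):
    """First-derivative estimate f'(x) ≈ (f(x + h) - f(x)) / h,
    the leading term of the forward-difference formula."""
    return (f(x + h) - f(x)) / h

# d/dx sin(x) at x = 0 is cos(0) = 1; the estimate carries O(h) error
d = forward_derivative(math.sin, 0.0, 1e-5)
```

Higher-order estimates follow by keeping more terms of the forward-difference expansion, exactly as when differentiating the interpolation polynomial.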
This document provides an introduction to differential equations. It defines differential equations as equations containing an unknown function and its derivatives. It discusses ordinary differential equations which contain one independent variable and partial differential equations which can contain multiple independent variables. The order of a differential equation refers to the order of the highest derivative term. The degree of a differential equation is the power of the highest order derivative term. Linear differential equations have dependent variables and derivatives that are of degree one and have coefficients that do not depend on the dependent variable. Several examples of different types of differential equations are provided.
GATE Engineering Maths: Limit, Continuity and Differentiability, by ParthDave57
This document provides an overview of key concepts in calculus including functions, limits, continuity, and differentiation. It defines a function as a relationship where each input has a single output. Limits describe the behavior of a function as the input value approaches a number. A function is continuous if its limit equals the function value. A function is differentiable at a point if the limit of its difference quotient exists, with the left and right derivatives needing to be equal. Examples are provided to illustrate these fundamental calculus topics.
- A differential equation involves an independent variable, dependent variable, and derivatives of the dependent variable with respect to the independent variable.
- The order of a differential equation is the order of the highest derivative, and the degree is the exponent of the highest order derivative.
- Linear differential equations involve the dependent variable and its derivatives only to the first power. Non-linear equations do not meet this criterion.
- The general solution of a differential equation contains as many arbitrary constants as the order of the equation. A particular solution results from assigning values to the arbitrary constants.
- Differential equations can be solved through methods like variable separation, inspection of reducible forms, and finding homogeneous or linear representations.
1) The document reviews concepts from probability and statistics including discrete and continuous random variables, their distributions (e.g. binomial, Poisson, normal), and multivariate distributions.
2) It then discusses key properties of multivariate normal distributions including their probability density function and how marginal and conditional distributions can be derived from the joint distribution.
3) Concepts like independence, mean vectors, covariance matrices, and their implications are also covered as they relate to multivariate normal distributions.
This document discusses bivariate transformations and calculating probabilities of transformed random variables. It introduces calculating the probability that the maximum and minimum of iid random variables fall in a given range using the distribution function technique. This involves finding the probability that all variables are individually greater than some value a to find the probability the minimum is greater than a. Similarly, the probability the maximum is less than a involves finding the probability all variables are individually less than a. It also discusses finding the joint distribution of the maximum and minimum using a double integral over the joint density function.
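The distribution function technique described here gives, for n iid Uniform(0, 1) variables, P(max ≤ a) = a^n and P(min > a) = (1 - a)^n. A Monte Carlo sketch checking both identities (the uniform model, sample sizes, and seed are illustrative assumptions):

```python
import random

def estimate_max_min_probs(n, a, trials=100_000, seed=0):
    """Monte Carlo estimates of P(max <= a) and P(min > a)
    for n iid Uniform(0, 1) random variables."""
    rng = random.Random(seed)
    max_hits = min_hits = 0
    for _ in range(trials):
        sample = [rng.random() for _ in range(n)]
        if max(sample) <= a:   # every variable is <= a
            max_hits += 1
        if min(sample) > a:    # every variable is > a
            min_hits += 1
    return max_hits / trials, min_hits / trials

# n = 3, a = 0.5: both probabilities equal 0.5^3 = 0.125
p_max, p_min = estimate_max_min_probs(3, 0.5)
```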
This document provides an overview of Newton's divided difference interpolation method. It begins by stating that this method can work with unevenly spaced x-values to determine the y-value at any x. It then shows how to construct a divided difference table using the given x and y values. The document explains that Newton's divided difference interpolation formula can be used to determine f(x) at a given value of x by substituting values from the table. Examples are provided to demonstrate finding f(x) at different x values. Applications of the method such as interpolation, curve fitting, and solving differential equations are also discussed.
11. A focus on a common fixed point theorem using weakly compatible mappings, by Alexander Decker
The document presents a common fixed point theorem that generalizes an earlier theorem by Bijendra Singh and M.S. Chauhan. It replaces the conditions of compatibility and completeness with weaker conditions of weakly compatible mappings and an associated convergent sequence. The theorem proves that if self-maps A, B, S, and T of a metric space satisfy certain conditions, including (1) A(X) ⊆ T(X) and B(X) ⊆ S(X), (2) the pairs (A,S) and (B,T) are weakly compatible, and (3) the associated sequence converges, then the maps have a unique common fixed point. An example is given where the associated sequence converges even though the space is not complete.
A focus on a common fixed point theorem using weakly compatible mappingsAlexander Decker
The document presents a theorem that generalizes an existing fixed point theorem using weaker conditions. Specifically, it replaces the conditions of compatibility and completeness with weakly compatible mappings and an associated convergent sequence. The theorem proves that if four self-maps satisfy certain conditions, including being weakly compatible and having an associated sequence that converges, then the maps have a unique common fixed point. The conditions are shown to be weaker using an example where the associated sequence converges even though the space is not complete.
The document discusses interpolation, which involves using a function to approximate values between known data points. It provides examples of Lagrange interpolation, which finds a polynomial passing through all data points, and Newton's interpolation, which uses divided differences to determine coefficients for approximating between points. The examples demonstrate constructing Lagrange and Newton interpolation polynomials using given data sets.
The document discusses partitions, Riemann sums, and the definite integral. It begins by defining partitions of an interval [a,b] and Riemann sums with respect to those partitions. Examples are given of partitions and calculating Riemann sums. The definite integral is then defined as the limit of Riemann sums as the partition size approaches zero. Several properties of definite integrals are stated, including linearity and the Fundamental Theorems of Calculus. Examples are provided of evaluating definite integrals using these properties.
First principle, power rule, derivative of constant term, product rule, quotient rule, chain rule, derivatives of trigonometric functions and their inverses, derivatives of exponential functions and natural logarithmic functions, implicit differentiation, parametric differentiation, L'Hopital's rule
This document discusses different interpolation methods:
- Interpolation finds values of a function between known x-values where the function values are given.
- Newton's forward and backward interpolation formulas are presented along with examples.
- Newton's divided difference interpolation uses a formula involving differences to find interpolating polynomials.
- Langrange's interpolation formula expresses the interpolating polynomial as a linear combination of basis polynomials defined in terms of the x-values. An example computing an interpolated value is shown.
This document discusses methods for solving algebraic and transcendental equations. It begins by defining key terms like roots, simple roots, and multiple roots. It then distinguishes between direct and iterative methods. Direct methods provide exact solutions, while iterative methods use successive approximations that converge to the exact root. The document focuses on iterative methods and describes how to obtain initial approximations, including using Descartes' rule of signs and the intermediate value theorem. It also discusses criteria for terminating iterations. One iterative method described in detail is the method of false position, which approximates the curve defined by the equation as a straight line between two points.
This document discusses methods for solving algebraic and transcendental equations. It begins by defining key terms like roots, simple roots, and multiple roots. It then distinguishes between direct and iterative methods. Direct methods provide exact solutions, while iterative methods use successive approximations that converge to the exact root. The document focuses on iterative methods and describes how to obtain initial approximations, including using Descartes' rule of signs and the intermediate value theorem. It also discusses criteria for terminating iterations. One iterative method described in detail is the method of false position, which approximates the curve defined by the equation as a straight line between two points.
A Study on Intuitionistic Multi-Anti Fuzzy Subgroupsmathsjournal
This document summarizes research on intuitionistic multi-anti fuzzy subgroups. Key points:
- Intuitionistic multi-fuzzy sets allow elements to have multiple membership values. Intuitionistic multi-anti fuzzy subgroups are intuitionistic multi-fuzzy sets that satisfy certain algebraic properties under group operations.
- The (α,β)-lower cut of an intuitionistic multi-fuzzy set is the crisp multi-set of elements whose membership and non-membership values are below α and above β thresholds. Properties of (α,β)-lower cuts are used to study intuitionistic multi-anti fuzzy subgroups.
- Definitions are provided for intuitionistic multi-fuzzy sets, intuitionistic multi-anti fuzzy subgroups
A Study on Intuitionistic Multi-Anti Fuzzy Subgroups mathsjournal
For any intuitionistic multi-fuzzy set A = { < x , µA(x) , νA(x) > : x∈X} of an universe set X, we study the set [A](α, β) called the (α, β)–lower cut of A. It is the crisp multi-set { x∈X : µi(x) ≤ αi , νi(x) ≥ βi , ∀i } of X. In this paper, an attempt has been made to study some algebraic structure of intuitionistic multi-anti fuzzy subgroups and their properties with the help of their (α, β)–lower cut sets
This document provides a list of commonly used test functions for validating new optimization algorithms. It describes 24 test functions, including functions originally developed by De Jong, Griewank, Rastrigin, and Rosenbrock. The test functions have various properties like being unimodal, multimodal, convex, or stochastic. They serve as benchmarks for comparing how well new algorithms can find the optimal value for problems with different characteristics.
The document discusses matrices and their applications in engineering mathematics. It presents five theorems regarding properties of matrices such as:
1) The eigenvectors of a matrix corresponding to distinct eigenvalues are orthogonal.
2) The characteristic polynomial of the adjoint of a matrix is equal to the characteristic polynomial of the original matrix with the eigenvalues replaced by their reciprocals.
3) The eigenvalues of an orthogonal matrix have absolute value of 1.
4) If the eigenvalue of an orthogonal matrix is not ±1, then the associated eigenvector is the zero vector.
5) The eigenvectors corresponding to distinct eigenvalues of a symmetric matrix are orthogonal.
It also provides examples of finding the eigenvalues and eigenvectors of specific matrices.
Y.B.Jun et al. [9] introduced the notion of Cubic sets and Cubic subgroups. In this paper we introduced the
notion of cubic BF- Algebra i.e., an interval-valued BF-Algebra and an anti fuzzy BF-Algebra. Intersection of two cubic
BF- Algebras is again a cubic BF-Algebra is also studied.
(1) This document discusses random variables and stochastic processes. It defines key concepts such as random variables, probability mass functions, cumulative distribution functions, discrete and continuous random variables.
(2) It provides examples of defining random variables for experiments involving coin tosses and ball drawings. It illustrates how to determine the probability mass function and cumulative distribution function of discrete random variables.
(3) The document also discusses continuous random variables and their probability density functions. It introduces the concepts of joint probability distributions for two random variables and how to find marginal and conditional probabilities.
Similar to Newtons Divided Difference Formulation (20)
This document discusses orthogonal subspaces and inner products in advanced engineering mathematics. It defines the inner product of two vectors u and v in Rn as the transpose of u dotted with v, which results in a scalar. Two vectors are orthogonal if their inner product is 0. An orthogonal basis for a subspace W is a basis for W that is also an orthogonal set. The document also discusses orthogonal complements, projections, and inner products on function spaces.
Linear Transformation Vector Matrices and SpacesSohaib H. Khan
The document discusses linear transformations between vector spaces. It defines a linear transformation as a mapping between vector spaces that satisfies two conditions: 1) it is additive and 2) it is homogeneous. It also defines the kernel as the set of vectors that map to the zero vector, and the image as the set of vectors in the target space that are the image of vectors in the domain space. The document is about linear transformations presented by Dr. Yasir Ali for an advanced engineering mathematics course.
Production Planning, Scheduling and ControlSohaib H. Khan
This document provides an overview of production scheduling and control. It discusses topics like introduction, aggregate production planning, demand forecasting, workforce planning, production routing, and production scheduling. The introduction defines production scheduling and control and its importance. Aggregate production planning involves medium-term planning to establish rough production levels. Demand forecasting predicts future requirements using qualitative, extrapolative, and causal methods. Workforce planning ensures the right workforce is available. Production routing determines the production path. Production scheduling sets timetables for manufacturing operations.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive function. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms for those who already suffer from conditions like anxiety and depression.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help boost feelings of calmness and well-being.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help boost feelings of calmness, happiness and focus.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help boost feelings of calmness, happiness and focus.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help boost feelings of calmness, happiness and focus.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against developing mental illness and improve symptoms for those who already suffer from conditions like anxiety and depression.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms for those who already suffer from conditions like anxiety and depression.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Adv. Engg. Mathematics
MTH-812: Divided-Difference Interpolation
Dr. Yasir Ali (yali@ceme.nust.edu.pk)
DBS&H, CEME-NUST
December 4, 2017
Dr. Yasir Ali (yali@ceme.nust.edu.pk) Adv. Engg. Mathematics
Interpolation Problem
Given:
1 The (n + 1) nodes x0, x1, ..., xn
2 The functional values f0, f1, ..., fn at these nodes
3 An intermediate (nontabulated) point xI
Predict fI, the value of f at x = xI.
Suppose that Pn(x) is the nth Lagrange polynomial that agrees with
the function f at the distinct numbers x0, x1, ..., xn:

Pn(x) = Ln,0(x)f(x0) + Ln,1(x)f(x1) + · · · + Ln,n(x)f(xn)
      = Σ (k = 0 to n) f(xk) Ln,k(x),

where

Ln,k(x) = Π (i = 0 to n, i ≠ k) (x − xi)/(xk − xi).

Although this polynomial is unique, there are alternative algebraic
representations that are useful in certain situations. The divided
differences of f with respect to x0, x1, ..., xn are used to express Pn(x)
in the form

Pn(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1) + · · · + an(x − x0) · · · (x − xn−1)   (1)

for appropriate constants a0, a1, ..., an.
For

Pn(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1) + · · · + an(x − x0) · · · (x − xn−1),

evaluating Pn(x) at x0 leaves

a0 = Pn(x0) = f0.   (2)

Evaluating Pn(x) at x1 leaves
Pn(x1) = a0 + a1(x1 − x0); we know Pn(x1) = f1, so

a1 = (f1 − a0)/(x1 − x0).

Using (2), we get

a1 = (f1 − f0)/(x1 − x0).   (3)
Evaluating Pn(x) at x2 leaves
Pn(x2) = a0 + a1(x2 − x0) + a2(x2 − x0)(x2 − x1); we know Pn(x2) = f2, so

a2 = [f2 − a0 − a1(x2 − x0)] / [(x2 − x0)(x2 − x1)].

Using (2) and (3), we get

a2 = [f2 − f0 − ((f1 − f0)/(x1 − x0))(x2 − x0)] / [(x2 − x0)(x2 − x1)].

After simplification we get

a2 = [(f2 − f1)/(x2 − x1) − (f1 − f0)/(x1 − x0)] / (x2 − x0).   (4)
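The closed forms (2), (3) and (4) can be checked numerically. A minimal sketch (the three sample points are illustrative, not prescribed by the slides):

```python
# Sketch: verify that a0, a1, a2 from formulas (2)-(4) reproduce
# the data at the nodes. The sample points are arbitrary.
x0, x1, x2 = 1.0, 1.3, 1.6
f0, f1, f2 = 0.7651, 0.6201, 0.4554

a0 = f0                                                           # (2)
a1 = (f1 - f0) / (x1 - x0)                                        # (3)
a2 = ((f2 - f1) / (x2 - x1) - (f1 - f0) / (x1 - x0)) / (x2 - x0)  # (4)

def P2(x):
    # Newton form of the quadratic through the three points
    return a0 + a1 * (x - x0) + a2 * (x - x0) * (x - x1)

# P2 agrees with f at all three nodes, as the derivation requires.
assert abs(P2(x0) - f0) < 1e-12
assert abs(P2(x1) - f1) < 1e-12
assert abs(P2(x2) - f2) < 1e-12
```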
Comparing a0, a1 and a2, can we see a pattern?

a0 = f0   (depends on x0)

a1 = (f1 − f0)/(x1 − x0)   (depends on x0 and x1)

a2 = [(f2 − f1)/(x2 − x1) − (f1 − f0)/(x1 − x0)] / (x2 − x0)   (depends on x0, x1 and x2)

If we denote a0 = f0 by f[x0], then it is easy to write a1 as

a1 = (f[x1] − f[x0])/(x1 − x0).

If we denote a1 = f[x0, x1], then

a2 = (f[x2, x1] − f[x1, x0])/(x2 − x0).
By looking at the pattern of a1 and a2, is it possible to write a3?

a1 = f[x1, x0] = (f[x1] − f[x0])/(x1 − x0)

a2 = f[x2, x1, x0] = (f[x2, x1] − f[x1, x0])/(x2 − x0)

a3 = f[x3, x2, x1, x0] = (f[x3, x2, x1] − f[x2, x1, x0])/(x3 − x0),

where the first bracket in the numerator holds the last three nodes (x1, x2, x3) and the second holds the first three (x0, x1, x2). In general,

ak = f[xk, xk−1, · · · , x1, x0] = (f[xk, · · · , x1] − f[xk−1, · · · , x0])/(xk − x0),

with each bracket in the numerator dropping one endpoint node.
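This recursive pattern translates directly into code. A minimal sketch (the function name and list-based interface are illustrative, not from the slides):

```python
def divided_difference(xs, fs):
    """k-th divided difference f[x0, ..., xk] of the data (xs, fs)."""
    if len(xs) == 1:
        return fs[0]  # zeroth divided difference: f[xi] = f(xi)
    # f[xk, ..., x1] minus f[xk-1, ..., x0], divided by xk - x0
    upper = divided_difference(xs[1:], fs[1:])
    lower = divided_difference(xs[:-1], fs[:-1])
    return (upper - lower) / (xs[-1] - xs[0])

# a1 = f[x1, x0], a2 = f[x2, x1, x0], and so on:
a1 = divided_difference([1.0, 1.3], [0.7651, 0.6201])               # -0.4833...
a2 = divided_difference([1.0, 1.3, 1.6], [0.7651, 0.6201, 0.4554])
```

Each call splits the node list into the two overlapping sublists of the recursion, so the cost grows exponentially with k; the table construction shown later avoids that by reusing lower-order differences.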
Newton’s Divided Difference
Hence, the interpolating polynomial (1) may be expressed as

Pn(x) = f[x0] + f[x1, x0](x − x0) + f[x2, x1, x0](x − x0)(x − x1) + · · · + an(x − x0)(x − x1) · · · (x − xn−1),

where

ak = f[xk, xk−1, · · · , x1, x0] for k = 0, 1, · · · , n.

f[xi] = f(xi)   (zeroth divided difference)

f[xi+1, xi] = (f[xi+1] − f[xi])/(xi+1 − xi)   (first divided difference)

f[xi+2, xi+1, xi] = (f[xi+2, xi+1] − f[xi+1, xi])/(xi+2 − xi)   (second divided difference)

f[xk, xk−1, · · · , x0] = (f[xk, xk−1, · · · , x1] − f[xk−1, xk−2, · · · , x0])/(xk − x0)   (kth divided difference)
Complete the divided-difference table for the following data:

x    1.0    1.3    1.6    1.9    2.2
f(x) 0.7651 0.6201 0.4554 0.2818 0.1103

i  xi   f[xi]    f[xi, xi−1]   f[xi, xi−1, xi−2]   f[xi, xi−1, xi−2, xi−3]
0  1.0  0.7651
1  1.3  0.6201
2  1.6  0.4554
3  1.9  0.2818
4  2.2  0.1103

The first divided difference involving x0 and x1 is

f[x0, x1] = (f[x1] − f[x0])/(x1 − x0) = (0.6201 − 0.7651)/(1.3 − 1.0) = −0.4833.

The remaining first divided differences are computed in the same way.
With the first divided differences filled in, the table reads:

i  xi   f[xi]    f[xi, xi−1]
0  1.0  0.7651
        −0.4833
1  1.3  0.6201
        −0.549
2  1.6  0.4554
        −0.5787
3  1.9  0.2818
        −0.5717
4  2.2  0.1103

The second divided difference involving x0, x1 and x2 is

f[x2, x1, x0] = (f[x2, x1] − f[x1, x0])/(x2 − x0) = (−0.549 − (−0.4833))/(1.6 − 1.0) = −0.1094.

The remaining second divided differences are computed in the same way.
With the second divided differences filled in, the table reads:

i  xi   f[xi]    f[xi, xi−1]   f[xi, xi−1, xi−2]
0  1.0  0.7651
        −0.4833
1  1.3  0.6201   −0.1094
        −0.549
2  1.6  0.4554   −0.0494
        −0.5787
3  1.9  0.2818   0.0117
        −0.5717
4  2.2  0.1103

The third divided difference involving x0, x1, x2 and x3 is

f[x3, x2, x1, x0] = (f[x3, x2, x1] − f[x2, x1, x0])/(x3 − x0) = (−0.0494 − (−0.1094))/(1.9 − 1.0) = 0.0667.

The remaining third divided differences are computed in the same way.
32. Complete the Divided Difference table for the following data
x 1.0 1.3 1.6 1.9 2.2
f(x) 0.7651 0.6201 0.4554 0.2818 0.1103
i xi f[xi] f[xi, xi−1] f[xi, xi−1, xi−2] f[xi, xi−1, xi−2, xi−3]
0 1.0 0.7651
-0.4833
1 1.3 0.6201 -0.1094
-0.549 0.0667
2 1.6 0.4554 -0.0494
-0.5787 0.0679
3 1.9 0.2818 0.0117
-0.5717
4 2.2 0.1103
The Fourth Divided Difference
f[x4, x3, x2, x1, x0] = (f[x4, x3, x2, x1] − f[x3, x2, x1, x0])/(x4 − x0) = (0.0679 − 0.0667)/(2.2 − 1.0) = 0.001
This would be the last divided difference in this case.
34. The coefficients of the Newton forward divided-difference form of the
interpolating polynomial are along the diagonal in the table. This
polynomial is
P4(x) = 0.7651 − 0.4833(x − 1) − 0.1094(x − 1)(x − 1.3) + 0.0667(x − 1)(x − 1.3)(x − 1.6) + 0.001(x − 1)(x − 1.3)(x − 1.6)(x − 1.9)
i xi f[xi] f[xi, xi−1] f[xi, xi−1, xi−2] f[xi, xi−1, xi−2, xi−3]
0 1.0 0.7651
-0.4833
1 1.3 0.6201 -0.1094
-0.549 0.0667
2 1.6 0.4554 -0.0494
-0.5787 0.0679
3 1.9 0.2818 0.0117
-0.5717
4 2.2 0.1103
f[x4, x3, x2, x1, x0] = 0.001
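The table construction above can be sketched in code. The following is a minimal Python sketch (the names `divided_differences` and `newton_eval` are my own, not from the slides): it builds the divided-difference table for this data, reads the Newton coefficients off the top diagonal, and evaluates the Newton form by nested multiplication.

```python
def divided_differences(xs, fs):
    """Return the divided-difference table as a list of columns.

    table[k][i] holds f[x_i, ..., x_{i+k}]; the Newton coefficients
    are the first entry of each column (the top diagonal of the table).
    """
    table = [list(fs)]  # zeroth divided differences are just f(x_i)
    for k in range(1, len(xs)):
        prev = table[-1]
        col = [(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
               for i in range(len(prev) - 1)]
        table.append(col)
    return table

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form with nested (Horner-like) multiplication."""
    result = coeffs[-1]
    for c, xi in zip(coeffs[-2::-1], xs[len(coeffs) - 2::-1]):
        result = result * (x - xi) + c
    return result

xs = [1.0, 1.3, 1.6, 1.9, 2.2]
fs = [0.7651, 0.6201, 0.4554, 0.2818, 0.1103]
table = divided_differences(xs, fs)
coeffs = [col[0] for col in table]
print([round(c, 4) for c in coeffs])  # [0.7651, -0.4833, -0.1094, 0.0667, 0.001]
```

To four decimals the printed coefficients match the diagonal of the table above, and by the interpolation property `newton_eval` reproduces the tabulated f values at the nodes.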
36. Newton’s divided-difference formula can be expressed in a
simplified form when the nodes are arranged consecutively
with equal spacing. In this case, we introduce the notation
xi+1 − xi = h for each i = 0, 1, · · · , n − 1
Let x = x0 + sh, s ∈ R, then we can write
x − xi = (x0 + sh) − (x0 + ih) ⇒ x − xi = (s − i)h
40. Using x − xi = (s − i)h in
Pn(x) = f[x0] + f[x1, x0](x − x0) + a2(x − x0)(x − x1) + · · · + an(x − x0)(x − x1) · · · (x − xn−1),
where
ak = f[xk, xk−1, · · · , x1, x0] for k = 0, 1, · · · , n.
We obtain
Pn(x) = Pn(x0 + sh) = f[x0] + sh f[x1, x0] + s(s − 1)h^2 f[x2, x1, x0] + s(s − 1)(s − 2)h^3 f[x3, x2, x1, x0] + · · · + s(s − 1)(s − 2) · · · (s − n + 1)h^n f[xn, xn−1, · · · , x0]
Note that
x − x0 = sh
x − x1 = (s − 1)h & x − x0 = sh ⇒ (x − x1)(x − x0) = s(s − 1)h^2
similarly (x − x0)(x − x1)(x − x2) = s(s − 1)(s − 2)h^3
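This substitution can be checked numerically: under the equal-spacing assumption, evaluating the Newton form directly at x = x0 + sh must agree with the rewritten sum. A minimal sketch using the table data above (variable names are mine):

```python
# Check of the substitution x − x_i = (s − i)h: the standard Newton form
# evaluated at x = x0 + s h must equal
#   f[x0] + Σ s(s−1)···(s−k+1) h^k f[xk, ..., x0].
h, x0 = 0.3, 1.0
xs = [x0 + i * h for i in range(5)]            # equally spaced nodes
fs = [0.7651, 0.6201, 0.4554, 0.2818, 0.1103]  # data from the table above

# divided differences (top diagonal of the table)
table = [list(fs)]
for k in range(1, len(xs)):
    prev = table[-1]
    table.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                  for i in range(len(prev) - 1)])
coeffs = [col[0] for col in table]

s = 0.7                      # an arbitrary real s
x = x0 + s * h

# standard Newton form: Σ a_k (x − x0)(x − x1)···(x − x_{k−1})
direct, prod = 0.0, 1.0
for k, a in enumerate(coeffs):
    direct += a * prod
    prod *= (x - xs[k])

# rewritten form: Σ a_k s(s−1)···(s−k+1) h^k
rewritten, prod = 0.0, 1.0
for k, a in enumerate(coeffs):
    rewritten += a * prod
    prod *= (s - k) * h

print(abs(direct - rewritten))  # agrees to floating-point roundoff
```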
43. Equi-spaced Divided Difference
Pn(x0 + sh) = f[x0] + Σ_{k=1}^{n} s(s − 1) · · · (s − k + 1) h^k f[xk, xk−1, · · · , x0]
Using binomial-coefficient notation,
C(s, k) = s(s − 1) · · · (s − k + 1)/k! ⇒ s(s − 1) · · · (s − k + 1) = k! C(s, k).
Thus we can express Pn(x) compactly as
Pn(x) = Pn(x0 + sh) = f[x0] + Σ_{k=1}^{n} k! C(s, k) h^k f[xk, xk−1, · · · , x0]
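Note that s is a real number here, so C(s, k) is the generalized binomial coefficient; Python's `math.comb`, for instance, only accepts integers. A small sketch of the identity s(s − 1) · · · (s − k + 1) = k! C(s, k) (the helper name `gbinom` is mine):

```python
from math import prod, factorial

def gbinom(s, k):
    """Generalized binomial coefficient C(s, k) = s(s-1)...(s-k+1)/k!
    for real s and non-negative integer k (C(s, 0) = 1)."""
    return prod(s - i for i in range(k)) / factorial(k)

# identity used above: s(s-1)...(s-k+1) = k! * C(s, k)
s, k = 0.6, 3
falling = prod(s - i for i in range(k))  # s(s-1)(s-2) = 0.336
print(falling, factorial(k) * gbinom(s, k))
```

For integer s the helper reduces to the ordinary binomial coefficient, e.g. `gbinom(5, 2) == 10`.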
45. Newton forward-difference formula
The Newton forward-difference formula is constructed by making use of the forward difference notation ∆. With this notation,
f[x1, x0] = (f[x1] − f[x0])/(x1 − x0) = (1/h)(f(x1) − f(x0)) = (1/h)∆f(x0)
Similarly
f[x2, x1, x0] = (f[x2, x1] − f[x1, x0])/(x2 − x0) = (1/(2h))(f[x2, x1] − f[x1, x0])
= (1/(2h))(1/h)(∆f(x1) − ∆f(x0))
= (1/(2h^2))∆(f(x1) − f(x0))
= (1/(2h^2))∆(∆f(x0)) = (1/(2h^2))∆^2 f(x0)
46. Newton forward-difference formula
Note that
f[x1, x0] = (1/h)∆f(x0)
f[x2, x1, x0] = (1/(2h^2))∆^2 f(x0)
In general
f[xk, xk−1, · · · , x0] = (1/(k! h^k))∆^k f(x0)
Substituting this into
Pn(x) = Pn(x0 + sh) = f[x0] + Σ_{k=1}^{n} k! C(s, k) h^k f[xk, xk−1, · · · , x0]
gives the Newton forward-difference formula
Pn(x) = Pn(x0 + sh) = f(x0) + Σ_{k=1}^{n} C(s, k) ∆^k f(x0)
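The relation f[xk, · · · , x0] = ∆^k f(x0)/(k! h^k) can be verified against the divided-difference definition. A sketch with the table data from the earlier example (helper names are mine):

```python
from math import factorial

def forward_differences(fs):
    """Return [f(x0), delta f(x0), delta^2 f(x0), ...] for equally spaced data."""
    col, out = list(fs), [fs[0]]
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        out.append(col[0])
    return out

h = 0.3
xs = [1.0, 1.3, 1.6, 1.9, 2.2]
fs = [0.7651, 0.6201, 0.4554, 0.2818, 0.1103]
deltas = forward_differences(fs)  # delta^k f(x0), k = 0..4

# divided differences from their definition, for comparison
table = [list(fs)]
for k in range(1, len(xs)):
    prev = table[-1]
    table.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                  for i in range(len(prev) - 1)])

# f[x_k, ..., x_0] versus delta^k f(x0) / (k! h^k): the two columns agree
for k in range(len(xs)):
    print(k, table[k][0], deltas[k] / (factorial(k) * h**k))
```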
47. Compute cosh 0.56 using Newton’s Forward Difference
Formula (with 4 values)
i xi fi ∆fi ∆2fi ∆3fi
0 .5 1.127626
0.057839
1 .6 1.185465 0.011865
0.069704 0.000697
2 .7 1.255169 0.012562
0.082266
3 .8 1.337435
Using
P3(x) = P3(x0 + sh) = f(x0) + Σ_{k=1}^{3} C(s, k) ∆^k f(x0), where s = (x − x0)/h.
With x = 0.56, h = 0.1 and x0 = 0.5, we get s = 0.6 and
P3(0.56) = f(x0) + C(0.6, 1)∆f(x0) + C(0.6, 2)∆^2 f(x0) + C(0.6, 3)∆^3 f(x0).
cosh(0.56) ≈ 1.127626 + (0.6)(0.057839) + ((0.6)(−0.4)/2)(0.011865) + ((0.6)(−0.4)(−1.4)/6)(0.000697) = 1.160944.
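A quick check of this computation in Python, reading the ∆^k f(x0) column off the table (the helper `gbinom` for C(s, k) is mine):

```python
from math import cosh, factorial, prod

def gbinom(s, k):
    """Generalized binomial coefficient C(s, k) for real s."""
    return prod(s - i for i in range(k)) / factorial(k)

# forward differences at x0 = 0.5, read off the table: f, delta f, delta^2 f, delta^3 f
deltas = [1.127626, 0.057839, 0.011865, 0.000697]
x0, h, x = 0.5, 0.1, 0.56
s = (x - x0) / h  # s = 0.6

p3 = sum(gbinom(s, k) * deltas[k] for k in range(4))
print(p3, cosh(0.56))  # p3 agrees with cosh(0.56) to about 4e-6
```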
54. Error in Newton’s Forward Formula
εn = (h^(n+1)/(n + 1)!) s(s − 1) · · · (s − n) f^(n+1)(t)
For n = 3, f(t) = cosh t, f^(4)(t) = cosh t, so
ε3 = ((0.1)^4/4!)(0.6)(−0.4)(−1.4)(−2.4) f^(4)(t) = −0.00000336 cosh t
We do not know t, but we get an inequality by taking the largest and smallest cosh t in that interval:
A cosh 0.8 ≤ ε3(x) ≤ A cosh 0.5, where A = −0.00000336.
Since
f(x) = P3(x) + ε3(x),
P3(0.56) + A cosh 0.8 ≤ cosh(0.56) ≤ P3(0.56) + A cosh 0.5
1.160939 ≤ cosh(0.56) ≤ 1.160941.
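The bracket can be reproduced in code. A sketch under the same assumptions (P3 is recomputed from the table's forward differences rather than using the rounded 1.160944, and `gbinom` is my helper for C(s, k)):

```python
from math import cosh, factorial, prod

def gbinom(s, k):
    """Generalized binomial coefficient C(s, k) for real s."""
    return prod(s - i for i in range(k)) / factorial(k)

deltas = [1.127626, 0.057839, 0.011865, 0.000697]  # delta^k f(x0) from the table
h, s = 0.1, 0.6
p3 = sum(gbinom(s, k) * deltas[k] for k in range(4))

# eps3 = (h^4/4!) s(s-1)(s-2)(s-3) cosh t, with t somewhere in [0.5, 0.8]
A = (h**4 / factorial(4)) * s * (s - 1) * (s - 2) * (s - 3)
lower = p3 + A * cosh(0.8)  # A < 0, so the larger cosh gives the lower bound
upper = p3 + A * cosh(0.5)
print(A, lower, upper)
```

The printed A is about −0.00000336, and the true value cosh(0.56) falls inside the resulting bracket.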