These are the slides for my lectures on numerical methods for graduate students. The lectures cover basic methods for finding roots of single-variable functions, such as the bisection method, fixed-point iteration, and Newton's method.
This document presents reduction formulas for integrals of sin^n x and cos^n x (where n ≥ 2). It derives the reduction formulas by repeatedly applying integration by parts. For both sin^n x and cos^n x, the reduction formula expresses I_n (the integral of the nth power) in terms of I_{n-2}. The document provides detailed step-by-step working to arrive at each reduction formula.
I. A power series is a polynomial with infinitely many terms, of the form Σ_{n=0}^∞ a_n x^n.
II. The radius of convergence R determines the values of x where a power series converges absolutely (for |x|<R), diverges (for |x|>R), or may converge or diverge (for |x|=R).
III. Tests like the ratio test and root test can be used to calculate the radius of convergence R.
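As a standard restatement of point III (not taken verbatim from the document), the ratio-test computation of the radius of convergence is:

```latex
R = \lim_{n \to \infty} \left| \frac{a_n}{a_{n+1}} \right|
```

For example, for Σ x^n/n! the ratio is (n+1)!/n! = n+1 → ∞, so R = ∞ and the series converges for every x.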
The document discusses techniques for combining fractions with opposite denominators. It explains that we can multiply the numerator and denominator by -1 to change the denominator to its opposite. It provides examples of switching fractions to their opposite denominators and combining fractions with opposite denominators by first switching one denominator so they are the same. It also discusses an alternative approach of pulling out a "-" from the denominator and passing it to the numerator when switching denominators, ensuring the leading term is positive for polynomial denominators.
Think Like Scilab and Become a Numerical Programming Expert - Notes for Beginn... (ssuserd6b1fd)
Notes for Scilab programming. These notes cover the mathematics behind Scilab numerical programming, illustrated with suitable graphics and examples. Each function is explained with a complete example. Helpful for beginners. GUI programming is also explained.
Using implicit differentiation we can treat relations that are not quite functions as if they were functions. In particular, we can find the slopes of lines tangent to curves that are not graphs of functions.
Let Pn(x) be the Legendre polynomial of degree n. Then the generating function for Pn(x) is given by:
Σ_{n=0}^∞ P_n(x) t^n = 1/√(1 − 2xt + t²)

Differentiating both sides with respect to t, we get:

Σ_{n=1}^∞ n P_n(x) t^(n−1) = (x − t)(1 − 2xt + t²)^(−3/2)

Multiplying both sides by (1 − 2xt + t²), we get:

(1 − 2xt + t²) Σ_{n=1}^∞ n P_n(x) t^(n−1) = (x − t) Σ_{n=0}^∞ P_n(x) t^n
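As a quick numerical sanity check of the generating function, the following sketch (function names are illustrative, pure Python) compares a truncated series against the closed form:

```python
import math

def legendre(n, x):
    """P_n(x) via Bonnet's recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def generating_series(x, t, terms=60):
    """Partial sum of sum_n P_n(x) t^n, which should approach 1/sqrt(1 - 2xt + t^2)."""
    return sum(legendre(n, x) * t**n for n in range(terms))

x, t = 0.5, 0.3
lhs = generating_series(x, t)
rhs = 1.0 / math.sqrt(1 - 2 * x * t + t * t)
```

Since |P_n(x)| ≤ 1 for |x| ≤ 1 and |t| < 1, the truncation error is tiny and the two sides agree to high precision.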
Limit & Continuity of Functions - Differential Calculus by Arun Umrao (ssuserd6b1fd)
This book explains limits and continuity, the basis for differential calculus. Suitable for CBSE Class XII students who are preparing for IIT JEE.
The document discusses the extension principle for generalizing crisp mathematical concepts to fuzzy sets. It defines the extension principle for mappings from Cartesian products to universes. An example is provided to illustrate defining a fuzzy set in the output universe based on fuzzy sets in the input universes and the mapping between them. Fuzzy numbers are defined to have specific properties including being a normal fuzzy set, closed intervals for membership levels, and bounded support. Positive and negative fuzzy numbers are distinguished based on their membership functions. Binary operations are classified as increasing or decreasing, and it is noted the extension principle can be used to define the fuzzy result of applying increasing or decreasing operations to fuzzy inputs. Notation for fuzzy number algebraic operations is introduced. Several theore…
The document discusses various mathematical methods for interpolation and solving equations including:
1) Bisection method, iteration method, and Newton-Raphson method for finding roots of equations.
2) Finite difference methods for numerical differentiation and interpolation using forward, backward, and central difference operators.
3) Newton's forward and backward interpolation formulas for equally spaced data using finite differences.
4) Gauss interpolation and Lagrange interpolation for unequally spaced data points.
The document describes three numerical methods for finding the roots or solutions of equations: the bisection method, Newton's method for single variable equations, and Newton's method for systems of nonlinear equations.
The bisection method works by repeatedly bisecting the interval within which a root is known to exist, narrowing in on the root through iterative halving. Newton's method approximates the function with its tangent line to find a better root estimate with each iteration. For systems of equations, Newton's method involves calculating the Jacobian matrix and solving a system of linear equations at each step to update the solution estimate. Examples are provided to illustrate each method.
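The two single-variable methods described above can be sketched in a few lines of Python (a minimal illustration on f(x) = x² − 2, not the document's own code):

```python
def bisect(f, a, b, tol=1e-10):
    """Repeatedly halve [a, b], keeping the half where f changes sign."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return (a + b) / 2

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Follow the tangent line: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x * x - 2
root_b = bisect(f, 1.0, 2.0)          # root known to lie in [1, 2]
root_n = newton(f, lambda x: 2 * x, 1.5)
```

Both converge to √2; bisection needs a sign change on the starting interval, while Newton only needs a starting guess (and a nonzero derivative).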
On Application of the Fixed-Point Theorem to the Solution of Ordinary Differe... (BRNSS Publication Hub)
We know that a large number of problems in differential equations can be reduced to finding the solution x to an equation of the form Tx=y. The operator T maps a subset of a Banach space X into another Banach space Y and y is a known element of Y. If y=0 and Tx=Ux−x, for another operator U, the equation Tx=y is equivalent to the equation Ux=x. Naturally, to solve Ux=x, we must assume that the range R(U) and the domain D(U) have points in common. Points x for which Ux=x are called fixed points of the operator U. In this work, we state the main fixed-point theorems that are most widely used in the field of differential equations. These are the Banach contraction principle, the Schauder–Tychonoff theorem, and the Leray–Schauder theorem. We will only prove the first theorem and then proceed.
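The iteration behind the Banach contraction principle can be sketched numerically (a toy example, not from the paper; `fixed_point` is an illustrative name):

```python
import math

def fixed_point(U, x0, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = U(x_k); by the contraction principle this
    converges to the unique fixed point when U is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = U(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# U(x) = cos(x) is a contraction near its fixed point (|U'(x)| = |sin x| < 1 there),
# so the iteration converges to the solution of cos(x) = x.
x_star = fixed_point(math.cos, 1.0)
```

The contraction constant controls the convergence rate: each iteration shrinks the error by at least that factor.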
This document contains a proof that the integral from 0 to 1 of x dx equals 1/2. It begins by assuming the integral exists, then provides an alternative proof that does not assume the integral exists. This is done by establishing a lemma about Riemann sums and partitions, and showing that the difference between any two Riemann sums is less than epsilon.
This document discusses ordinary differential equations (ODEs). It defines ODEs and differentiates them from partial differential equations. ODEs can be classified by type, order, and linearity. Initial value problems involve solving an ODE with initial conditions specified at a point, while boundary value problems involve conditions at boundary points. The document provides examples of solving first- and second-order initial value problems. It also discusses the existence and uniqueness of solutions to initial value problems under certain continuity conditions on the functions defining the ODE.
Principle of Definite Integra - Integral Calculus - by Arun Umrao (ssuserd6b1fd)
Definite integral notes, best for quick preparation: easy to understand, with colored graphics and step-by-step descriptions. Suitable for CBSE Board and State Board students in Class XI & XII.
This document summarizes several numerical methods for finding roots of nonlinear equations or eigenvalues of matrices:
1) Bisection method, false position method, and secant method are iterative root-finding algorithms for nonlinear equations. They rely on checking the sign of the function at interval endpoints and successively narrowing the interval containing a root.
2) Newton's method and the power method are algorithms for finding roots or eigenvalues by using derivatives or matrix multiplication. Newton's method finds roots by iteratively computing the x-intercept of the tangent line. The power method finds the dominant eigenvalue by repeatedly multiplying a matrix by a vector.
3) Gerschgorin's circle theorem provides bounds on the locations of a matrix's eigenvalues in the complex plane.
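The power method in item 2 can be sketched in pure Python (an illustration on a small symmetric matrix, not the document's code; it assumes the start vector has a component along the dominant eigenvector):

```python
def power_method(A, v, iters=100):
    """Repeatedly multiply by A and normalize; the ratio of a component
    before and after multiplication converges to the dominant eigenvalue."""
    n = len(A)
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(c) for c in w)
        v = [c / norm for c in w]
    # One more multiplication to read off the eigenvalue estimate.
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return w[0] / v[0]

A = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 3 and 1
lam = power_method(A, [1.0, 0.0])
```

Convergence is geometric with ratio |λ₂/λ₁|, here 1/3 per iteration.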
Rational expressions are expressions of the form P/Q, where P and Q are polynomials. Polynomials are expressions of the form a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0. Rational expressions can be written in either expanded or factored form. The factored form is useful for determining the domain of a rational expression, solving equations involving rational expressions, evaluating inputs, and determining the sign of outputs. The domain of a rational expression excludes values of x that make the denominator equal to 0.
Moment closure inference for stochastic kinetic models (Colin Gillespie)
This document discusses moment closure inference for stochastic kinetic models. It begins with an introduction to moment closure techniques using a simple birth-death process as a case study. It then discusses how to derive moment equations from the chemical master equation and how the deterministic model can be viewed as an approximation of the stochastic model by setting the variance to zero. The document also examines some limitations of moment closure approximations using examples of heat shock and p53-Mdm2 oscillation models. Finally, it presents a case study of using moment closure to model cotton aphid populations based on field data.
This document provides an introduction to complex numbers. It discusses the mathematical and geometrical requirements for representing complex numbers on a plane with real and imaginary axes. Some key points covered include: complex numbers can be used to solve quadratic equations with negative solutions; a complex number has both a real and imaginary part and can be represented as a point in the complex plane; and the angle of a complex number depends on its position in the complex plane relative to the real and imaginary axes. Several examples of representing and calculating angles of complex numbers are worked through.
The document discusses partial differential equations and their solutions. It can be summarized as:
1) A partial differential equation involves a function of two or more variables and some of its partial derivatives, with one dependent variable and one or more independent variables. Standard notation is presented for partial derivatives.
2) Partial differential equations can be formed by eliminating arbitrary constants or arbitrary functions from an equation relating the dependent and independent variables. Examples of each method are provided.
3) Solutions to partial differential equations can be complete, containing the maximum number of arbitrary constants allowed, particular where the constants are given specific values, or singular where no constants are present. Methods for determining the general solution are described.
Integrals with inverse trigonometric functions (indu thakur)
The document discusses techniques for integrating trigonometric functions. It begins by reviewing definitions of trig functions like sine, cosine, tangent, and cotangent. It then provides examples of trig integrals using trig identities and u-substitution. Examples include integrals of sine, cosine, tangent, and secant functions. The document concludes by stating that practicing these types of integrals will help students perform well on exams involving calculus.
The document discusses partial differential equations (PDEs) and numerical methods for solving them. It begins by defining PDEs as equations involving derivatives of an unknown function with respect to two or more independent variables. PDEs describe many physical phenomena involving variations across space and time, such as fluid flow, heat transfer, electromagnetism, and weather prediction. The document then focuses on solving elliptic, parabolic, and hyperbolic PDEs numerically using finite difference and finite element methods. It provides examples of discretizing and solving the Laplace, heat, and wave equations to estimate unknown functions.
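A finite-difference treatment of the heat equation like the one described above can be sketched as follows (a minimal explicit FTCS step for the 1-D heat equation u_t = α u_xx, not the document's own scheme; `heat_step` and the grid are illustrative):

```python
def heat_step(u, r):
    """One explicit (FTCS) update with fixed boundary values; the scheme
    is stable when r = alpha * dt / dx**2 <= 0.5."""
    return [u[0]] + [
        u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

# Initial spike in the middle, zero at both boundaries.
u = [0.0] * 5 + [1.0] + [0.0] * 5
for _ in range(200):
    u = heat_step(u, 0.25)   # r = 0.25 satisfies the stability bound
```

With r ≤ 0.5 each new value is a convex combination of its neighbors, so the solution stays bounded and the spike diffuses toward the zero boundary values.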
A tutorial on the Frobenius theorem, one of the most important results in differential geometry, with emphasis on its use in nonlinear control theory. All results are accompanied by proofs, but for a more thorough and detailed presentation refer to the book of A. Isidori.
"reflections on the probability space induced by moment conditions with impli...Christian Robert
This document discusses using moment conditions to perform Bayesian inference when the likelihood function is intractable or unknown. It outlines some approaches that have been proposed, including approximating the likelihood using empirical likelihood or pseudo-likelihoods. However, these approaches do not guarantee the same consistency as a true likelihood. Alternative approximative Bayesian methods are also discussed, such as Approximate Bayesian Computation, Integrated Nested Laplace Approximation, and variational Bayes. The empirical likelihood method constructs a likelihood from generalized moment conditions, but its use in Bayesian inference requires further analysis of consistency in each application.
This document discusses three methods for finding the roots of nonlinear equations:
1) Bisection method, which converges linearly but is guaranteed to find a root.
2) Newton's method, which converges quadratically (much faster) but may diverge if the starting point is too far from the root.
3) Secant method, which is faster than bisection but slower than Newton's, and also requires starting points close to the root. Newton's and secant methods can be extended to systems of nonlinear equations.
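The secant method described in item 3 can be sketched as follows (a minimal illustration, not from the document; it replaces Newton's derivative with a finite-difference slope):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Use the slope through the two most recent iterates in place of f'(x),
    so no derivative is needed (unlike Newton's method)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break                      # flat secant line; give up
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

# Root of x^3 - x - 2 = 0 between 1 and 2.
root = secant(lambda x: x**3 - x - 2, 1.0, 2.0)
```

Its convergence order is the golden ratio ≈ 1.618, which is why it sits between bisection (linear) and Newton (quadratic).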
Bisection theorem proof and convergence analysis (Hamza Nawaz)
The document summarizes the Bisection method for finding roots of a continuous function. It presents the Bisection theorem, proves it in two parts: (1) that the absolute error decreases geometrically with each iteration, and (2) the sequence of midpoint estimates converges to the root. It also derives the error bound, showing the error approaches zero as the number of iterations increases, proving convergence of the Bisection method.
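The error bound sketched above is usually stated as follows (standard form, with c_n the nth midpoint and r the root in [a, b]):

```latex
|c_n - r| \le \frac{b - a}{2^{\,n+1}}
```

So the error bound halves with each iteration, and |c_n − r| ≤ ε is guaranteed once n ≥ log₂((b − a)/ε) − 1.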
I am Marvin Jones, a Number Theory Homework Expert at mathsassignmenthelp.com. I hold a Master's in Mathematics from Columbia University, and have been assisting students with their homework for the past six years. I specialize in number theory assignments.
For any number theory assignment solution or homework help, visit mathsassignmenthelp.com, email info@mathsassignmenthelp.com, or call +1 678 648 4277. This sample assignment solution is a proof of our work.
This document discusses numerical integration and interpolation formulas. It begins by explaining the general formula for numerical integration using equidistant values of a function f(x) between bounds a and b. It then derives Trapezoidal, Simpson's, and Weddle's rules by putting different values for n in the general formula. The document also discusses Newton's forward and backward interpolation formulas, Lagrange interpolation formula, and provides examples of their application. It concludes by comparing Lagrange and Newton interpolation and discussing uses of interpolation in computer science and engineering fields.
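The trapezoidal and Simpson's rules mentioned above can be sketched as composite rules in Python (a minimal illustration, not the document's derivation):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return h * s / 3

# Integral of x^2 over [0, 1] is exactly 1/3; Simpson's rule is exact
# for polynomials up to degree 3, the trapezoidal rule only to degree 1.
t = trapezoid(lambda x: x * x, 0.0, 1.0, 100)
s = simpson(lambda x: x * x, 0.0, 1.0, 100)
```

The error orders (O(h²) for trapezoid, O(h⁴) for Simpson) show up directly when comparing the two results against 1/3.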
The document provides an introduction to the binomial theorem. It defines binomial coefficients through the Pascal triangle and gives an explicit formula for computing them using factorials. The binomial theorem is then derived and stated, providing a formula for expanding expressions of the form (a + b)^n in terms of binomial coefficients. Several examples are worked out to demonstrate expanding expressions and finding coefficients using the binomial theorem. Applications to estimating interest calculations are also briefly discussed.
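The explicit factorial formula for binomial coefficients and the expansion of (a + b)^n can be checked in a few lines (an illustration using the standard library, not the document's examples):

```python
from math import comb

def expand_coeffs(n):
    """Coefficients of (a + b)^n, i.e. row n of Pascal's triangle."""
    return [comb(n, k) for k in range(n + 1)]

# Row 4 of Pascal's triangle: (a+b)^4 = a^4 + 4a^3 b + 6a^2 b^2 + 4a b^3 + b^4
row = expand_coeffs(4)

# Numerical check of the theorem at a = 2, b = 3, n = 4:
a, b, n = 2, 3, 4
lhs = (a + b) ** n
rhs = sum(comb(n, k) * a ** (n - k) * b ** k for k in range(n + 1))
```

`math.comb(n, k)` computes n!/(k!(n−k)!) exactly in integer arithmetic, matching the factorial formula in the document.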
Tricks to remember the quadratic equation. ACTION RESEARCH ON MATHS (angelbindusingh)
This document provides information about different methods for solving quadratic equations. It discusses factoring the equation, using the quadratic formula, and completing the square. Step-by-step explanations are provided for each method. Factoring involves finding two binomials whose product equals the quadratic expression. The quadratic formula is given as x = (-b ± √(b^2 - 4ac))/(2a). Completing the square requires grouping like terms and completing the square of the quadratic term.
This document discusses quadratic equations and methods for solving them. It begins by defining quadratic equations as second degree polynomial equations of the form ax^2 + bx + c = 0, where a is not equal to 0. It then presents several methods for finding the roots or solutions of quadratic equations: factoring, completing the square, and using the quadratic formula. Examples are provided to illustrate each method. The document also discusses graphing quadratic functions and key features of parabolas such as vertex, axis of symmetry, and direction of opening.
The document provides an introduction to the binomial theorem. It begins by discussing binomial coefficients through the Pascal's triangle. It then derives an explicit formula for binomial coefficients using factorials. Finally, it states the binomial theorem and provides examples of using it to expand algebraic expressions and estimate numerical values.
The document discusses implementing the Gauss-Jacobi iterative method to solve systems of linear equations. It begins by providing an overview of the Gauss-Jacobi method and its application to solve a sample system of 3 equations with 3 unknowns. It then compares the Gauss-Jacobi method to the Gauss-Seidel method, noting that Gauss-Seidel uses updated values in the current iteration while Gauss-Jacobi uses values from the previous iteration. The document concludes by providing C code to implement the Gauss-Jacobi method and listing its main advantages as being iterative, and its disadvantages as being inflexible and requiring large set-up time.
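The document's example implementation is in C; the same Gauss-Jacobi iteration can be sketched in Python as follows (a minimal version for a small diagonally dominant system, not the document's code):

```python
def jacobi(A, b, x0, iters=100):
    """Gauss-Jacobi iteration: every component of the new x is computed
    from the *previous* iterate only (unlike Gauss-Seidel, which uses
    already-updated components within the same sweep)."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        # The comprehension reads the old x throughout, then rebinds it.
        x = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
    return x

# Diagonally dominant system: 4x + y = 6, x + 3y = 7; exact solution x = 1, y = 2.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = jacobi(A, b, [0.0, 0.0])
```

Diagonal dominance guarantees convergence here; the whole-vector update is also what makes Jacobi trivially parallel, at the cost of slower convergence than Gauss-Seidel.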
Quadratic equations take the form of ax^2 + bx + c = 0, where a, b, and c are constants and a is not equal to 0. There are three main ways to solve quadratic equations: factoring, completing the square, and using the quadratic formula. Factoring involves finding two linear expressions whose product is the quadratic expression. Completing the square transforms the equation into the form (x + p)^2 = q. The quadratic formula provides exact solutions for x in terms of a, b, and c. The discriminant, b^2 - 4ac, determines the nature of the roots.
I am Bella A. I am a Statistical Method In Economics Assignment Expert at economicshomeworkhelper.com/. I hold a Ph.D. in Economics. I have been helping students with their homework for the past 9 years. I solve assignments related to Economics Assignment.
Visit economicshomeworkhelper.com/ or email info@economicshomeworkhelper.com.
You can also call on +1 678 648 4277 for any assistance with Statistical Method In Economics Assignments.
The document explains how to use the quadratic formula to find the roots or solutions of a quadratic equation. It provides the steps which are: 1) write the equation in standard form with all terms on one side, 2) identify the coefficients a, b, and c, and 3) substitute these coefficients into the quadratic formula. The formula is given as x = (-b ±√(b2 - 4ac))/2a. Worked examples are provided to demonstrate how to set up and solve a quadratic equation using the formula.
The document provides steps and examples for solving various types of word problems in algebra, including number, mixture, rate/time/distance, work, coin, and geometric problems. It also covers solving quadratic equations using methods like the square root property, completing the square, quadratic formula, factoring, and using the discriminant. Finally, it discusses linear inequalities, including properties related to addition, multiplication, division, and subtraction of inequalities.
Vedic mathematics is an ancient system of mathematics discovered from the Vedas. It uses unique calculation techniques based on simple principles to solve problems mentally in arithmetic, algebra, geometry, and trigonometry. It allows problems to be solved 10-15 times faster by reducing memorization of tables and scratch work. Vedic mathematics consists of 16 sutras or formulae derived from the Vedas that simplify complex mathematical operations.
The document defines quadratic equations as polynomial equations of the second degree where the highest exponent is 2. It provides the general form of a quadratic equation as ax2 + bx + c = 0, where a, b, and c are constants and a ≠ 0. There are three main ways to solve quadratic equations: factoring, completing the square, and using the quadratic formula. Factoring involves finding two linear expressions whose product is the quadratic, while completing the square rewrites the equation in the form of a perfect square trinomial. The discriminant, b2 - 4ac, determines the nature of the roots.
1) The document introduces concepts of differential calculus including derivatives, limits, continuity, and fundamental rules of taking derivatives.
2) It provides examples of calculating derivatives using notations like delta f, limits, and various derivative rules including the power rule, product rule, and quotient rule.
3) Methods for finding local maxima and minima are discussed, including using the first derivative test and analyzing stationary points where the first derivative is zero based on whether the derivative is increasing or decreasing on both sides.
This document introduces the Method of Least Squares (or Minimum Squares) for fitting curves to data points. It explains that this method finds the coefficients of a function that best approximates the relationship between x- and y-values in a dataset by minimizing the sum of squared residuals between the actual and predicted y-values. The document provides an example of using a linear and quadratic function to fit a dataset, showing how to set up and solve the normal equations to determine the coefficients. It also discusses evaluating the quality of fit using the R-squared value.
This document discusses linear multistep methods for solving initial value differential problems. It makes three key points:
1) Linear multistep methods can be viewed as fixed point iterative methods with a differential operator instead of an integral operator. They converge to a fixed point if consistent and stable.
2) The document proves that any linear multistep method defines a contraction mapping, ensuring convergence to a unique fixed point when the problem is Lipschitz continuous.
3) Examples of explicit and implicit linear multistep methods are given, along with their fixed point iterative formulations. These include Euler's method, trapezoidal method, and Adams-Moulton method.
This document discusses polynomial functions in MATLAB. It covers:
- Defining polynomials as coefficient vectors and finding roots.
- Adding, subtracting, multiplying and dividing polynomials using functions like conv and deconv.
- Evaluating and differentiating polynomials with polyval and polyder.
- Using polyfit for polynomial curve fitting to minimize squared errors between a polynomial and data set.
- An example of fitting increasing degree polynomials from 2 to 8 to cosine wave data, showing better fitting with higher degrees.
Travis Hills of MN is Making Clean Water Accessible to All Through High Flux ...Travis Hills MN
By harnessing the power of High Flux Vacuum Membrane Distillation, Travis Hills from MN envisions a future where clean and safe drinking water is accessible to all, regardless of geographical location or economic status.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita...Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
PPT on Alternate Wetting and Drying presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
CLASS 12th CHEMISTRY SOLID STATE ppt (Animated)eitps1506
Description:
Dive into the fascinating realm of solid-state physics with our meticulously crafted online PowerPoint presentation. This immersive educational resource offers a comprehensive exploration of the fundamental concepts, theories, and applications within the realm of solid-state physics.
From crystalline structures to semiconductor devices, this presentation delves into the intricate principles governing the behavior of solids, providing clear explanations and illustrative examples to enhance understanding. Whether you're a student delving into the subject for the first time or a seasoned researcher seeking to deepen your knowledge, our presentation offers valuable insights and in-depth analyses to cater to various levels of expertise.
Key topics covered include:
Crystal Structures: Unravel the mysteries of crystalline arrangements and their significance in determining material properties.
Band Theory: Explore the electronic band structure of solids and understand how it influences their conductive properties.
Semiconductor Physics: Delve into the behavior of semiconductors, including doping, carrier transport, and device applications.
Magnetic Properties: Investigate the magnetic behavior of solids, including ferromagnetism, antiferromagnetism, and ferrimagnetism.
Optical Properties: Examine the interaction of light with solids, including absorption, reflection, and transmission phenomena.
With visually engaging slides, informative content, and interactive elements, our online PowerPoint presentation serves as a valuable resource for students, educators, and enthusiasts alike, facilitating a deeper understanding of the captivating world of solid-state physics. Explore the intricacies of solid-state materials and unlock the secrets behind their remarkable properties with our comprehensive presentation.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Anti-Universe And Emergent Gravity and the Dark UniverseSérgio Sacani
Recent theoretical progress indicates that spacetime and gravity emerge together from the entanglement structure of an underlying microscopic theory. These ideas are best understood in Anti-de Sitter space, where they rely on the area law for entanglement entropy. The extension to de Sitter space requires taking into account the entropy and temperature associated with the cosmological horizon. Using insights from string theory, black hole physics and quantum information theory we argue that the positive dark energy leads to a thermal volume law contribution to the entropy that overtakes the area law precisely at the cosmological horizon. Due to the competition between area and volume law entanglement the microscopic de Sitter states do not thermalise at sub-Hubble scales: they exhibit memory effects in the form of an entropy displacement caused by matter. The emergent laws of gravity contain an additional ‘dark’ gravitational force describing the ‘elastic’ response due to the entropy displacement. We derive an estimate of the strength of this extra force in terms of the baryonic mass, Newton’s constant and the Hubble acceleration scale a0 = cH0, and provide evidence for the fact that this additional ‘dark gravity force’ explains the observed phenomena in galaxies and clusters currently attributed to dark matter.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...
Introduction to root finding
1. Introduction to root finding
Sanjeev Kumar Verma
Department of Physics and Astrophysics
University of Delhi
Delhi, INDIA - 110007
sanjeevkumarverma.wordpress.com
Sanjeev Kumar Verma Introduction to root finding
3. Introduction
On a Babylonian clay tablet (YBC 7289, roughly 1800-1600 BC), a square is
drawn with its two diagonals and the diagonal-to-side ratio has been
written as 1; 24, 51, 10.
Babylonians, unlike us, used the sexagesimal (base-60) number system. So,
the value is 1 + 24/60 + 51/60² + 10/60³ = 1.41421296...
Exercise: What is the accuracy of this value?
Exercise: Can you calculate it without a calculator?
Exercise: Can you calculate it without using the standard
method based upon division and completion of square?
4. Babylonian method of finding square roots
As an example, let’s find √2.
Start with any guess. Say, the root is 1.
The Babylonian method says that a better guess is the average of the
old guess and the given number divided by the old guess.
So, new guess = average of 1 and 2/1 = 1.5.
Exercise: Take 1.5 as the old guess and calculate the new guess.
Answer: average of 1.5 and 2/1.5 = 1.4167.
Repeating this process yields the sequence
{1, 1.5, 1.4166667, 1.4142157, 1.4142136, 1.4142136}.
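This recipe is easy to try on a computer. A minimal Python sketch (not part of the original slides; the function name, starting guess and tolerance are our illustrative choices):

```python
def babylonian_sqrt(N, x0=1.0, tol=1e-10):
    """Babylonian square root: replace the guess by the average of
    the guess and N divided by the guess, until x*x is close to N."""
    x = x0
    while abs(x * x - N) > tol:
        x = 0.5 * (x + N / x)  # new guess = average of x and N/x
    return x

print(babylonian_sqrt(2))  # 1.4142135... in a handful of iterations
```

Each pass of the loop is one iteration of the Babylonian rule; the same sequence {1, 1.5, 1.4166667, ...} appears if the intermediate guesses are printed.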
5. Babylonian method of finding square roots
Theorem: Let xn+1 = (1/2)(xn + N/xn). Then xn → x for any x0 > 0
with sufficiently large n, where x = √N.
The idea is to start with a guess x0 and generate the sequence
{xn} = {x0, x1, x2, x3, x4, ...} using

xn+1 = (1/2)(xn + N/xn). (1)

x0 is called the initial guess. Each application of Eq. (1) is called
an iteration. xn is the value of the root after n iterations.
With successive iterations, the gap between xn+1 and xn becomes
smaller and smaller. The difference between xn and the true root x
also becomes smaller and smaller with successive iterations. This is
called convergence.
6. Exercises: Iterative methods
Ex. 1. Babylonian method of finding square roots illustrates
the key ideas of any iterative method. List them.
We have an initial solution.
We have an iterative formula.
Successive iterations give better approximations to the
solution.
An iterative method should converge towards the actual
solution.
Ex. 2. Try to develop a Babylonian-like iterative method for
finding the cube root of a number. Use your intuition in the
absence of any theoretical guidelines.
7. Bisection method: basic idea
The concept behind the bisection method is illustrated with the help
of the following example.
Example: Find the solution to the equation x² = 2.

1 < 2 < 4 ⇒ 1 < √2 < 2.

So, the interval [1, 2] contains the root. Bisect the interval.
We get two new intervals: [1, 1.5] and [1.5, 2]. Which one
contains the root?
The first interval, because 1² < (√2)² < 1.5² ⇒ 1 < √2 < 1.5.
Bisect the interval [1, 1.5] and repeat the procedure.
Each iteration narrows down the root-containing interval
by a factor of two. Make a nice table of iterations.
Ex. How do you select the root containing interval?
Hint: Function changes sign at the root.
8. Bisection method: iteration table
Table: Iteration table for solving f(x) = x² − 2 = 0 using the bisection
method.

n   a       b       c=(a+b)/2   f(a)   f(b)   f(c)
1   1.000   2.000   1.500       -      +      +
2   1.000   1.500   1.250       -      +      -
3   1.250   1.500   1.375       -      +      -
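Such a table can be generated mechanically. A short illustrative Python sketch (the function name and formatting are our choices, not from the slides):

```python
def bisection(f, a, b, n_iter):
    """Bisection: repeatedly halve the interval, keeping the half
    on which f changes sign. Prints one table row per iteration."""
    assert f(a) * f(b) < 0, "f must change sign on [a, b]"
    sign = lambda y: "+" if y > 0 else "-"
    for n in range(1, n_iter + 1):
        c = (a + b) / 2
        print(n, f"{a:.3f}", f"{b:.3f}", f"{c:.3f}",
              sign(f(a)), sign(f(b)), sign(f(c)))
        if f(a) * f(c) < 0:
            b = c  # root lies in [a, c]
        else:
            a = c  # root lies in [c, b]
    return (a + b) / 2

root = bisection(lambda x: x * x - 2, 1.0, 2.0, n_iter=20)
```

With n_iter=3 the printed rows reproduce an iteration table for x² − 2 = 0; with 20 iterations the midpoint pins down √2 to better than 10⁻⁵.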
10. Bisection method: Estimate of convergence
The bisection method converges very slowly. How do we estimate the
number of iterations needed to solve an equation correct up to (say)
3, 4 or 5 decimal places?
Suppose we have to solve f(x) = 0 and the true solution is p. Let
[a, b] be the interval which contains the root.
The bisection method successively bisects the interval [a, b] into smaller
intervals [a1, b1], [a2, b2], [a3, b3], ..., [an, bn] where

(bn − an) = (b − a)/2ⁿ. (2)

The bisection method successively approximates the root by

pn = (an + bn)/2 (3)

so that the sequence {pn} approaches p in the large-n limit with

|pn − p| ≤ (b − a)/2ⁿ. (4)
Actual error can be smaller than the above estimate.
11. Bisection method: Estimate of convergence
Eq. (4) is the required estimate of convergence.
Proof: Using Eq. (3), we have

pn − p = (an + bn)/2 − p.

Since the root p lies inside [an, bn], its distance from the midpoint pn
cannot exceed half the width of the interval:

|pn − p| ≤ (an + bn)/2 − an = (bn − an)/2 = (b − a)/2ⁿ⁺¹ ≤ (b − a)/2ⁿ,

where Eq. (2) is used in the second-to-last step.
Exercise: What is the number of iterations needed to determine
the root of the equation x² − 2 = 0 to an accuracy of 10⁻² to
10⁻⁷?

accuracy  10⁻²  10⁻³  10⁻⁴  10⁻⁵  10⁻⁶  10⁻⁷
n         7     10    14    17    20    24

(These follow from (b − a)/2ⁿ ≤ ε with b − a = 1, i.e. n ≥ log₂(1/ε).)
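The required iteration count follows from requiring (b − a)/2ⁿ ≤ ε. A quick illustrative check in Python (our own helper, not from the slides; different roundings of the bound can shift an entry by one):

```python
import math

def iterations_needed(a, b, eps):
    """Smallest n with (b - a) / 2**n <= eps, i.e. the bisection
    iteration count guaranteed by the error bound of Eq. (4)."""
    return math.ceil(math.log2((b - a) / eps))

# Interval [1, 2] for x**2 - 2 = 0:
for k in range(2, 8):
    print(f"accuracy 1e-{k}: n = {iterations_needed(1.0, 2.0, 10.0 ** -k)}")
```

Roughly 10/3 extra iterations buy one more correct decimal place, since 2^(10/3) ≈ 10.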
12. Bisection method: final comments
It is the most fundamental method of root finding. When nothing
else works, use it.
It comes with a guarantee of convergence.
Its main drawback is the slow speed of convergence.
You can use it initially to narrow down the root-containing
interval and then switch over to faster methods.
Ex. Solve x² − 5 = 0, cos x − x = 0 and tan x = x using the
bisection method to an accuracy of 10⁻³.
14. Fixed point method: basic idea
Consider the iteration xn+1 = 1 + 1/(1 + xn), whose fixed point is √2
(as we will see below).
If x0 = 1, we have x1 = 1 + 1/2 = 1.5, x2 = 1 + 1/2.5 = 1.4
and so on. Essentially, we get the following sequence:
{xn} =
{1, 1.5, 1.4, 1.41667, 1.41379, 1.41429, 1.41420, 1.41422, ...}
7 iterations give you an accuracy of 10⁻⁵. Compare it with the
bisection method.
Even if you start with x0 = 10, an accuracy of 10⁻⁵ is achieved in
just 7 iterations!
Ex. Solve N = N0·e^(λt) + 3/λ for λ, where N = 180, N0 = 100
and t = 1.
Hint: The iteration formula is λn+1 = 3/(180 − 100·e^(λn)).
Solution: {λn} =
{1, −0.00326697, 0.0360515, 0.0393035, 0.0394782, 0.0394876,
0.0394881}.
15. Fixed point method: basic idea
The idea is to rewrite the equation f(x) = 0 as x = g(x). If
x = p is the root of f(x) = 0, it is called a fixed point of g(x).
With this rearrangement, finding the root of f(x) is equivalent
to finding the fixed point of g(x).
Def. x = p is a fixed point of a function g(x) if g(p) = p.
Exercise: Find out the geometric meaning of the fixed point.
But is there a guarantee that a fixed point always exists for a
function? If there is one, how do we find the interval containing
it? How do we know that there will be only one fixed point in
the given root-containing interval?
Ans. There is no guarantee of the existence of a fixed point for a
function in a given interval! However, its existence is not
guided by your luck! There is a theorem for it.
16. Fixed point method: It fails here!
Let’s again try to solve x2 − 2 = 0.
Rewrite it as x² + x − x − 2 = 0 and rearrange it to have
x = x² + x − 2.
This gives you an iteration formula: xn+1 = xn² + xn − 2.
Start with x0 = 1 and you get {xn} = {1, 0, −2, 0, −2, ...}. It
is not converging, it is oscillating!
Start with x0 = 2 and you get
{xn} = {2, 4, 18, 340, 115938...} : it is diverging!
What is wrong here? Can we diagnose the problem without
using mathematics?
If we can’t, let’s be mathematicians for a while!
17. Fixed point theorem
(i) Consider a continuous function g(x) defined on the
interval [a, b]. If g(x) ∈ [a, b] ∀ x ∈ [a, b], then ∃ a fixed point
p ∈ [a, b] defined as g(p) = p. (existence of a fixed point)
(ii) If, in addition, the function g(x) is differentiable and
|g′(x)| ≤ k < 1 for some positive number k and all x in [a, b],
then the fixed point p is unique. (uniqueness of the fixed point)
(iii) For any p0 ∈ [a, b], the sequence {pn} defined as
pn = g(pn−1)
converges to the unique fixed point p in [a, b]. (condition for
convergence)
18. Fixed point theorem: application
Let g(x) = x² + x − 2. Let x ∈ [0, 1]. Is g(x) ∈ [0, 1]? If yes,
there will be a fixed point in [0, 1].
Here, g(x) is a monotonically increasing function and
g(x) ∈ [−2, 0]. g(x) is completely outside the desired
interval! That is why you didn't get the root in this case!
Now, take g(x) = 1 + 1/(1 + x). Let x ∈ [0, 1]. Is g(x) ∈ [0, 1]? If
yes, there will be a fixed point in [0, 1]. Otherwise, not.
Here, g(x) is a monotonically decreasing function and
g(x) ∈ [1.5, 2]. So, there is no fixed point in [0, 1].
Is there a fixed point in [1, 2]?
If x ∈ [1, 2], then g(x) ∈ [1.33, 1.5] and hence g(x) ∈ [1, 2].
So, there will be a fixed point in [1, 2]!
Here, |g′(x)| = 1/(1 + x)² ≤ 1/4 for x ∈ [1, 2], which is always
smaller than 1. So, the fixed point is unique and there is a
guarantee of convergence in this case.
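The iteration on [1, 2] can be sketched as follows (illustrative Python, not from the slides; the stopping rule on successive iterates is an arbitrary choice):

```python
def fixed_point(g, p0, tol=1e-8, max_iter=100):
    """Fixed point iteration: p_n = g(p_{n-1}), stopping when
    successive iterates agree to within tol."""
    p = p0
    for _ in range(max_iter):
        p_new = g(p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("no convergence within max_iter iterations")

# g(x) = 1 + 1/(1 + x) maps [1, 2] into itself; its fixed point is sqrt(2)
p = fixed_point(lambda x: 1 + 1 / (1 + x), p0=1.0)
print(p)  # close to 1.41421356
```

Passing g(x) = x² + x − 2 with p0 = 1 or p0 = 2 instead reproduces the oscillation and divergence seen on the previous slide.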
19. Fixed point theorem: Proof
If you are not interested in the proof of this theorem, I can give
you two motivations:
A corollary of the theorem will give you the order of
convergence of this method! So, you can calculate how many
iterations are needed to get the fixed point correct up to 5
decimal places.
A question will definitely be asked based upon the fixed point
method in the finals!
20. Fixed point theorem: Proof
(i) If a or b is a fixed point, then g(a) = a or g(b) = b and we
are done.
If a and b are not fixed points, then, since g maps [a, b] into
itself, a < g(a) and g(b) < b. Why?
Let h(x) = g(x) − x. Then, h(a) = g(a) − a > 0 and
h(b) = g(b) − b < 0. Does this remind you of something?
The bisection method! By the intermediate value theorem,
∃ p ∈ (a, b) s.t.
h(p) = 0 ⇒ g(p) − p = 0 ⇒ g(p) = p. Hence, there exists a
fixed point p ∈ (a, b).
21. Fixed point theorem: Proof
(ii) Let |g′(x)| ≤ k < 1. Let p and q be two fixed points in
(a, b). If p ≠ q, then by the mean value theorem ∃ ζ ∈ (p, q) s.t.

(g(p) − g(q))/(p − q) = g′(ζ).

So,
|p − q| = |g(p) − g(q)| = |g′(ζ)||p − q| ≤ k|p − q| < |p − q|
since k < 1. But this is a contradiction! Hence, our
assumption is false: the fixed point is unique.
22. Fixed point theorem: Proof
(iii) Since g maps [a, b] into itself, the sequence pn = g(pn−1) is
defined and pn ∈ [a, b] ∀ n.
Since |g′(x)| ≤ k < 1 for all x, by the mean value theorem, for
each n we have
|pn − p| = |g(pn−1) − g(p)| = |g′(ζn)||pn−1 − p| ≤ k|pn−1 − p|
for some ζn ∈ (a, b).
So, |pn − p| ≤ k|pn−1 − p| ≤ k²|pn−2 − p| and so on.
Finally, |pn − p| ≤ kⁿ|p0 − p|.
In the limiting case when n approaches infinity, kⁿ → 0 and so
|pn − p| → 0, i.e. pn → p.
23. Fixed point theorem: Corollary
(i) The error in pn is bounded by
|pn − p| ≤ kⁿ max{p0 − a, b − p0}.
(ii) Also, |pn − p| ≤ (kⁿ/(1 − k)) |p1 − p0| ∀ n ≥ 1.
Proof (i): Since |pn − p| ≤ kⁿ|p0 − p|, we have
|pn − p| ≤ kⁿ max{p0 − a, b − p0}.
Why?
Both p0 and p lie between the points a and b. So, the distance
between p and p0 is always smaller than or equal to the
distance from p0 to a or to b, whichever is larger.

a −−−−−−−−− p0 −−−− p −−−−−−−−−−−− b
24. Fixed point theorem: Proof of Corollary
Proof (ii): For n ≥ 1,

|pn+1 − pn| = |g(pn) − g(pn−1)| ≤ k|pn − pn−1| ≤ ... ≤ kⁿ|p1 − p0|.

For m > n, the triangle inequality gives

|pm − pn| ≤ |pm − pm−1| + |pm−1 − pm−2| + ... + |pn+1 − pn|
≤ kᵐ⁻¹|p1 − p0| + kᵐ⁻²|p1 − p0| + ... + kⁿ|p1 − p0|
= kⁿ|p1 − p0|(1 + k + k² + ... + kᵐ⁻ⁿ⁻¹).

Taking the limit m → ∞ so that pm → p, we have

|p − pn| ≤ kⁿ|p1 − p0|(1 + k + k² + ...) = (kⁿ/(1 − k)) |p1 − p0|,

using the geometric series 1 + k + k² + ... = 1/(1 − k) for k < 1.
25. Fixed point theorem: Application
Consider g(x) = 1 + 1/(1 + x). The fixed point of this function is √2.
|g′(x)| = 1/(1 + x)². For x ∈ [1, 2], we have |g′(x)| ≤ k = 1/4.
In six iterations, we have |p − pn| < 0.0002.
The speed of convergence, therefore, depends upon the
maximum value of |g′(x)|, viz. k.
In each iteration, the error in the fixed point reduces by a factor
of k.
Ex. Can the fixed point method converge at a slower rate than the
bisection method?
Ans. For 0.5 < k < 1, the fixed point method converges at a
slower rate than the bisection method!
Lesson: Beware of simple-minded thumb rules like "the bisection
method is the slowest method". There is no way to escape
mathematical logic.
26. Fixed point theorem: Exercises
Solve tan x = x. Ans. x = 4.49341.
xn = {4.2, 1.77778, −4.76211, 20.0949, 2.96332, −0.180187,
−0.182163, −0.184205, −0.186317, −0.188503, −0.190768}.
Ex. Can you explain why it happens?
Solve cos x = x. Start with x0 = 0.7.
xn = {0.7, 0.764842, 0.721492, 0.750821, 0.731129,
0.744421, 0.73548, 0.741509, 0.73745, 0.740185, 0.738344}. It
is veeeery slow!
Solve x·e⁻ˣ + sin x − x = 0.
xn = {1., 1.20935, 1.29625, 1.31714, 1.32086, 1.32147,
1.32157, 1.32159, 1.32159, 1.32159, 1.32159}. Reasonable but
not good! We are still behind the ancient Babylonians!
27. Newton’s method
The convergence speed of the two methods we have studied is much
slower than that of the method used by the Babylonians to
calculate square roots almost 4000 years ago!
The modern world was introduced to such a fast method only
about 300 years ago, by Newton!
The iteration formula in Newton's method to calculate the
root of the function f(x) is

xn+1 = xn − f(xn)/f′(xn).

Ex. For f(x) = x² − 2, we have

xn+1 = xn − (xn² − 2)/(2xn) = xn − xn/2 + 1/xn = (1/2)(xn + 2/xn).

This is the old Babylonian formula!
28. Newton’s method: Application
Let’s go back to the tedious example f(x) = cos x − x = 0
with x0 = 0.7.
The iteration formula is

xn+1 = xn − (cos xn − xn)/(−sin xn − 1) = xn + (cos xn − xn)/(1 + sin xn).
The iterations give
{xn} = {0.7, 0.739436, 0.739085, 0.739085}: the root is
obtained to an accuracy of 10⁻⁶ in three iterations!
This is the fastest method we will study.
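A compact Python sketch of Newton's update for this example (illustrative; the tolerance and iteration cap are our choices):

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n),
    stopping when the step size falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence within max_iter iterations")

# Solve cos x - x = 0 starting from x0 = 0.7
root = newton(lambda x: math.cos(x) - x,
              lambda x: -math.sin(x) - 1.0,
              x0=0.7)
print(root)  # ~0.739085 after only a few iterations
```

Note that, unlike bisection, the method needs the derivative f′ as a second input.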
29. Newton’s Method: Applications
Now, you know that Newton’s method is behind the speed of the
Babylonian method. So, derive Babylonian-like formulas for
solving the following problems:
Finding cube roots:
The iterative formula for x³ = N is

xn+1 = (1/3)(2xn + N/xn²).

Finding the fixed point of a function g(x):
The fixed point of g(x) is the root of f(x) = g(x) − x.
Therefore, the iterative formula is

xn+1 = xn − (xn − g(xn))/(1 − g′(xn)).
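The cube-root formula can be checked numerically with a sketch like this (illustrative Python; the function name and tolerance are our choices):

```python
def babylonian_cbrt(N, x0=1.0, tol=1e-10):
    """Newton's method applied to x**3 = N:
    x_{n+1} = (2*x_n + N/x_n**2) / 3."""
    x = x0
    while abs(x ** 3 - N) > tol:
        x = (2 * x + N / x ** 2) / 3
    return x

print(babylonian_cbrt(27))  # converges to 3.0
```

As with square roots, each step averages the old guess with a correction term, here weighting the old guess twice.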
30. Newton’s method: Logic behind it
Suppose x = p is a root of f(x), so f(p) = 0. Let f′(p) ≠ 0, let
xn be an approximation to the root, and let |xn − p| be small.
Taylor expansion around x = xn:

f(x) = f(xn) + (x − xn)f′(xn) + ((x − xn)²/2) f′′(xn) + ...

For x = p, we have

0 = f(xn) + (p − xn)f′(xn) + ((p − xn)²/2) f′′(xn) + ...

Since |p − xn| is small and f′(xn) ≠ 0, we can drop the quadratic
term and solve for p:

p ≈ xn − f(xn)/f′(xn).

Hence, if xn is an approximation to the root, a better approximation will
be

xn+1 = xn − f(xn)/f′(xn).
31. Newton’s method: A theorem
Consider a continuous and differentiable function f(x) defined
over an interval [a, b]. If p ∈ [a, b] is such that f(p) = 0 and
f′(p) ≠ 0, then there exists a δ > 0 such that Newton's
method generates a sequence {xn} converging to p for any
initial guess x0 ∈ [p − δ, p + δ].
This theorem tells us about the inherent danger in Newton's
method. If the value of δ is too small and the initial guess is
chosen outside the interval [p − δ, p + δ], the sequence {xn}
may fail to converge.
To sense this danger beforehand in practical applications, we
should know what δ is.
32. Newton’s method: Proof of the theorem
Newton’s method for finding the root of f(x) is equivalent to the
fixed point method for finding the fixed point of the function
g(x) = x − f(x)/f′(x).
Here,

g′(x) = 1 − ([f′(x)]² − f(x)f′′(x))/[f′(x)]² = f(x)f′′(x)/[f′(x)]².

Since f(p) = 0, g′(p) = 0. So, |g′(x)| vanishes at x = p. Moving away
from p within the interval [p − δ, p + δ], |g′(x)| will increase, but
we can always choose a sufficiently small δ so that |g′(x)| ≤ k < 1
on the whole interval.
Let's show that g(x) maps the interval [p − δ, p + δ] into itself. For
all x ∈ [p − δ, p + δ], |x − p| ≤ δ.
So, by the mean value theorem,

|g(x) − p| = |g(x) − g(p)| = |g′(ζ)||x − p| ≤ k|x − p| ≤ kδ < δ,

which implies that g(x) is also contained in the interval
[p − δ, p + δ].
Hence, the sequence pn = g(pn−1) will converge to p according to the
fixed point theorem.