Least Squares
Christopher Carbone
We will connect the Best Approximation Theorem to least squares
approximation through a motivating example. We will then
explore data modeling by using least squares to find a line of
best fit, or a curve of best fit.
You complete an experiment to weigh Fiddler Crabs that
were submerged in a saline solution.
Time (minutes)    Mean Change of Weight (grams)
  0               0.0
 20               0.9
 40               1.1
 60               1.1
 80               1.1
100               1.2

Table 1
We want to predict the mean change of weight of the Fiddler
Crabs at 120 minutes.
Take x1 to be the initial mean change of weight and take x2
to be the weight gain per minute:
x1 + 0x2 = 0.0
x1 + 20x2 = 0.9
x1 + 40x2 = 1.1
x1 + 60x2 = 1.1
x1 + 80x2 = 1.1
x1 + 100x2 = 1.2
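Restating the six equations above in matrix form Ax = b, with unknowns x = (x1, x2):

A = \begin{bmatrix} 1 & 0 \\ 1 & 20 \\ 1 & 40 \\ 1 & 60 \\ 1 & 80 \\ 1 & 100 \end{bmatrix},
\qquad
b = \begin{bmatrix} 0.0 \\ 0.9 \\ 1.1 \\ 1.1 \\ 1.1 \\ 1.2 \end{bmatrix}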
To approximate the vector u by a vector on the plane W:
We will show that projWu is the best approximation of u by
vectors in W. To do this, choose any vector v∈W such that
v ≠ projWu.
Minimizing ||u – v|| over all such v then amounts to showing
that ||u – projWu|| < ||u – v||.
Given a finite-dimensional subspace W, we would like to write a
vector u as a linear combination of vectors in the
subspace W. But we cannot do this exactly, since u∉W.
We will show that the best we can do is projWu, since
for all v∈W with v ≠ projWu, ||u – projWu|| < ||u – v||.
Therefore, projWu is the “best approximation” of u by
vectors in W.
Proof:
• Let W be a finite-dimensional subspace of an inner
product space V, with u∈V and v∈W, v ≠ projWu.
• projWu, v∈W, so (projWu – v)∈W.
• u – projWu is orthogonal to W, so u – projWu is
orthogonal to projWu – v as well.
• u – v = (u – projWu) + (projWu – v).
• We can apply the Pythagorean Theorem:
||u – v||² = ||u – projWu||² + ||projWu – v||².
• As long as v ≠ projWu, ||u – v||² > ||u – projWu||².
Therefore, ||u – v|| > ||u – projWu||. ☐
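A quick numerical illustration of the theorem (our own example, not from the slides; the vectors are chosen arbitrarily): project u onto a plane W in R³ and compare distances.

import numpy as np

# Orthonormal basis for W (here, the xy-plane in R^3)
w1 = np.array([1.0, 0.0, 0.0])
w2 = np.array([0.0, 1.0, 0.0])

u = np.array([2.0, 3.0, 5.0])            # a vector not in W
proj = (u @ w1) * w1 + (u @ w2) * w2     # proj_W(u) = (2, 3, 0)
v = np.array([1.0, 1.0, 0.0])            # some other vector in W

print(np.linalg.norm(u - proj))          # 5.0
print(np.linalg.norm(u - v))             # about 5.48, strictly larger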
If we have a matrix A, then we can let the column space
of A be W. Thus, W consists of the vectors Ax for all
x∈Rⁿ. Since the system Ax = b is inconsistent, b∉W;
that is, there is no x∈Rⁿ such that Ax = b. We will find
x'∈Rⁿ such that Ax'∈W is the best approximation to b.
• Given the inconsistent linear system Ax = b, we want
to find the least squares solution x' to minimize
||Ax – b||. If ||Ax' – b|| is large, then x' is regarded as a
poor solution. If ||Ax' – b|| is small, x' is a good
solution. Thus, this solution x' is called a least squares
solution of Ax = b.
• Let e = Ax – b. Writing e component-wise yields
e = (e1, e2, …, em). Since we try to minimize this
vector, we are minimizing ||e|| = √(e1² + e2² + … + em²).
Therefore, the solution will minimize
||e||² = e1² + e2² + … + em² as well. Thus, we are
minimizing the sum of the squared errors, which is where
the name “least squares” comes from.
• From the Best Approximation Theorem, the closest
vector in W to b is the orthogonal projection of b on W.
• For x' to be a least squares solution to the system
Ax = b, Ax' = projWb.
• Since b∉W, b – Ax' = b – projWb is orthogonal to W.
• W is the column space of A, so b – Ax' must lie in the
nullspace of Aᵀ.
• A least squares solution x' of Ax = b therefore satisfies
Aᵀ(b – Ax') = 0, or equivalently AᵀAx' = Aᵀb.
• This system (the normal equations) is always consistent:
projWb lies in the column space of A, so some x' with
Ax' = projWb exists, and any such x' is a least squares
solution of Ax = b.
For an n x m matrix A, A has linearly independent
column vectors if and only if AᵀA is invertible.
So the square matrix AᵀA is invertible exactly when A
has linearly independent column vectors.
Thus, from AᵀAx' = Aᵀb we obtain x' = (AᵀA)⁻¹Aᵀb.
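A minimal NumPy sketch of this formula (our own illustration; the function name is arbitrary). In practice np.linalg.lstsq is usually preferred, since forming AᵀA explicitly can be ill-conditioned, but the code below follows the formula as stated.

import numpy as np

def least_squares(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Return the least squares solution x' of Ax = b, assuming the
    columns of A are linearly independent (so A^T A is invertible)."""
    # Solve the normal equations A^T A x' = A^T b
    return np.linalg.solve(A.T @ A, A.T @ b)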
Recalling our earlier example about the Fiddler Crabs:
the crabs regulate water through osmoregulation, so in
the saline solution they undergo osmosis and thus gain
weight. Let’s see the linear equations produced again:
x1 + 0x2 = 0.0
x1 + 20x2 = 0.9
x1 + 40x2 = 1.1
x1 + 60x2 = 1.1
x1 + 80x2 = 1.1
x1 + 100x2 = 1.2
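As a worked check, applying the normal equations AᵀAx' = Aᵀb to this system gives:

A^{T}A = \begin{bmatrix} 6 & 300 \\ 300 & 22000 \end{bmatrix}, \qquad
A^{T}b = \begin{bmatrix} 5.4 \\ 336 \end{bmatrix}, \qquad
x' = (A^{T}A)^{-1}A^{T}b \approx \begin{bmatrix} 0.42857 \\ 0.00943 \end{bmatrix}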
Given a set of data of the form (x1,y1), (x2,y2), …, (xn,yn), we
model the data by trying to fit an equation to it.
n + 1 data points can be fit exactly by a polynomial of degree
at most n. But when we try to fit a curve to data,
measurement error and rounding issues exist, so the curve
cannot be expected to fit the data exactly. So we will use
the least squares technique.
We minimize the error between the data points and the
line of best fit. This error is the vertical distance
between each point and the line, denoted dj.
We assume there is an additive error in the vertical
coordinates, not the horizontal coordinates,
giving yj = a0 + a1xj + dj.
The equation of the line would be y = a0 + a1x. So, the
resulting linear system would look like:
y1 = a0 + a1x1
y2 = a0 + a1x2
…
yn = a0 + a1xn
Or equivalently,
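in matrix form, with a column of ones and a column of the data’s x-values:

\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}
=
\begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \end{bmatrix}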
In the example, we have carried out the least squares
solution for a straight line of best fit.
The fitted equation is y = 0.42857 + 0.00943x.
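As a check (our own NumPy sketch, not part of the slides), the normal equations reproduce these coefficients from the Table 1 data and answer the original question about 120 minutes:

import numpy as np

t = np.array([0, 20, 40, 60, 80, 100], dtype=float)
y = np.array([0.0, 0.9, 1.1, 1.1, 1.1, 1.2])

A = np.column_stack([np.ones_like(t), t])    # columns: 1, t
x1, x2 = np.linalg.solve(A.T @ A, A.T @ y)   # normal equations
print(round(x1, 5), round(x2, 5))            # 0.42857 0.00943

# Predicted mean change of weight at 120 minutes:
print(round(x1 + 120 * x2, 2))               # about 1.56 grams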
The model is y = a0 + a1x + a2x².
The fitted equation is y = 0.14286 + 0.03086x – 0.00021x².
The model is y = a0 + a1x + a2x² + a3x³.
The fitted equation is
y = 0.0150794 + 0.0600331x – 0.0010129x² + 0.0000053x³.
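Both of these fits can be reproduced numerically; a short NumPy sketch (our own check):

import numpy as np

t = np.array([0, 20, 40, 60, 80, 100], dtype=float)
y = np.array([0.0, 0.9, 1.1, 1.1, 1.1, 1.2])

# np.polyfit returns coefficients from the highest degree down,
# so reverse them to read a0, a1, a2, ...
print(np.polyfit(t, y, 2)[::-1])   # quadratic coefficients
print(np.polyfit(t, y, 3)[::-1])   # cubic coefficients

The printed coefficients should agree with the equations above up to rounding.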
The number of columns in matrix A increases with the
degree of the proposed polynomial. In general, we can
fit a polynomial of degree n to m data points as:
y1 = a0 + a1x1 + a2x1² + … + anx1ⁿ
y2 = a0 + a1x2 + a2x2² + … + anx2ⁿ
…
ym = a0 + a1xm + a2xm² + … + anxmⁿ
We can solve for the least squares solution
x' = (AᵀA)⁻¹Aᵀb to fit m data points with a polynomial of
degree n ≤ m – 1.
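In matrix form, A is the Vandermonde-style matrix built from the data’s x-values:

\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}
=
\begin{bmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^n \\
1 & x_2 & x_2^2 & \cdots & x_2^n \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_m & x_m^2 & \cdots & x_m^n
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}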
Anton, Howard. Elementary Linear Algebra. 9th ed. John Wiley & Sons, 2005.
Johnson, Arlene. Animal Osmoregulation. Biology Lab, BI114L. 26 February 2010.