error 2.pdf
In [ ]: %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import sys
Error Definitions
Following is an example of the concepts of absolute error, relative error and decimal precision:
We shall test an approximation to the common mathematical constant $e$. Compute the absolute and relative errors along with the decimal precision if we take the approximate value $\hat{e} = 2.718$.
In [ ]: # We can use the formulas you derived above to calculate the actual numbers
absolute_error = np.abs(np.exp(1) - 2.718)
relative_error = absolute_error/np.exp(1)
print "The absolute error is "+str(absolute_error)
print "The relative error is "+str(relative_error)
Machine epsilon is a very important concept in floating point error. The value, even though minuscule, can easily compound over time and cause huge problems.
Below we see a problem demonstrating how easily machine error can creep into a simple piece of code:
In [ ]: a = 4.0/3.0
b = a - 1.0
c = 3*b
eps = 1 - c
print 'Value of a is ' +str(a)
print 'Value of b is ' +str(b)
print 'Value of c is ' +str(c)
print 'Value of epsilon is ' +str(eps)
Ideally eps should be 0, but instead we see the machine epsilon, and while the value is small it can lead to issues.
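For reference (an addition, not part of the original notebook), the value computed above can be compared against the machine epsilon reported by the standard library and NumPy:
# Sketch (addition): compare the computed eps with the values reported by the system
print 'sys.float_info.epsilon: ' + str(sys.float_info.epsilon)
print 'numpy.finfo(float).eps: ' + str(np.finfo(float).eps)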
In [ ]: print "The progression of error:"
for i in range(1,20):
print str(abs((10**i)*c - (10**i)))
The largest floating point number
The formula for obtaining this number is shown below; instead of calculating the value by hand we can also use the system library to find it.
In [ ]: maximum = (2.0-eps)*2.0**1023
print sys.float_info.max
print 'Value of maximum is ' +str(maximum)
The smallest floating point number
The formula for obtaining this number is shown below. Similarly, the value can be found using the system library.
In [ ]: minimum = eps*2.0**(-1022)
print sys.float_info.min
print sys.float_info.min*sys.float_info.epsilon
print 'Value of minimum is ' +str(minimum)
As we try to compute a number bigger than the aforementioned largest floating point number we see strange behavior: the computer assigns infinity to these values.
In [ ]: overflow = maximum*10.0
print 'Value of overflow is ' +str(overflow)
As we try to compute a number smaller than the aforementioned
smallest floating point number we see that
the computer assigns it the value 0. We actually lose precision
in this case.
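A small check (an addition, assuming IEEE double precision): values just below sys.float_info.min first become subnormal, losing precision gradually, before the true underflow to zero.
# Sketch (addition): gradual underflow to subnormals, then to zero
print sys.float_info.min / 2.0
print sys.float_info.min * sys.float_info.epsilon / 2.0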
In [1]: underflow = minimum/2.0
print 'Value of underflow is ' +str(underflow)
Truncation error is a very common form of error you will keep seeing in the area of Numerical Analysis/Computing.
Here we will look at the classic Calculus example of the approximation $\sin(x) \approx x$ near 0. We can plot them together to visualize the approximation and also plot the error to understand the behavior of the truncation error.
In [ ]: # Truncation error, plot of x vs Sin x
# Plot Sin x and x for the values between -pi and pi
x = np.linspace(-np.pi,np.pi,101)
plt.plot(x, x, '-r',x,np.sin(x),'bs')
plt.title("Comparing sin x to x on the whole domain")
plt.legend(["x", "Sin(x)"], loc=4)
plt.show()
# Now lets move our focus closer to 0, pick a range closer to 0
x = np.linspace(-0.5,0.5,21)
plt.plot(x, x, '-r',x,np.sin(x),'bs')
plt.title("Comparing sin x to x nearer to 0")
plt.legend(["x", "Sin(x)"], loc=4)
plt.show()
# Now we can plot the absolute error
error = np.absolute(np.sin(x) - x)
plt.plot(x, error,'-b')
plt.title("Error for $Sin(x) - x$")
#plt.legend(["x", "Sin(x)"], loc=8)
plt.show()
# Now we can plot the relative error
rel_error = np.absolute(error/x)
plt.plot(x, rel_error,'-b')
plt.title("Relative Error for $Sin(x) - x$")
#plt.legend(["x", "Sin(x)"], loc=8)
plt.show()
Model error arises in various forms. Here we are going to take some population data, fit two different models, and analyze which model is better for the given data.
In [ ]: # Model Error
time = [0, 1, 2, 3, 4, 5] # hours
growth = [20, 40, 75, 150, 297, 510] # Bacteria Population
time = np.array(time)
growth = np.array(growth)
# First we can just plot the data to visualize it
plt.plot(time,growth,'rs')
plt.title("Scatter plot for the Bacteria population growth over time")
plt.xlabel('Time (hrs)')
plt.ylabel('Population')
plt.show()
# Now we can use the Exponential Model, y = ab^x, to fit the data
a = 20.5122; b = 1.9238;
y = a*b**time[:]
plt.plot(time,growth,'rs',time,y,'-b')
plt.title("Exponential model fit")
plt.xlabel('Time (hrs)')
plt.ylabel('Population')
plt.legend(["Data", "Exponential Fit"], loc=4)
plt.show()
# Now we can use the Power Model, y = ax^b, to fit the data
a = 32.5846; b = 1.572;
y = a*time[:]**b
plt.plot(time,growth,'rs',time,y,'-b')
plt.title("Power model fit")
plt.xlabel('Time (hrs)')
plt.ylabel('Population')
plt.legend(["Data", "Power Fit"], loc=4)
plt.show()
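One possible way to make the comparison quantitative (an addition, not part of the original notebook) is to compute the sum of squared residuals of each fitted model against the data; a smaller value indicates a better fit.
# Sketch (addition): sum of squared residuals for each model
exp_fit = 20.5122 * 1.9238**time
pow_fit = 32.5846 * time.astype(float)**1.572
print "Exponential model SSE: " + str(np.sum((growth - exp_fit)**2))
print "Power model SSE: " + str(np.sum((growth - pow_fit)**2))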
error.pdf
In [2]: %matplotlib inline
%precision 16
import numpy
import matplotlib.pyplot as plt
Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel → Restart) and then run all cells (in the menubar, select Cell → Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below:
HW 1 - Forms of Error
Question 1
Find the absolute error, relative error, and decimal precision (number of significant decimal digits) for the following $f$ and approximations $\hat{f}$. Note that here we may also mean precision as compared to $f$. In these cases use the absolute error to help define $\hat{f}$'s precision (each worth 5 points).
(a) $f = \pi$ and $\hat{f} = 3.14$
(b) $f = \pi$ and $\hat{f} = 22/7$
(c) $f = \log(n!)$ and $\hat{f} = n \log(n) - n$ for $n = 5, 10, 100$ (Stirling's approximation)
(d) $f = e^x$ and $\hat{f} = T_n(x)$ where $T_n(x)$ is the Taylor polynomial approximation to $e^x$ expanded about $x = 0$. Consider $N = 1, 2, 3$. What value of $N$ is required for this approximation to be good to 6 digits of decimal precision?
(a)
Absolute error $= |\pi - 3.14| = 0.00159265358979$
Relative error $= \frac{|\pi - 3.14|}{\pi} = 0.000506957382897$
Precision $= 3$
print numpy.abs(numpy.pi - 3.14)
print numpy.abs(numpy.pi - 3.14) / (numpy.pi)
(b)
Absolute error $= |\pi - \frac{22}{7}| = 0.001264489227$
Relative error $= \frac{|\pi - \frac{22}{7}|}{\pi} = 1 - \frac{22}{7\pi} \approx 0.000402499434771$
Precision $= 3$
print numpy.abs(numpy.pi - 22.0 / 7.0)
print numpy.abs(numpy.pi - 22.0 / 7.0) / numpy.pi
(c)
Absolute error $= \log(n!) - (n \log(n) - n) = 1.7403021806115442, 2.0785616431350551, 3.2223569567543109$
Relative error $= \frac{\log(n!) - (n \log(n) - n)}{\log(n!)} = 0.3635102208239511, 0.1376128752494626, 0.0088589720368673$
Precision $= 3$ using the relative error.
import scipy.misc as misc
n = numpy.array([5, 10, 100])
numpy.abs(numpy.log(misc.factorial(n)) - n * numpy.log(n) + n)
numpy.abs(numpy.log(misc.factorial(n)) - (n * numpy.log(n) - n)) / numpy.abs(numpy.log(misc.factorial(n)))
(d)
Absolute error $= \left| e^x - \sum_{n=0}^{N} \frac{x^n}{n!} \right|$
Relative error $= \frac{\left| e^x - \sum_{n=0}^{N} \frac{x^n}{n!} \right|}{|e^x|}$
Precision - Since this question requires choosing some interval, however someone answers it they should use the same approach as was used for (c).
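A small numerical sketch (an addition; it assumes the comparison is made at $x = 1$, which the problem statement leaves open) showing how the relative error of $T_N(x)$ drops as $N$ grows:
# Sketch (addition): relative error of T_N at x = 1 for increasing N (evaluation point is an assumption)
import scipy.misc as misc
x = 1.0
for N in range(1, 12):
    T_N = sum(x**n / misc.factorial(n) for n in range(N + 1))
    print N, numpy.abs(numpy.exp(x) - T_N) / numpy.exp(x)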
Question 2
(a) (10) Write a Python program to compute
$$S_N = \sum_{n=1}^{N} \left[ \frac{1}{n} - \frac{1}{n + 1} \right] = \sum_{n=1}^{N} \frac{1}{n(n + 1)}$$
once using the first summation and once using the second for $N = 10, 10^2, \ldots, 10^7$.
In [6]: def sum_1(N):
"""Compute the summation S_N defined as
\sum^N_{n=1} \left[ \frac{1}{n} - \frac{1}{n+1} \right]
:Input:
*N* (int) The upper bound on the summation.
Returns Sn (float)
"""
### BEGIN SOLUTION
Sn = 0.0
for n in xrange(1, N + 1):
Sn += 1.0 / float(n) - 1.0 / (float(n) + 1.0)
### END SOLUTION
return Sn
def sum_2(N):
"""Compute the summation S_N defined as
\sum^N_{n=1} \frac{1}{n (n + 1)}
:Input:
*N* (int) The upper bound on the summation.
Returns Sn (float)
"""
### BEGIN SOLUTION
Sn = 0.0
for n in xrange(1, N + 1):
Sn += 1.0 / (float(n) * (float(n) + 1.0))
### END SOLUTION
return Sn
In [7]: N = numpy.array([10**n for n in xrange(1,8)])
answer = numpy.zeros((2, N.shape[0]))
for (n, upper_bound) in enumerate(N):
answer[0, n] = sum_1(upper_bound)
answer[1, n] = sum_2(upper_bound)
numpy.testing.assert_allclose(answer[0, :], numpy.array([0.9090909090909089, 0.9900990099009896, 0.9990009990009996, 0.9999000099990004, 0.9999900001000117, 0.9999990000010469, 0.9999998999998143]))
numpy.testing.assert_allclose(answer[1, :], numpy.array([0.9090909090909091, 0.9900990099009898, 0.9990009990009997, 0.9999000099990007, 0.9999900001000122, 0.9999990000010476, 0.9999998999998153]))
print "Success!"
(b) (5) Compute the absolute error between the two summation
approaches.
In [10]: def abs_error(N):
"""Compute the absolute error of the two sums defined as
\sum^N_{n=1} \left[ \frac{1}{n} - \frac{1}{n+1} \right]
and
\sum^N_{n=1} \frac{1}{n (n + 1)}
respectively for the given N.
:Input:
*N* (int) The upper bound on the summation.
Returns *error* (float)
"""
### BEGIN SOLUTION
error = numpy.abs(sum_2(N) - sum_1(N))
### END SOLUTION
return error
In [11]: N = numpy.array([10**n for n in xrange(1,8)])
answer = numpy.zeros(N.shape)
for (n, upper_bound) in enumerate(N):
answer[n] = abs_error(upper_bound)
numpy.testing.assert_allclose(answer, numpy.array([1.1102230246251565e-16, 1.1102230246251565e-16, 1.1102230246251565e-16, 3.3306690738754696e-16, 4.4408920985006262e-16, 6.6613381477509392e-16, 9.9920072216264089e-16]))
print "Success!"
(c) (10) Plot the relative and absolute error versus $N$. Also plot a line where $\epsilon_{machine}$ should be. Comment on what you see.
In [8]: fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
# HINT! Use the plotting function semilogx to plot the errors
# Also, do not forget to label your plot
### BEGIN SOLUTION
N = numpy.array([10**n for n in xrange(1,8)])
answer = numpy.zeros((2, N.shape[0]))
for (n, upper_bound) in enumerate(N):
answer[0, n] = abs_error(upper_bound)
answer[1, n] = numpy.abs(sum_1(upper_bound) - sum_2(upper_bound)) / numpy.abs(sum_2(upper_bound))
for n in xrange(2):
axes = fig.add_subplot(1, 2, n + 1)
axes.semilogx(N, answer[n, :])
axes.semilogx(N, answer[n, :], 'o')
axes.semilogx(N, numpy.finfo(float).eps * numpy.ones(N.shape))
axes.set_xlabel("Number of Terms in Series")
axes.set_ylabel("Absolute Error between Series")
### END SOLUTION
plt.show()
(d) (5) Theorize what may have led to the differences in answers.
Lots of possibilities here, just grade on being reasonable.
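One concrete mechanism (an addition, offered only as an example of a reasonable answer): the two mathematically identical forms of each term are rounded differently in floating point, and those per-term differences accumulate over the summation.
# Sketch (addition): the two forms of a single term already differ in the last bits
n = 3.0
print repr(1.0 / n - 1.0 / (n + 1.0))
print repr(1.0 / (n * (n + 1.0)))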
Question 3
Following our discussion in lecture regarding approximating $e^x$ again consider the Taylor polynomial approximation:
$$e^x \approx T_n(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!}$$
(a) Derive the upper bound on the relative error assuming that $x > 0$ and that $R_n = \frac{|e^x - T_n(x)|}{|e^x|}$ is given by
$$R_n \leq \left| \frac{x^{n+1}}{(n+1)!} \right|$$
Just using the definitions of the Taylor polynomial we can simplify the relative error to
$$R_n = \frac{|e^x - T_n(x)|}{|e^x|} = \frac{\left| \sum_{k=0}^{\infty} \frac{x^k}{k!} - \sum_{k=0}^{n} \frac{x^k}{k!} \right|}{|e^x|} = \frac{\left| \sum_{k=n+1}^{\infty} \frac{x^k}{k!} \right|}{|e^x|} = \frac{1}{|e^x|} \left| e^{\xi} \frac{x^{n+1}}{(n+1)!} \right| \leq \left| \frac{x^{n+1}}{(n+1)!} \right|$$
for some $\xi = \theta x$ with $0 < \theta < 1$ (Lagrange remainder).
For $0 \leq x \leq 1$ this simplifies to
$$R_n \leq \frac{1}{|e^1|} \left| e^1 \frac{1^{n+1}}{(n+1)!} \right| \leq \left| \frac{1}{(n+1)!} \right|$$
If $x \geq 1$ then we have no bound although taking more terms in the series will always allow for somewhat arbitrary control of the error.
(b) Show that for large $x$ and $n$, $r_n \leq \epsilon_{machine}$ implies that we need at least $n > e \cdot x$ terms in the series (where $e = \exp(1)$).
Hint: Use Stirling's approximation $\log(n!) \approx n \log n - n$.
Using the result from part (a) we have
$$r_n \leq \left| \frac{x^{N+1}}{(N+1)!} \right| \leq \epsilon_{machine}$$
Since $x \gg 1$ we can drop the absolute values
$$\frac{x^{N+1}}{(N+1)!} \leq \epsilon_{machine}$$
Now taking the $\log$ of both sides we have
$$(N + 1) \log x - (N + 1) \log(N + 1) + N + 1 \leq \log \epsilon_{machine}$$
where we have used Stirling's approximation. Now simplifying the left side and exponentiating we have
$$\log \left[ \left( \frac{x}{N + 1} \right)^{N+1} \right] + N + 1 \leq \log(\epsilon_{machine})$$
$$\left( \frac{x}{N + 1} \right)^{N+1} e^{N+1} \leq \epsilon_{machine}$$
Now taking the $N + 1$ root of both sides and using $\epsilon_{machine}^{\frac{1}{N+1}} < 1$ leads to
$$\frac{x e}{N + 1} \leq 1 \Rightarrow x \cdot e \leq N + 1$$
which leads to what we wanted (approximately).
(c) Write a Python function that accurately computes $T_n$ to the specified relative error tolerance and returns both the estimate on the range and the number of terms in the series needed over the interval $[-2, 2]$. Note that the testing tolerance will be $8 \cdot \epsilon_{machine}$.
Make sure to document your code including expected inputs, outputs, and assumptions being made.
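A quick numerical sketch (an addition) of the bound: for a fixed $x$, the smallest $N$ with $x^{N+1}/(N+1)! < \epsilon_{machine}$ is indeed larger than $e \cdot x$.
# Sketch (addition): smallest N with x**(N+1)/(N+1)! below machine epsilon, compared to e*x
import scipy.misc as misc
x = 10.0
N = 0
while x**(N + 1) / misc.factorial(N + 1) >= numpy.finfo(float).eps:
    N += 1
print N, numpy.exp(1.0) * x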
In [3]: # HINT: Think about how we evaluated polynomials
efficiently in class
import scipy.misc as misc
def Tn_exp(x, tolerance=1e-3):
MAX_N = 100
### BEGIN SOLUTION
method = 0
for N in xrange(0, MAX_N + 1):
if method == 0:
# Direct method
Tn = numpy.zeros(x.shape)
for n in xrange(N + 1):
Tn += x**n / misc.factorial(n)
elif method == 1:
# Use Horner's method!
p = numpy.array([1.0 / misc.factorial(n - 1) for n in xrange(N + 1, 0, -1)])
Tn = numpy.ones(x.shape) * p[0]
for coefficient in p[1:]:
Tn = Tn * x + coefficient
elif method == 2:
# Use direct evaluation through NumPy
p = numpy.array([1.0 / misc.factorial(n - 1) for n in xrange(N + 1, 0, -1)])
Tn = numpy.polyval(p, x)
# Check stopping criteria
if numpy.all(numpy.abs(Tn - numpy.exp(x)) / numpy.abs(numpy.exp(x)) < tolerance):
break
### END SOLUTION
return Tn, N
In [5]: x = numpy.linspace(-2, 2, 100)
tolerance = 8.0 * numpy.finfo(float).eps
answer, N = Tn_exp(x, tolerance=tolerance)
assert(numpy.all(numpy.abs(answer - numpy.exp(x)) / numpy.abs(numpy.exp(x)) < tolerance))
print "Success!"
Question 4
Given the Taylor polynomial expansions
$$\frac{1}{1 - \Delta x} = 1 + \Delta x + \Delta x^2 + \Delta x^3 + \mathcal{O}(\Delta x^4)$$
and
$$\cos \Delta x = 1 - \frac{\Delta x^2}{2!} + \frac{\Delta x^4}{4!} + \mathcal{O}(\Delta x^6)$$
determine the order of approximation for their sum and product (determine the exponent that belongs in the $\mathcal{O}(\Delta x^p)$ term).
Sum: $\frac{1}{1 - \Delta x} + \cos \Delta x = 2 + \Delta x + \frac{\Delta x^2}{2} + \Delta x^3 + \mathcal{O}(\Delta x^4)$
Product: $\frac{1}{1 - \Delta x} \cdot \cos \Delta x = 1 + \Delta x + \frac{\Delta x^2}{2} + \frac{\Delta x^3}{2} + \mathcal{O}(\Delta x^4)$
interpolation 2.pdf
In [ ]: %matplotlib inline
import numpy
import matplotlib.pyplot as plt
Group Work 3 - Interpolation
Additional resource: http://www.math.niu.edu/~dattab/MATH435.2013/INTERPOLATION
(a) Based on the principles of an interpolating polynomial, write the general system of equations for an interpolating polynomial $P_N(x)$ of degree $N$ that goes through the $N+1$ points given. Express this in matrix notation.
What are the inputs to the problem we are solving?
1. $N+1$ distinct points $x_0, x_1, \ldots, x_N$
2. $N+1$ functional values $y_0, y_1, \ldots, y_N$
Property of the interpolating polynomial: $P_N(x_0) = y_0, \ldots, P_N(x_N) = y_N$.
We know that an interpolating polynomial of degree $N$ is of the form:
$$P_N(x) = a_0 + a_1 x + \ldots + a_N x^N$$
With this, and the previous information, we can make $N+1$ equations:
$$P_N(x_0) = a_0 + a_1 x_0 + \ldots + a_N x_0^N = y_0$$
$$\vdots$$
$$P_N(x_N) = a_0 + a_1 x_N + \ldots + a_N x_N^N = y_N$$
This can be represented as a linear system $Ax = b$:
$$\begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^N \\ 1 & x_1 & x_1^2 & \cdots & x_1^N \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_N & x_N^2 & \cdots & x_N^N \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_N \end{bmatrix}$$
Clearly, our unknowns are the coefficients, which are the weights of the monomial basis.
Note, once the coefficients are determined, we multiply the coefficients by the monomial basis because they represent the respective weights for each element in the basis, and the result is the interpolating polynomial!
(b) What does the system of equations look like if you use the Lagrangian basis? Can you represent this in matrix form? Think about the basis and its role in the previous question. (Hint: start from the definition of an interpolating polynomial and what it must satisfy.)
Interpolating polynomial with the Lagrangian basis:
$$P_N(x) = \sum_{i=0}^{N} a_i \ell_i(x)$$
System of equations:
$$P_N(x_0) = a_0 \ell_0(x_0) + \cdots + a_N \ell_N(x_0) = y_0$$
$$\vdots$$
$$P_N(x_N) = a_0 \ell_0(x_N) + \cdots + a_N \ell_N(x_N) = y_N$$
Evaluate all the Lagrangian basis functions using the definition of the basis. At the point $x_0$:
$$\ell_0(x_0) = \prod_{i=0, i \neq 0} \frac{x_0 - x_i}{x_0 - x_i} = 1, \quad \ell_1(x_0) = \prod_{i=0, i \neq 1} \frac{x_0 - x_i}{x_1 - x_i} = 0, \quad \ldots, \quad \ell_N(x_0) = \prod_{i=0, i \neq N} \frac{x_0 - x_i}{x_N - x_i} = 0$$
At the point $x_1$:
$$\ell_0(x_1) = \prod_{i=0, i \neq 0} \frac{x_1 - x_i}{x_0 - x_i} = 0, \quad \ell_1(x_1) = \prod_{i=0, i \neq 1} \frac{x_1 - x_i}{x_1 - x_i} = 1, \quad \ldots, \quad \ell_N(x_1) = \prod_{i=0, i \neq N} \frac{x_1 - x_i}{x_N - x_i} = 0$$
In this way, we see that combining all $(n+1)^2$ of these equations into a matrix we get:
$$\begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_N \end{bmatrix}$$
Or $Ix = y$. The interpolating polynomial can then be represented as $P_N(x) = \sum_{i=0}^{N} y_i \ell_i(x)$.
(c) Are the systems you just derived related? What conclusion can you draw based on these two examples about the form of the linear system to find the coefficients?
The systems are basis dependent.
(c) Generate $N+1$ random points (take $N+1$ as user input), and construct the interpolating polynomial using a monomial basis. For this exercise assume $x \in [-\pi, \pi]$.
In [ ]: # Pick out random points
num_points = 6
data = numpy.empty((num_points, 2))
data[:, 0] = numpy.random.random(num_points) * 2.0 * numpy.pi - numpy.pi
data[:, 1] = numpy.random.random(num_points)
N = num_points - 1
#1: Form Vandermonde matrix and b vector
A = numpy.ones((num_points, num_points))
b = numpy.ones((num_points, 1))
A_prime = numpy.vander(data[:, 0], N = None, increasing = True)
#2 solve system
coefficients = numpy.linalg.solve(A_prime, data[:, 1])
#3 construct interpolating polynomial
x = numpy.linspace(-numpy.pi, numpy.pi, 100)
P = numpy.zeros(x.shape[0])
# first, create the monomial basis
monomial_basis = numpy.ones((num_points, x.shape[0]))
for i in xrange(num_points):
monomial_basis[i, :] = x**i
for n in range(num_points):
P += monomial_basis[n, :] * coefficients[n]
# Plot individual basis
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
for i in xrange(num_points):
axes.plot(x, monomial_basis[i, :], label="$x^%s$" % i)
axes.plot(data[i, 0], data[i, 1], 'ko', label = "Data")
# Plot interpolating polynomial
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, P, label="$P_{%s}(x)$" % N)
axes.set_xlabel("$x$")
axes.set_ylabel("$P_{N}(x)$")
axes.set_title("$P_{N}(x)$")
axes.set_xlim((-numpy.pi, numpy.pi))
# Plot data points
for point in data:
axes.plot(point[0], point[1], 'ko', label = "Data")
plt.show()
(d) Do the same as before except use a Lagrangian basis.
In [ ]: # Pick out random points
num_points = 10
data = numpy.empty((num_points, 2))
data[:, 0] = numpy.random.random(num_points) * 2.0 * numpy.pi - numpy.pi
print data[:, 0]
data[:, 1] = numpy.random.random(num_points)
N = num_points - 1
x = numpy.linspace(-numpy.pi, numpy.pi, 100)
# Step 1: Generate Lagrangian Basis
# Note, we have N+1 weights y_0 ... y_N so we have N+1 basis functions
# --> row size is then num_points & column size is the size of the vector x we are transforming
lagrangian_basis = numpy.ones((num_points, x.shape[0]))
for i in range(num_points):
for j in range(num_points):
if i != j:
lagrangian_basis[i, :] *= (x - data[j][0]) / (data[i][0] - data[j][0])
# Step 2: Calculate Full Polynomial
P = numpy.zeros(x.shape[0])
for i in range(num_points):
P += lagrangian_basis[i, :] * data[i][1]
# Plot individual basis
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
for i in xrange(num_points):
axes.plot(x, lagrangian_basis[i, :], label="$\ell_{%s}(x)$" % i)
axes.plot(data[i, 0], data[i, 1], 'ko', label = "Data")
# Plot polynomial
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, P, label="$P_{%s}(x)$" % N)
axes.set_xlabel("$x$")
axes.set_ylabel("$P_{N}(x)$")
axes.set_title("$P_{N}(x)$")
for point in data:
axes.plot(point[0], point[1], 'ko', label = "Data")
plt.show()
(e) What do you observe about the basis when we leave the interval $[-\pi, \pi]$?
They diverge quickly.
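A tiny illustration (an addition) of why: the monomial basis values blow up quickly once $x$ leaves the interval, so the interpolant is dominated by the highest powers there.
# Sketch (addition): monomial basis values just outside [-pi, pi]
x_out = 2.0 * numpy.pi
print [x_out**i for i in range(10)]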
interpolation.pdf
In [2]: %matplotlib inline
import numpy
import matplotlib.pyplot as plt
Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel → Restart) and then run all cells (in the menubar, select Cell → Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below:
HW 3: Interpolation
Question 1
Consider data at three points $(x_0, y_0) = (0, 0)$, $(x_1, y_1) = (1, 2)$, and $(x_2, y_2) = (2, 2)$.
(a) (20) Analytically find the interpolating polynomial $P(x)$ in the basis
1. Monomial: $P(x) = p_0 + p_1 x + p_2 x^2$
2. Newton: $P(x) = \sum_{i=0}^{2} a_i n_i(x)$
1. Monomials: We have the system of equations
$$\begin{bmatrix} 1 & x_0 & x_0^2 \\ 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \end{bmatrix} \begin{bmatrix} p_0 \\ p_1 \\ p_2 \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ y_2 \end{bmatrix}$$
Taking a Newton-like approach we can solve for $p_2$:
$$y_0 - y_1 = p_1 (x_0 - x_1) + p_2 (x_0^2 - x_1^2) = p_1 (x_0 - x_1) + p_2 (x_0 - x_1)(x_0 + x_1)$$
$$\frac{y_0 - y_1}{x_0 - x_1} = p_1 + p_2 (x_0 + x_1)$$
and similarly
$$\frac{y_2 - y_1}{x_2 - x_1} = p_1 + p_2 (x_2 + x_1).$$
Subtracting these we have
$$\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_0 - y_1}{x_0 - x_1} = p_2 (x_2 + x_1 - x_0 - x_1)$$
leading to
$$p_2 = \frac{\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_0 - y_1}{x_0 - x_1}}{x_2 - x_0}.$$
$p_1$ can then be solved for as
$$p_1 = \frac{y_0 - y_1}{x_0 - x_1} - \frac{\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_0 - y_1}{x_0 - x_1}}{x_2 - x_0} (x_0 + x_1)$$
and finally $p_0$ as
$$p_0 = y_0 - p_1 x_0 - p_2 x_0^2 = y_0 - \left( \frac{y_0 - y_1}{x_0 - x_1} - \frac{\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_0 - y_1}{x_0 - x_1}}{x_2 - x_0} (x_0 + x_1) \right) x_0 - \frac{\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_0 - y_1}{x_0 - x_1}}{x_2 - x_0} x_0^2.$$
The polynomial then can be written down explicitly. This can also be accomplished by using matrices.
2. Newton:
The basis functions $n_j(x)$ are calculated with $n_j(x) = \prod_{i=0}^{j-1} (x - x_i)$ so that
$$n_0(x) = 1, \quad n_1(x) = (x - x_0), \quad n_2(x) = (x - x_0)(x - x_1).$$
The coefficients are the divided differences
$$[y_0] = y_0, \quad [y_0, y_1] = \frac{y_1 - y_0}{x_1 - x_0}, \quad [y_0, y_1, y_2] = \frac{y_2 - y_1}{(x_2 - x_1)(x_2 - x_0)} - \frac{y_1 - y_0}{(x_1 - x_0)(x_2 - x_0)}$$
leading to a polynomial of the form
$$P(x) = y_0 + \frac{y_1 - y_0}{x_1 - x_0} (x - x_0) + \left( \frac{y_2 - y_1}{(x_2 - x_1)(x_2 - x_0)} - \frac{y_1 - y_0}{(x_1 - x_0)(x_2 - x_0)} \right) (x - x_0)(x - x_1)$$
(b) (10) Show that these all lead to the same polynomial (show that $P(x)$ is in fact unique).
The most straightforward way to do this is to gather terms multiplying each power of $x$ and show that they are equivalent in each representation:
1. Monomials: These are already collected into the powers $p_0$, $p_1$, and $p_2$ as found above.
2. Newton: Again we can collect terms (a little bit easier this time), taking special note that the last basis function has $(x - x_0)(x - x_1) = x^2 - x(x_0 + x_1) + x_0 x_1$. We can write
$x^0$: $p_0 = y_0 - \left( \frac{y_0 - y_1}{x_0 - x_1} - \frac{\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_0 - y_1}{x_0 - x_1}}{x_2 - x_0} (x_0 + x_1) \right) x_0 - \frac{\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_0 - y_1}{x_0 - x_1}}{x_2 - x_0} x_0^2 = y_0 - \frac{y_1 - y_0}{x_1 - x_0} x_0 + \left( \frac{y_2 - y_1}{(x_2 - x_1)(x_2 - x_0)} - \frac{y_1 - y_0}{(x_1 - x_0)(x_2 - x_0)} \right) x_1 x_0$
$x^1$: $p_1 = \frac{y_1 - y_0}{x_1 - x_0} - \left( \frac{y_2 - y_1}{(x_2 - x_1)(x_2 - x_0)} - \frac{y_1 - y_0}{(x_1 - x_0)(x_2 - x_0)} \right) (x_0 + x_1)$
$x^2$: $p_2 = \frac{y_2 - y_1}{(x_2 - x_1)(x_2 - x_0)} - \frac{y_1 - y_0}{(x_1 - x_0)(x_2 - x_0)}$
which are exactly the coefficients obtained by expanding the Newton form and collecting powers of $x$.
A more compact version of this uses matrices instead (significantly less tedious).
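A quick numerical check (an addition, not part of the graded solution) that the monomial and Newton forms produce the same polynomial for the given data $(0,0)$, $(1,2)$, $(2,2)$:
# Sketch (addition): compare the two forms on the given data
x0, x1, x2 = 0.0, 1.0, 2.0
y0, y1, y2 = 0.0, 2.0, 2.0
p = numpy.linalg.solve(numpy.vander([x0, x1, x2], increasing=True), [y0, y1, y2])
a1 = (y1 - y0) / (x1 - x0)
a2 = (y2 - y1) / ((x2 - x1) * (x2 - x0)) - (y1 - y0) / ((x1 - x0) * (x2 - x0))
x = numpy.linspace(0.0, 2.0, 5)
print p[0] + p[1] * x + p[2] * x**2
print y0 + a1 * (x - x0) + a2 * (x - x0) * (x - x1)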
(c) (10) Use the uniqueness of the interpolating polynomial to show that for $N + 1$ general points
$$\sum_{i=0}^{N} \ell_i(x) = 1$$
at any value of $x$ (i.e. the interpolant of a constant is a constant regardless of $N$).
Hint: Consider the Newton polynomial form and uniqueness.
Looking at the Newton form we can observe that all of the divided differences are identically $0$ except for the $[y_0]$ term since all the $y_i$ are equal. This leaves us with only the constant term in the polynomial, which of course is identically 1.
Question 2
(10) The $n$th Chebyshev polynomial is characterized (up to a constant) by the identity
$$T_n(\cos \theta) = \cos(n \theta)$$
Use this identity to show that the Chebyshev polynomials are orthogonal on $x \in [-1, 1]$ with respect to the weight
$$w(x) = \frac{1}{\sqrt{1 - x^2}}$$
To do this you must prove that
$$\int_{-1}^{1} w(x) T_n(x) T_m(x) \, dx = \begin{cases} a & m = n \\ 0 & m \neq n \end{cases}$$
where $a$ is a finite constant (also find this coefficient).
Setting $x = \cos \theta$ (leading to $dx = -\sin \theta \, d\theta$) and using $T_n(\cos \theta) = \cos(n \theta)$ in the expression for the integral leads to
$$\int_{-1}^{1} w(x) T_n(x) T_m(x) \, dx = \int_{\pi}^{0} \frac{T_n(\cos \theta) T_m(\cos \theta)}{\sqrt{1 - \cos^2 \theta}} (-\sin \theta) \, d\theta = \int_{0}^{\pi} \frac{\cos(n \theta) \cos(m \theta)}{\sin \theta} \sin \theta \, d\theta = \int_{0}^{\pi} \cos(n \theta) \cos(m \theta) \, d\theta$$
Using the rules regarding the orthogonality of $\cos$ we have
$$\int_{-1}^{1} w(x) T_n(x) T_m(x) \, dx = \begin{cases} \pi & m = n = 0 \\ \frac{\pi}{2} & m = n \neq 0 \\ 0 & m \neq n \end{cases}$$
Question 3
(10) For $N = 4$ find the maximum value and its location of $|\ell_2(x)|$ for equispaced points on $[-1, 1]$.
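A quick numerical sanity check (an addition): after the substitution the integrals reduce to $\int_0^\pi \cos(n\theta)\cos(m\theta)\,d\theta$, which can be approximated directly.
# Sketch (addition): approximate the reduced integrals with the trapezoidal rule
theta = numpy.linspace(0.0, numpy.pi, 10001)
for (n, m) in [(0, 0), (2, 2), (3, 5)]:
    print n, m, numpy.trapz(numpy.cos(n * theta) * numpy.cos(m * theta), theta)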
Analytically we need to take the derivative of
$$\ell_i(x) = \prod_{j=0, j \neq i}^{4} \frac{x - x_j}{x_i - x_j}$$
to find
$$\frac{d}{dx} \ell_i(x) = \sum_{k=0, k \neq i}^{4} \left( \frac{1}{x_i - x_k} \prod_{j=0, j \neq i, j \neq k}^{4} \frac{x - x_j}{x_i - x_j} \right).$$
The max/min of $\ell_i$ are at
$$0 = \sum_{k=0, k \neq i}^{4} \left( \prod_{j=0, j \neq i, j \neq k}^{4} (x - x_j) \right).$$
One simplification that is useful here is to use the values of the points $x_0 = -1$, $x_1 = -1/2$, $x_2 = 0$, $x_3 = 1/2$, and $x_4 = 1$. This allows $\frac{d}{dx} \ell_2(x)$ to be reduced to $x$ times a quadratic function:
$$\ell_2(x) = \frac{(x + 1)(x + 1/2)(x - 1/2)(x - 1)}{(1)(1/2)(-1/2)(-1)} = 4(x^2 - 1)(x^2 - 1/4)$$
$$\frac{d}{dx} \ell_2(x) = 4 \left( 2x(x^2 - 1) + 2x(x^2 - 1/4) \right) = 8x(2x^2 - 5/4)$$
which has roots at
$$x = 0, \ \pm \sqrt{\frac{5}{8}}.$$
From here we can plug in the values, find the maximums, and find out that the maximum is at $x = 0$ with a value of $|\ell_2(0)| = 1$.
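A quick numerical check (an addition): evaluating $|\ell_2(x)|$ on a fine grid over $[-1, 1]$ confirms the maximum of 1 at $x = 0$.
# Sketch (addition): evaluate |l_2(x)| on a fine grid
x_check = numpy.linspace(-1, 1, 1001)
ell_2 = 4.0 * (x_check**2 - 1.0) * (x_check**2 - 0.25)
print numpy.max(numpy.abs(ell_2)), x_check[numpy.argmax(numpy.abs(ell_2))]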
Question 4
Consider the Lebesgue function
$$\lambda_N(x) = \sum_{i=0}^{N} |\ell_i(x)|$$
where $\ell_i(x)$ are Lagrange basis functions for a given set of $x_i$. The maximum of the Lebesgue function is called the Lebesgue constant $\Lambda_N$. These are clearly related to Lagrangian interpolation as they provide a first estimate for the interpolation error. Unfortunately, $\Lambda_N$ is not uniformly bounded regardless of the nodes used, as one can show that $\Lambda_N$ grows at least logarithmically in $N$.
Note, $\Lambda_N$ is the infinity-norm of the linear operator mapping data to interpolant on the given grid and interval.
(a) (5) What do you expect the Lebesgue function to look like? Are there key points where we will know the function value exactly?
The primary observation is that $\lambda_N(x) = 1$ when $x = x_i$ for $i = 0, \ldots, N$, since exactly one basis function is 1 there and the rest are 0.
(b) (10) Plot the Lebesgue function for $x \in [-1, 1]$ for $N = 5, 10, 20$ with
$$x_i = -1 + \frac{2i}{N}, \quad i = 0, 1, \ldots, N.$$
For the case where $N = 20$ comment on what you see (you may need to use semilogy to see the results).
In [4]: def lebesgue(x, data):
"""Compute the Lebesgue function lambda_N(x) = sum_i |ell_i(x)|
:Input:
- *x* (numpy.ndarray) x values that the function will be evaluated at
- *data* (numpy.ndarray) Points x_i defining the Lagrange basis
:Output:
- (numpy.ndarray) The Lebesgue function evaluated at x
"""
lebesgue = numpy.zeros(x.shape[0])
for i in xrange(data.shape[0]):
lagrange_basis = numpy.ones(x.shape[0])
for j in xrange(data.shape[0]):
if i != j:
lagrange_basis *= (x - data[j]) / (data[i] - data[j])
lebesgue += numpy.abs(lagrange_basis)
return lebesgue
# Plot for each N
x = numpy.linspace(-1, 1, 1000)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 3)
for (i, N) in enumerate([5, 10, 20]):
data = -1.0 + 2.0 * numpy.arange(N + 1) / N
y = lebesgue(x, data)
axes = fig.add_subplot(1, 3, i + 1)
axes.semilogy(x, y, 'k')
axes.semilogy(data, numpy.ones(N + 1), 'ro')
axes.semilogy(data, numpy.ones(N + 1), 'o--')
axes.set_xlim((-1, 1))
axes.set_ylim((0.0, numpy.max(y)))
Due to numerical precision the $N = 20$ case does not quite get the evaluation at $x = x_i$ correct. We could evaluate exactly at these points and we would see what we would expect.
(c) (10) Plot the Lebesgue function for $x \in [-1, 1]$ for $N = 5, 10, 20$ with
$$x_i = \cos\left( \frac{(2i - 1)\pi}{2N} \right), \quad i = 1, \ldots, N + 1.$$
Again comment on what you see in the case $N = 20$.
In [16]: x = numpy.linspace(-1, 1, 1000)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 3)
for (i, N) in enumerate([5, 10, 34]):
# data = numpy.cos((2.0 * numpy.arange(1, N + 2) - 1.0) * numpy.pi / (2.0 * (N + 1)))
data = numpy.cos((2.0 * numpy.arange(N) + 1.0) / (2.0 * N) * numpy.pi)
# data = numpy.cos((2.0 * numpy.arange(N) + 1.0) / (2.0 * N) * numpy.pi)
data = numpy.cos(numpy.arange(N + 1) * numpy.pi / N)
y = lebesgue(x, data)
axes = fig.add_subplot(1, 3, i + 1)
axes.plot(x, y, 'k')
axes.plot(data, numpy.ones(N + 1), 'ro')
axes.plot(data, numpy.ones(N + 1), 'o--')
axes.set_xlim((-1, 1))
axes.set_ylim((0.0, numpy.max(y)))
Same problem as in part (b).
(d) (5) What do you observe about the Lebesgue function for
each of the distribution of points?
The growth of the Lebesgue constant is much slower with the
Chebyshev points. The maximum is also
obtained at all the points where the function reaches a
maximum.
(e) (10) Using suitable values for $N$ plot the Lebesgue constants of each of the above cases. Make sure to use a suitably large number of points to evaluate the function at. Graphically demonstrate that the constants grow with the predicted growth rate $\log N$. Describe what you observe.
In [17]: leb_constant = lambda x, data: numpy.max(lebesgue(x, data))
N_range = numpy.array([2**n for n in xrange(2, 6)])
N_range = numpy.array([20, 30, 40, 50, 60])
lebesgue_constant = numpy.empty((N_range.shape[0], 2))
x = numpy.linspace(-1, 1, 1000)
for (i, N) in enumerate(N_range):
data = -1.0 + 2.0 * numpy.arange(N + 1) / N
lebesgue_constant[i, 0] = numpy.max(lebesgue(x, data))
data = numpy.cos(numpy.arange(N) * numpy.pi / (N - 1))
lebesgue_constant[i, 1] = numpy.max(lebesgue(x, data))
order_C = lambda N, error, order: numpy.exp(numpy.log(error) - order * numpy.log(N))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(N_range, lebesgue_constant[:, 0], 'bo', label="equispaced")
axes.loglog(N_range, lebesgue_constant[:, 1], 'ro', label="Chebyshev")
axes.loglog(N_range, numpy.log(N_range), 'k--', label="$\log N$")
axes.legend(loc=2)
plt.show()
Clearly the equispaced points have a Lebesgue constant that is increasing much faster. Both show the approximate $\log N$ growth though.
roots 2.pdf
In [3]: %matplotlib inline
import numpy
import matplotlib.pyplot as plt
Group Work 2 - Rooting for the Optimum
After you are done please submit this on Vocareum as with the
homework.
Newton's Method:
For the following consider the function
$$f(x) = \cos(x) - 2x.$$
(a) Write down the Newton iteration for $f(x)$.
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n + \frac{\cos(x_n) - 2 x_n}{\sin(x_n) + 2}$$
(b) The first step in setting up Newton's method is to pick a good initial guess. One way to do this is to find two points $x_0$ and $x_1$ where $\text{sign}(f(x_0)) \neq \text{sign}(f(x_1))$. Find two such points for $f(x)$.
$(x_0, x_1) = (0, \pi/4)$ or $(-1, -2)$
(c) Using your update formula, your initial guess, and Newton's
method find the root of . Feel free to use
the code demonstrated in class. Make sure to use plots to get a
visual understanding for what is going on.
Additional things to play with:
1. Choose a small max step and display the results to see how Newton's method converges to the root.
2. Choose different tolerances and see how many iterations it
takes to converge to the root.
3. Choose a "bad" initial guess and see what happens.
In [19]: def newton(x_0, f, f_prime, max_steps=100,
tolerance=1e-4):
# Initial guess
x_n = x_0
success = False
for n in xrange(max_steps):
if numpy.abs(f(x_n)) < tolerance:
success = True
break
x_n = x_n - f(x_n) / f_prime(x_n)
if success:
return x_n, n
else:
raise ValueError("Method did not converge!")
# Demo code
f = lambda x: numpy.cos(x) - 2.0 * x
f_prime = lambda x: -numpy.sin(x) - 2.0
print newton(0.3, f, f_prime)
(0.4501875310743772, 2)
The Secant Method
For the following consider the function
$$f(x) = x^3 - x + 1.$$
(a) Write down the iteration for the secant method.
$$x_{k+1} = x_k - \frac{(x_k^3 - x_k + 1)(x_k - x_{k-1})}{(x_k^3 - x_k + 1) - f(x_{k-1})}$$
(b) The advantage of the secant method over Newton's method is that we don't need to calculate the derivative of the function. The disadvantage is that now we need two initial guesses to start the secant method.
As we did with Newton's method find two points $x_0$ and $x_1$ with the same properties as before (choose a bracket).
$(x_0, x_1) = (-1, -2)$
(c) Using your update formula, your initial guess, and the secant method find the root of $f(x)$. Again feel free to use the code demonstrated in class and use plots to visualize your results.
Additional things to play with:
1. Choose a small max step and display the results to see how the secant method converges to the root.
2. Choose different tolerances and see how many iterations it takes to converge to the root.
3. Choose a "bad" initial guess and see what happens.
In [18]: def secant(bracket, f, max_steps=100, tolerance=1e-4):
x_n = bracket[1]
x_nm = bracket[0]
success = False
for n in xrange(max_steps):
if numpy.abs(f(x_n)) < tolerance:
success = True
break
x_np = x_n - f(x_n) * (x_n - x_nm) / (f(x_n) - f(x_nm))
x_nm = x_n
x_n = x_np
if success:
return x_n, n
else:
raise ValueError("Secant method did not converge.")
f = lambda x: x**3 - x + 1.0
x = numpy.linspace(-2.0,-1.0,11)
# Initial guess
bracket = [-1.0, -2.0]
print secant(bracket, f)
(-1.324707936532088, 5)
Comparing convergence
Now that we have seen both methods, let's compare them for the function
$$f(x) = x - \cos(x)$$
(a) For the new function $f(x)$ derive the iteration for both Newton's method and the secant method.
Newton:
$$x_{n+1} = x_n - \frac{x_n - \cos(x_n)}{1 + \sin(x_n)}$$
Secant:
$$x_{n+1} = x_n - \frac{(x_n - \cos(x_n))(x_n - x_{n-1})}{x_n - \cos(x_n) - x_{n-1} + \cos(x_{n-1})}$$
(b) Choose a bracket to start both methods.
$(0, \pi/4)$
(c) Using your code from before (you could do this easily if you write the above as a function) see how the two methods compare in terms of the number of iterations it takes to converge. Play around with your choice of bracket and see how this might impact both methods.
In [26]: f = lambda x: x - numpy.cos(x)
f_prime = lambda x: 1.0 + numpy.sin(x)
# Initial guess
bracket = [-1.0, 1.0]
max_steps = 100
tolerance = 1e-10
print "Newton: ", newton(bracket[0], f, f_prime, max_steps=max_steps, tolerance=tolerance)
print "Secant: ", secant(bracket, f, max_steps=max_steps, tolerance=tolerance)
Newton: (0.73908513321516067, 8)
Secant: (0.73908513321516067, 6)
roots.pdf
In [1]: %matplotlib inline
import numpy
import matplotlib.pyplot as plt
Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel → Restart) and then run all cells (in the menubar, select Cell → Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below:
HW 2: Root Finding and Optimization
Question 1 - Finding the Root
Let's say that we wanted to calculate $\sqrt{M}$ given that $M \in \mathbb{R}$ and $M > 0$ and that we did not want to use the function sqrt directly. One way to do this is to solve for the zeros of the function $f(x) = x^2 - M$.
Note that not all the methods will work!
Make sure to handle the case where $M_0 = \sqrt{M}$.
We are only looking for the positive root of $f(x)$.
(a) (5 points) Write a function that uses fixed-point iteration to solve for the zeros of $f(x)$.
Note: There are multiple ways to write the iteration function $g(x)$, some work better than others. Make sure to use the input function $f(x)$ to formulate this.
In [2]: def fixed_point(x_0, f, tolerance):
"""Find the zeros of the given function f using fixed-point
iterat
ion
:Input:
- *x_0* (float) - Initial iterate
- *f* (function) - The function that will be analyzed
- *tolerance* (float) - Stopping tolerance for iteration
:Output:
If the iteration was successful the return values are:
- *M* (float) - Zero found via the given intial iterate.
- *n* (int) - Number of iterations it took to achieve the
specifi
ed
tolerance.
otherwise
- *x* (float) - Last iterate found
- *n* (int) - *n = -1*
"""
# Parameters
MAX_STEPS = 1000
### BEGIN SOLUTION
g = lambda x: x - f(x)
g = lambda x: f(x) + x
x = x_0
if numpy.abs(f(x)) < tolerance:
success = True
n = 0
else:
success = False
for n in xrange(1, MAX_STEPS + 1):
x = g(x)
if numpy.abs(f(x)) < tolerance:
success = True
break
if not success:
return x, -1
### END SOLUTION
return x, n
In [3]: M = 1.8
TOLERANCE = 1e-10
f = lambda x: x**2 - M
# Note that this test probably will fail
try:
M_f, n = fixed_point(2.0, f, TOLERANCE)
except OverflowError:
print "Fixed-point test failed!"
print "Success!"
else:
if n == -1:
print "Fixed-point test failed!"
print "Success!"
else:
print M_f, n
raise ValueError("Test should have failed!")
Fixed-point test failed!
Success!
(b) (5 points) Write a function that uses Newton's method to find the roots of $f(x)$. The analytical derivative of $f(x)$, $f'(x)$, is provided.
In [4]: def newton(x_0, f, f_prime, tolerance):
"""Find the zeros of the given function f using Newton's
method
:Input:
- *M_0* (float) - Initial iterate
- *f* (function) - The function that will be analyzed
- *f_prime* (function) - The derivative of *f*
- *tolerance* (float) - Stopping tolerance for iteration
:Output:
If the iteration was successful the return values are:
- *M* (float) - Zero found via the given intial iterate.
- *n* (int) - Number of iterations it took to achieve the
specifi
ed
tolerance.
otherwise
- *M* (float) - Last iterate found
- *n* (int) - *n = -1*
"""
# Parameters
MAX_STEPS = 1000
### BEGIN SOLUTION
x = x_0
if numpy.abs(f(x)) < tolerance:
success = True
n = 0
else:
success = False
for n in xrange(1, MAX_STEPS + 1):
x = x - f(x) / f_prime(x)
if numpy.abs(f(x)) < tolerance:
success = True
break
if not success:
return x, -1
### END SOLUTION
return x, n
In [5]: M = 3.0
TOLERANCE = 1e-10
f = lambda x: x**2 - M
f_prime = lambda x: 2.0 * x
M_f, n = newton(2.0, f, f_prime, TOLERANCE)
numpy.testing.assert_almost_equal(M_f, numpy.sqrt(M))
print M_f, n
assert(n == 4)
M_f, n = newton(numpy.sqrt(M), f, f_prime, TOLERANCE)
print M_f, n
assert(n == 0)
print "Success!"
1.73205080757 4
1.73205080757 0
Success!
(c) (5 points) Write a function to find the zeros of $f(x)$ using the secant method.
In [6]: def secant(x_0, f, tolerance):
"""Find the zeros of the given function f using the secant
method
:Input:
- *M_0* (float) - Initial bracket
- *f* (function) - The function that will be analyzed
- *tolerance* (float) - Stopping tolerance for iteration
:Output:
If the iteration was successful the return values are:
- *M* (float) - Zero found via the given intial iterate.
- *n* (int) - Number of iterations it took to achieve the
specifi
ed
tolerance.
otherwise
- *M* (float) - Last iterate found
- *n* (int) - *n = -1*
"""
# Parameters
MAX_STEPS = 1000
### BEGIN SOLUTION
x = x_0
if numpy.abs(f(x[1])) < tolerance:
success = True
n = 0
else:
success = False
for n in xrange(1, MAX_STEPS + 1):
x_new = x[1] - f(x[1]) * (x[1] - x[0]) / (f(x[1]) - f(x[0]))
x[0] = x[1]
x[1] = x_new
if numpy.abs(f(x[1])) < tolerance:
success = True
break
if not success:
return x[1], -1
else:
x = x[1]
### END SOLUTION
return x, n
In [7]: M = 3.0
TOLERANCE = 1e-10
f = lambda x: x**2 - M
M_f, n = secant([0.0, 3.0], f, TOLERANCE)
numpy.testing.assert_almost_equal(M_f, numpy.sqrt(M))
print M_f, n
assert(n == 7)
M_f, n = secant([1.0, numpy.sqrt(M)], f, TOLERANCE)
assert(n == 0)
print "Success!"
1.73205080757 7
Success!
(d) (5 points) Using the theory and illustrative plots explain why the fixed-point method did not work (pick a bracket that demonstrates the problem well).
The range is not contained within the domain and therefore fixed-point iteration will not converge. The plot below should be included.
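A complementary check (an addition, not part of the graded answer): with the choice $g(x) = f(x) + x = x^2 + x - M$ used above, convergence of fixed-point iteration requires $|g'(x)| < 1$ near the fixed point, and here $|g'(\sqrt{M})| = |2\sqrt{M} + 1| > 1$.
# Sketch (addition): |g'(x)| at the positive fixed point for g(x) = x**2 + x - M
M = 1.8
g_prime = lambda x: 2.0 * x + 1.0
print g_prime(numpy.sqrt(M))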
In [8]: # Place plotting code here if needed
### BEGIN SOLUTION
x = numpy.linspace(-3.0, 5.0, 100)
bracket = [1.5, 1.8]
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)#, aspect='equal')
axes.plot(x, x**2 - 3.0, 'r')
axes.plot(x, numpy.zeros(x.shape), 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
axes.set_xlim([1.0, 2.0])
axes.set_ylim([-1.5, 1.0])
# Plot domain and range
axes.plot(numpy.ones(x.shape) * bracket[0], x, '--k')
axes.plot(numpy.ones(x.shape) * bracket[1], x, '--k')
axes.plot(x, numpy.ones(x.shape) * (bracket[0]**2 - 3.0), '--k')
axes.plot(x, numpy.ones(x.shape) * (bracket[1]**2 - 3.0), '--k')
plt.show()
### END SOLUTION
Question 2 - Bessel Function Zeros
The zeros of the Bessel function $J_0(x)$ can be important for a number of applications. Considering only $x \geq 0$ we are going to find the first ten zeros of $J_0(x)$ by using a hybrid approach.
(a) (5 points) Plot the Bessel function $J_0(x)$ and its zeros on the same plot. Note that the module scipy.special contains functions dealing with the Bessel functions (jn).
In [9]: import scipy.special
x = numpy.linspace(0.0, 50.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, scipy.special.jn(0, x))
axes.plot(x, numpy.zeros(x.shape),'k--')
axes.plot(scipy.special.jn_zeros(0, 10), numpy.zeros(10), 'ro')
print scipy.special.jn_zeros(0, 10)
axes.set_title("Bessel Function $J_0(x)")
axes.set_xlabel("x")
axes.set_ylabel("$J_0(x)$")
plt.show()
[ 2.40482556 5.52007811 8.65372791 11.79153444
14.93091771
18.07106397 21.21163663 24.35247153 27.49347913
30.63460647]
(b) (15 points) Now write a function j0_zeros that takes two tolerances: a bracket size tolerance bracket_tolerance and a tolerance for the final convergence, tolerance. Given an initial bracket, the function should perform secant iterations until the bracket size is less than bracket_tolerance. If this is successful then proceed with Newton's method using the newest value of the bracket until tolerance is reached. Return both the zero found and the number of steps needed in each iteration. Also write a doc-string for the function.
Notes:
Newton's method by itself does not work here given the initial
brackets provided.
The secant method does work however it is slower than the
approach outlined.
Try playing a bit yourself with the tolerances used.
In [10]: import scipy.special
# Note that the num_steps being returned should be a list
# of the number of steps being used in each method
def j0_zeros(x0, bracket_tolerance, tolerance):
### BEGIN SOLUTION
# Parameters
MAX_STEPS = 100
# Output
num_steps = [0, 0]
# Useful functions
f = lambda x: scipy.special.jn(0, x)
f_prime = lambda x: -scipy.special.jn(1, x)
x = x0
if numpy.abs(f(x0[0])) < tolerance or numpy.abs(f(x0[1])) < tolerance:
success = True
n = 0
else:
success = False
for n in xrange(1, MAX_STEPS + 1):
x_new = x[1] - f(x[1]) * (x[1] - x[0]) / (f(x[1]) - f(x[0]))
x[0] = x[1]
x[1] = x_new
if numpy.abs(x[1] - x[0]) < bracket_tolerance:
success = True
num_steps[0] = n
break
if not success:
return x[1], -1
else:
x[1], num_steps[1] = newton(x[1], f, f_prime, tolerance)
x = x[1]
### END SOLUTION
return x, num_steps
In [11]: brackets = [[ 2.0, 3.0], [ 4.0, 7.0], [ 7.0, 10.0], [10.0, 12.0],
[13.0, 15.0], [17.0, 19.0], [19.0, 22.0],
[22.0, 26.0], [26.0, 29.0], [29.0, 32.0]]
zero = []
for bracket in brackets:
x, num_steps = j0_zeros(bracket, 1e-1, 1e-15)
print x, num_steps
zero.append(x)
numpy.testing.assert_allclose(zero, scipy.special.jn_zeros(0, 10), rtol=1e-14)
print "Success!"
2.4048255577 [2, 3]
5.52007811029 [4, 2]
8.65372791291 [2, 3]
11.791534439 [3, 2]
14.9309177085 [2, 2]
18.0710639679 [2, 2]
21.2116366299 [3, 2]
24.3524715307 [4, 2]
27.493479132 [2, 3]
30.6346064684 [3, 2]
Success!
Question 3 - Newton's Method Convergence
Recall that Newton's method converges as
$$|\epsilon_{n+1}| = \frac{|f''(c)|}{2 |f'(x_n)|} |\epsilon_n|^2$$
with $\epsilon_n = x_n - x^*$ where $x^*$ is the true solution and $c$ is between $x_n$ and $x^*$.
(a) (10 points) Show that the Newton iteration when $f(x) = x^2 - M$ with $M > 0$ is
$$x_{n+1} = \frac{1}{2} \left( x_n + \frac{M}{x_n} \right)$$
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^2 - M}{2 x_n} = x_n - \frac{1}{2} \left( x_n - \frac{M}{x_n} \right) = \frac{1}{2} \left( x_n + \frac{M}{x_n} \right)$$
(b) (10 points) From this update scheme show that
$$x_{n+1} - \sqrt{M} = \frac{(x_n - \sqrt{M})^2}{2 x_n}$$
$$x_{n+1} = \frac{1}{2 x_n} (x_n^2 + M) = \frac{1}{2 x_n} (x_n^2 + M - 2 \sqrt{M} x_n + 2 \sqrt{M} x_n) = \frac{1}{2 x_n} (x_n - \sqrt{M})^2 + \sqrt{M}$$
so that
$$x_{n+1} - \sqrt{M} = \frac{(x_n - \sqrt{M})^2}{2 x_n}$$
(c) (10 points) Confirm that the asymptotic error convergence matches the general convergence for Newton's method.
Here
$$\frac{|f''(c)|}{2 |f'(x_n)|} = \frac{2}{4 x_n} = \frac{1}{2 x_n}$$
The connection of course is that the previous formula is really
$$\epsilon_{n+1} = \frac{\epsilon_n^2}{2 x_n}$$
which confirms the result directly.
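A quick numerical check (an addition): for the first few iterations the ratio $\epsilon_{n+1}/\epsilon_n^2$ does track $1/(2 x_n)$ (the comparison stops being meaningful once the error hits machine precision).
# Sketch (addition): error ratio for the sqrt(M) iteration
M = 3.0
x = 2.0
for n in range(3):
    x_new = 0.5 * (x + M / x)
    print n, (x_new - numpy.sqrt(M)) / (x - numpy.sqrt(M))**2, 1.0 / (2.0 * x)
    x = x_new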
Question 4 - Optimization of a Data Series
For the following questions we are given a set of data $(t_0, y_0), (t_1, y_1), \ldots, (t_N, y_N)$.
(a) (15 points) Write a function that takes in the data series $(t_i, y_i)$ and finds the value at a point $t^*$ by constructing the equation of the line between the two data points that bound $t^*$ and evaluating the resulting function at $t^*$.
Hints:
Make sure to handle the case that $t^* = t_i$.
If $t^* < t_0$ or $t^* > t_N$ then return the corresponding value $y_0$ or $y_N$.
If you write your function so that $t^*$ can be an array you can use the plotting code in the cell. Otherwise just delete it.
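As a side note (an addition, not the required solution), NumPy's built-in numpy.interp performs exactly this piecewise-linear evaluation and also clamps to $y_0$ / $y_N$ outside the data range, so it can serve as a cross-check:
# Sketch (addition): cross-check the behaviour with numpy.interp on a tiny demo data set
t_demo = numpy.array([0.0, 1.0, 2.0])
y_demo = numpy.array([0.0, 2.0, 2.0])
print numpy.interp(0.5, t_demo, y_demo), numpy.interp(-1.0, t_demo, y_demo), numpy.interp(5.0, t_demo, y_demo)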
In [12]: def linear_eval(t, y, t_star):
# BEGIN SOLUTION
if isinstance(t_star, float):
t_star = [t_star]
y_star = [0.0]
else:
y_star = numpy.empty(t_star.shape)
for (i, t_star_val) in enumerate(t_star):
if t_star_val < t[0]:
y_star[i] = y[0]
elif t_star_val > t[-1]:
y_star[i] = y[-1]
else:
for (n, t_val) in enumerate(t):
if t_val > t_star_val:
y_star[i] = (y[n-1] - y[n]) / (t[n-1] - t[n]) * (t_star_val - t[n]) + y[n]
break
elif t_val == t_star_val:
y_star[i] = y[n]
break
# END SOLUTION
return y_star
N = 10
t_fine = numpy.linspace(-numpy.pi, numpy.pi, 100)
t_rand = numpy.random.rand(N + 1) * (2.0 * numpy.pi) - numpy.pi
t_rand.sort()
f = lambda x: numpy.sin(x) * numpy.cos(x)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth()*1.5)
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_fine, f(t_fine), 'k-', label="True")
axes.plot(t_rand, f(t_rand), 'og', label="Sample Data")
axes.plot(t_fine, linear_eval(t_rand, f(t_rand), t_fine), 'xb', label="linear_eval")
axes.set_xlim((-numpy.pi, numpy.pi))
axes.set_title("Demo Plot")
axes.set_xlabel('$t$')
axes.set_ylabel('$f(t)$')
axes.legend()
plt.show()
In [13]: N = 100
f = lambda x: numpy.sin(x) * numpy.cos(x)
t = numpy.linspace(-1, 1, N + 1)
t_star = 0.5
answer = linear_eval(t, f(t), t_star)
if isinstance(answer, list):
answer = answer[0]
print "Computed solution: %s" % answer
print "True solution: %s" % f(t_star)
numpy.testing.assert_almost_equal(answer, f(t_star), verbose=True, decimal=7)
print "Success!"
Computed solution: 0.420735492404
True solution: 0.420735492404
Success!
(b) (10 points) Using the function you wrote in part (a) write a function that uses Golden search to find the minimum of a series of data. Again you can use the plotting code available if your linear_eval function from part (a) handles arrays.
In [14]: def golden_search(bracket, t, y, max_steps=100, tolerance=1e-4):
phi = (numpy.sqrt(5.0) - 1.0) / 2.0
# BEGIN SOLUTION
f = lambda x: linear_eval(t, y, x)
x = numpy.array([bracket[0], None, None, bracket[1]])
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
success = False
for n in xrange(max_steps):
if numpy.abs(x[3] - x[0]) < tolerance:
success = True
t_star = (x[3] + x[0]) / 2.0
break
f_1 = f(x[1])
f_2 = f(x[2])
if f_1 > f_2:
x[3] = x[2]
x[2] = x[1]
x[1] = x[3] - phi * (x[3] - x[0])
else:
x[0] = x[1]
x[1] = x[2]
x[2] = x[0] + phi * (x[3] - x[0])
if not success:
raise ValueError("Unable to converge to requested tolerance.")
# END SOLUTION
return t_star
N = 50
t = numpy.random.rand(N + 1) * (2.0 * numpy.pi) - numpy.pi
t.sort()
y = numpy.sin(t) * numpy.cos(t)
t_star = golden_search([0.1, 3.0 * numpy.pi / 4.0], t, y)
t_true = numpy.pi / 4.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, y, 'x', label="data")
t_fine = numpy.linspace(-numpy.pi, numpy.pi, 100)
axes.plot(t_fine, numpy.sin(t_fine) * numpy.cos(t_fine), 'k', label="$f(x)$")
axes.plot(t_star, linear_eval(t, y, t_star), 'go')
axes.plot(t_true, numpy.sin(t_true) * numpy.cos(t_true), 'ko', label="True")
axes.set_xlim((0.0, numpy.pi / 2.0))
axes.set_ylim((0.0, 1.0))
plt.show()
In [15]: N = 100
t = numpy.random.rand(N + 1) * (2.0 * numpy.pi) - numpy.pi
t.sort()
y = numpy.sin(t) * numpy.cos(t)
t_star = golden_search([0.1, 3.0 * numpy.pi / 4.0], t, y)
t_true = numpy.pi / 4.0
abs_error = numpy.abs(t_star - t_true)
rel_error = numpy.abs(t_star - t_true) / numpy.abs(t_true)
print "Error: %s, %s" % (abs_error, rel_error)
numpy.testing.assert_allclose(abs_error, 0.0, rtol=1e-1, atol=1e-1)
print "Success!"
Error: 0.0150948731852, 0.0192193894622
Success!
(c) (5 points) Below is sample code that plots the number of sample points $N$ vs. the relative error. Note, because we are sampling at random points, we do each $N$ 6 times and average the relative error to reduce noise. Additionally a line is drawn representing what would be linear (1st order) convergence.
Modify this code and try it out on other problems. Do you continue to see linear convergence? What about if you change how we sample points? Make sure that you change your initial interval and range of values of $t$ inside the loop.
In [16]: f = lambda t: numpy.sin(t) * numpy.cos(t)
N_range = numpy.array([2**n for n in range(4, 10)], dtype=int)
rel_error = numpy.zeros(len(N_range))
t_true = numpy.pi / 4.0
for (i, N) in enumerate(N_range):
for j in xrange(6):
t = numpy.random.rand(N + 1) * (2.0 * numpy.pi) - numpy.pi
t.sort()
y = f(t)
t_star = golden_search([0.1, 3.0 * numpy.pi / 4.0], t, y)
rel_error[i] += numpy.abs(t_star - t_true) / numpy.abs(t_true)
rel_error[i] /= 6
order_C = lambda N, error, order: numpy.exp(numpy.log(error) - order * numpy.log(N))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(N_range, rel_error, 'ko', label="Ave. Error")
axes.loglog(N_range, order_C(N_range[0], rel_error[0], -1.0) * N_range**(-1.0), 'r', label="1st order")
axes.loglog(N_range, order_C(N_range[0], rel_error[0], -2.0) * N_range**(-2.0), 'b', label="2nd order")
axes.set_xlabel("N")
axes.set_ylabel("Relative Error")
axes.legend()
plt.show()
This really depends on what they explore. It should be possible to get second order convergence, but it depends on what they change. Anything that seems reasonable and demonstrates that they explored the options is acceptable.
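One way to explore the sampling question from part (c) is to swap the random sample points for uniformly spaced ones and rerun the same convergence loop. The sketch below is one possible variation (not part of the graded solution) and assumes golden_search and the linear_eval helper defined earlier in the notebook are in scope.
In [ ]: # Variation: uniformly spaced sample points instead of random draws
f = lambda t: numpy.sin(t) * numpy.cos(t)
N_range = numpy.array([2**n for n in range(4, 10)], dtype=int)
rel_error = numpy.zeros(len(N_range))
t_true = numpy.pi / 4.0
for (i, N) in enumerate(N_range):
    # No averaging is needed since the samples are deterministic
    t = numpy.linspace(-numpy.pi, numpy.pi, N + 1)
    y = f(t)
    t_star = golden_search([0.1, 3.0 * numpy.pi / 4.0], t, y)
    rel_error[i] = numpy.abs(t_star - t_true) / numpy.abs(t_true)
print rel_error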
differentiation.py
# coding: utf-8
# <table>
# <tr align=left><td><img align=left src="./images/CC-BY.png">
# <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td>
# </table>
# In[ ]:
get_ipython().magic(u'matplotlib inline')
import numpy
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# # Numerical Differentiation
#
# **GOAL:** Given a set of $N+1$ points $(x_i, y_i)$ compute the derivative of a given order to a specified accuracy.
#
# **Approach:** Find the interpolating polynomial $P_N(x)$ and differentiate that.
# ### Newton's Form
#
# For ease of analysis we will write $P_N(x)$ in Newton's form which looks like
#
# $$P_N(x) = \sum^N_{j=0} a_j n_j(x)$$
#
# where
#
# $$n_j(x) = \prod^{j-1}_{i=0} (x - x_i)$$
#
# and the $a_j = [y_0, \ldots, y_j]$ are the divided differences defined in general as
#
# $$[y_i] = y_i ~~~~~ i \in \{0, \ldots, N+1\}$$
#
# and
#
# $$[y_i, \ldots, y_{i+j}] = \frac{[y_{i+1}, \ldots, y_{i+j}] - [y_{i}, \ldots, y_{i+j-1}]}{x_{i+j} - x_{i}} ~~~~~ i \in \{0, \ldots, N+1-j\} ~~~~ j \in \{1, \ldots, N+1\}$$
# These formulas are recursively defined but not so helpful, here are a few examples to start out with:
#
# $$[y_0] = y_0$$
#
# $$[y_0, y_1] = \frac{y_1 - y_0}{x_1 - x_0}$$
#
# $$[y_0, y_1, y_2] = \frac{[y_1, y_2] - [y_0, y_1]}{x_{2} - x_{0}} = \frac{\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_1 - y_0}{x_1 - x_0}}{x_2 - x_0} = \frac{y_2 - y_1}{(x_2 - x_1)(x_2 - x_0)} - \frac{y_1 - y_0}{(x_1 - x_0)(x_2 - x_0)}$$
# The benefit of writing a polynomial like this is that it isolates the $x$ dependence (we can easily take derivatives of this form).
#
# In general then $P_N(x)$ can be written in Newton's form as
#
# $$P_N(x) = y_0 + (x-x_0)[y_0, y_1] + (x - x_0)(x - x_1)[y_0, y_1, y_2] + \cdots + (x-x_0)(x-x_1)\cdots(x-x_{N-1})[y_0, y_1, \ldots, y_{N}]$$
# As another concrete example consider a quadratic polynomial written in Newton's form
#
# $$P_2(x) = [y_0] + (x - x_0)[y_0, y_1] + (x - x_0)(x - x_1)[y_0, y_1, y_2] = y_0 + (x - x_0)\frac{y_1 - y_0}{x_1 - x_0} + (x - x_0)(x - x_1)\left(\frac{y_2 - y_1}{(x_2 - x_1)(x_2 - x_0)} - \frac{y_1 - y_0}{(x_1 - x_0)(x_2 - x_0)}\right)$$
#
# Recall that the interpolating polynomial of degree $N$ through these points is unique!
# In[ ]:
def divided_difference(x, y, N=50):
    # print x.shape, N
    if N == 0:
        raise Exception("Reached recursion limit!")
    # Reached the end of the recursion
    if y.shape[0] == 1:
        return y[0]
    elif y.shape[0] == 2:
        return (y[1] - y[0]) / (x[1] - x[0])
    else:
        return (divided_difference(x[1:], y[1:], N=N-1) - divided_difference(x[:-1], y[:-1], N=N-1)) / (x[-1] - x[0])
# Calculate a polynomial in Newton Form
data = numpy.array([[-2.0, 1.0], [-1.5, -1.0], [-0.5, -3.0], [0.0, -2.0], [1.0, 3.0], [2.0, 1.0]])
N = data.shape[0] - 1
x = numpy.linspace(-2.0, 2.0, 100)
# Construct basis functions
newton_basis = numpy.ones((N + 1, x.shape[0]))
for j in xrange(N + 1):
    for i in xrange(j):
        newton_basis[j, :] *= (x - data[i, 0])
# Construct full polynomial
P = numpy.zeros(x.shape)
for j in xrange(N + 1):
    P += divided_difference(data[:j + 1, 0], data[:j + 1, 1]) * newton_basis[j, :]
# Plot basis and interpolant
fig = plt.figure()
fig.set_figwidth(2.0 * fig.get_figwidth())
axes = [None, None]
axes[0] = fig.add_subplot(1, 2, 1)
axes[1] = fig.add_subplot(1, 2, 2)
for j in xrange(N + 1):
    axes[0].plot(x, newton_basis[j, :], label='$n_%s$' % j)
    axes[1].plot(data[j, 0], data[j, 1], 'ko')
axes[1].plot(x, P)
axes[0].set_title("Newton Polynomial Basis")
axes[0].set_xlabel("x")
axes[0].set_ylabel("$n_j(x)$")
axes[0].legend(loc='upper left')
axes[1].set_title("Interpolant $P_%s(x)$" % N)
axes[1].set_xlabel("x")
axes[1].set_ylabel("$P_%s(x)$" % N)
plt.show()
# ### Error Analysis
#
# Given $N + 1$ points we can form an interpolant $P_N(x)$ of degree $N$ where
#
# $$f(x) = P_N(x) + R_N(x)$$
# We know from Lagrange's Theorem that the remainder term looks like
#
# $$R_N(x) = (x - x_0)(x - x_1)\cdots(x - x_{N}) \frac{f^{(N+1)}(c)}{(N+1)!}$$
#
# noting that we need to require that $f(x) \in C^{N+1}$ on the interval of interest. Taking the derivative of the interpolant $P_N(x)$ then leads to
#
# $$P_N'(x) = [y_0, y_1] + ((x - x_1) + (x - x_0))[y_0, y_1, y_2] + \cdots + \left(\sum^{N-1}_{i=0}\left(\prod^{N-1}_{j=0,~j\neq i} (x - x_j)\right)\right)[y_0, y_1, \ldots, y_N]$$
# Similarly we can find the derivative of the remainder term $R_N(x)$ as
#
# $$R_N'(x) = \left(\sum^{N}_{i=0}\left(\prod^{N}_{j=0,~j\neq i} (x - x_j)\right)\right) \frac{f^{(N+1)}(c)}{(N+1)!}$$
# Now if we consider the approximation of the derivative evaluated at one of our data points $(x_k, y_k)$ these expressions simplify such that
#
# $$f'(x_k) = P_N'(x_k) + R_N'(x_k)$$
# If we let $\Delta x = \max_i |x_k - x_i|$ we then know that the remainder term will be $\mathcal{O}(\Delta x^N)$ as $\Delta x \rightarrow 0$, thus showing that this approach converges and we can find arbitrarily high order approximations.
# In[ ]:
# Compute the approximation to the derivative
# data = numpy.array([[-2.0, 1.0], [-1.5, -1.0], [-0.5, -3.0], [0.0, -2.0], [1.0, 3.0], [2.0, 1.0]])
num_points = 15
data = numpy.empty((num_points, 2))
data[:, 0] = numpy.linspace(-2.0, 2.0, num_points)
data[:, 1] = numpy.sin(data[:, 0])
N = data.shape[0] - 1
x = numpy.linspace(-2.0, 2.0, 100)
# General form of derivative of P_N'(x)
P_prime = numpy.zeros(x.shape)
newton_basis_prime = numpy.empty(x.shape)
product = numpy.empty(x.shape)
for n in xrange(N):
    newton_basis_prime = 0.0
    for i in xrange(n):
        product = 1.0
        for j in xrange(n):
            if j != i:
                product *= (x - data[j, 0])
        newton_basis_prime += product
    P_prime += divided_difference(data[:n+1, 0], data[:n+1, 1]) * newton_basis_prime
fig = plt.figure()
fig.set_figwidth(2.0 * fig.get_figwidth())
axes = [None, None]
axes[0] = fig.add_subplot(1, 2, 1)
axes[1] = fig.add_subplot(1, 2, 2)
axes[0].set_title("$f'(x)$")
axes[1].set_title("Close up of $f'(x)$")
for j in xrange(2):
    axes[j].plot(x, numpy.cos(x), 'k')
    axes[j].plot(x, P_prime, 'ro')
    axes[j].set_xlabel("x")
    axes[j].set_ylabel("$f'(x)$ and $\hat{f}'(x)$")
axes[0].add_patch(patches.Rectangle((0.7, 0.6), 0.8, -0.5, fill=None, color='blue'))
axes[1].add_patch(patches.Rectangle((0.7, 0.6), 0.8, -0.5, fill=None, color='blue'))
axes[1].set_xlim([0.69, 1.51])
axes[1].set_ylim([0.09, 0.61])
plt.show()
# ## Examples
#
# Often in practice we only use a small number of data points to derive a differentiation formula. In the context of differential equations we also often have $f(x)$ so that $f(x_k) = y_k$ and we can approximate the derivative of a known function $f(x)$.
# ### Example 1: 1st order Forward and Backward Differences
#
# Using 2 points we can get an approximation that is $\mathcal{O}(\Delta x)$:
#
# $$f'(x) \approx P_1'(x) = [y_0, y_1] = \frac{y_1 - y_0}{x_1 - x_0} = \frac{y_1 - y_0}{\Delta x} = \frac{f(x_1) - f(x_0)}{\Delta x}$$
# We can also calculate the error as
#
# $$R_1'(x) = -\Delta x \frac{f''(c)}{2}$$
# We can also derive the "forward" and "backward" formulas by considering the question slightly differently. Say we want $f'(x_n)$, then the "forward" finite-difference can be written as
#
# $$f'(x_n) \approx D_1^+ = \frac{f(x_{n+1}) - f(x_n)}{\Delta x}$$
#
# and the "backward" finite-difference as
#
# $$f'(x_n) \approx D_1^- = \frac{f(x_n) - f(x_{n-1})}{\Delta x}$$
# Note these approximations should be familiar to us since in the limit as $\Delta x \rightarrow 0$ these are no longer approximations but equivalent definitions of the derivative at $x_n$.
# In[ ]:
f = lambda x: numpy.sin(x)
f_prime = lambda x: numpy.cos(x)
# Use uniform discretization
x = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, 1000)
N = 20
x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N)
delta_x = x_hat[1] - x_hat[0]
# Compute forward difference using a loop
f_prime_hat = numpy.empty(x_hat.shape)
for i in xrange(N - 1):
    f_prime_hat[i] = (f(x_hat[i+1]) - f(x_hat[i])) / delta_x
# Vector based calculation
# f_prime_hat[:-1] = (f(x_hat[1:]) - f(x_hat[:-1])) / (delta_x)
# Use a first-order difference for the point at the right edge of the domain
f_prime_hat[-1] = (f(x_hat[-1]) - f(x_hat[-2])) / delta_x  # Backward difference at x_N
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f_prime(x), 'k')
axes.plot(x_hat + 0.5 * delta_x, f_prime_hat, 'ro')
axes.set_xlim((x[0], x[-1]))
axes.set_ylim((-1.1, 1.1))
axes.set_xlabel('x')
axes.set_ylabel("$f'(x)$ and $D_1^+(x_n)$")
axes.set_title("Forward Differences for $f(x) = \sin(x)$")
plt.show()
# #### Computing Order of Convergence
#
# $$\begin{aligned}
# e(\Delta x) &= C \Delta x^n \\
# \log e(\Delta x) &= \log C + n \log \Delta x
# \end{aligned}$$
#
# The slope of the line is $n$ when computing this! We can also match the first point by solving for $C$:
#
# $$C = e^{\log e(\Delta x) - n \log \Delta x}$$
# In[ ]:
# Compute the error as a function of delta_x
delta_x = []
error = []
# for N in xrange(2, 101):
for N in xrange(50, 1000, 50):
    x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N)
    delta_x.append(x_hat[1] - x_hat[0])
    # Compute forward difference
    f_prime_hat = numpy.empty(x_hat.shape)
    f_prime_hat[:-1] = (f(x_hat[1:]) - f(x_hat[:-1])) / (delta_x[-1])
    # Use first-order difference for the point at the edge of the domain
    f_prime_hat[-1] = (f(x_hat[-1]) - f(x_hat[-2])) / delta_x[-1]  # Backward difference at x_N
    error.append(numpy.linalg.norm(numpy.abs(f_prime(x_hat + 0.5 * delta_x[-1]) - f_prime_hat), ord=numpy.infty))
error = numpy.array(error)
delta_x = numpy.array(delta_x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_x, error, 'ko', label="Approx. Derivative")
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_x, order_C(delta_x[0], error[0], 1.0) * delta_x**1.0, 'r--', label="1st Order")
axes.loglog(delta_x, order_C(delta_x[0], error[0], 2.0) * delta_x**2.0, 'b--', label="2nd Order")
axes.legend(loc=4)
axes.set_title("Convergence of 1st Order Differences")
axes.set_xlabel("$\Delta x$")
axes.set_ylabel("$|f'(x) - \hat{f}'(x)|$")
plt.show()
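# Rather than judging the order only by eye against the reference lines, the slope of the log-log data can be estimated directly with a least-squares fit. This is a small added check (not in the original notes) and uses the `delta_x` and `error` arrays computed in the cell above.
# In[ ]:
# Estimate the observed order of convergence from the (delta_x, error) data
# by fitting a straight line to the log-log points.
n_observed, log_C = numpy.polyfit(numpy.log(delta_x), numpy.log(error), 1)
print "Observed order of convergence: %s" % n_observed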
# ### Example 2: 2nd Order Centered Difference
#
# Now let's use 3 points to calculate the 2nd order accurate finite-difference. Consider the points $(x_{n}, y_{n})$, $(x_{n-1}, y_{n-1})$, and $(x_{n+1}, y_{n+1})$, from before we have
#
# $$P_2'(x) = [f(x_n), f(x_{n+1})] + ((x - x_n) + (x - x_{n+1}))[f(x_n), f(x_{n+1}), f(x_{n-1})]$$
#
# $$= \frac{f(x_{n+1}) - f(x_n)}{x_{n+1} - x_n} + ((x - x_n) + (x - x_{n+1}))\left(\frac{f(x_{n-1}) - f(x_{n+1})}{(x_{n-1} - x_{n+1})(x_{n-1} - x_n)} - \frac{f(x_{n+1}) - f(x_n)}{(x_{n+1} - x_n)(x_{n-1} - x_n)}\right)$$
# Evaluating at $x_n$ and assuming the points $x_{n-1}, x_n, x_{n+1}$ are evenly spaced leads to
#
# $$P_2'(x_n) = \frac{f(x_{n+1}) - f(x_n)}{\Delta x} - \Delta x \left(\frac{f(x_{n-1}) - f(x_{n+1})}{2\Delta x^2} + \frac{f(x_{n+1}) - f(x_n)}{\Delta x^2}\right)$$
#
# $$= \frac{f(x_{n+1}) - f(x_n)}{\Delta x} - \left(\frac{f(x_{n+1}) - 2f(x_n) + f(x_{n-1})}{2\Delta x}\right)$$
#
# $$= \frac{2f(x_{n+1}) - 2f(x_n) - f(x_{n+1}) + 2f(x_n) - f(x_{n-1})}{2 \Delta x}$$
#
# $$= \frac{f(x_{n+1}) - f(x_{n-1})}{2 \Delta x}$$
# This finite-difference is second order accurate and is centered about the point it is meant to approximate ($x_n$). We can show that it is second order by again considering the remainder term's derivative
#
# $$R_2'(x) = \left(\sum^{2}_{i=0}\left(\prod^{2}_{j=0,~j\neq i} (x - x_j)\right)\right) \frac{f'''(c)}{3!}$$
#
# $$= \left((x - x_{n+1})(x - x_{n-1}) + (x - x_n)(x - x_{n-1}) + (x - x_n)(x - x_{n+1})\right) \frac{f'''(c)}{3!}$$
# Again evaluating this expression at $x = x_n$ and assuming evenly spaced points we have
#
# $$R_2'(x_n) = -\Delta x^2 \frac{f'''(c)}{3!}$$
#
# showing that our error is $\mathcal{O}(\Delta x^2)$.
# In[ ]:
f = lambda x: numpy.sin(x)
f_prime = lambda x: numpy.cos(x)
# Use uniform discretization
x = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, 1000)
N = 20
x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N)
delta_x = x_hat[1] - x_hat[0]
# Compute derivative
f_prime_hat = numpy.empty(x_hat.shape)
f_prime_hat[1:-1] = (f(x_hat[2:]) - f(x_hat[:-2])) / (2 * delta_x)
# Use first-order differences for points at edge of domain
f_prime_hat[0] = (f(x_hat[1]) - f(x_hat[0])) / delta_x    # Forward difference at x_0
f_prime_hat[-1] = (f(x_hat[-1]) - f(x_hat[-2])) / delta_x  # Backward difference at x_N
# f_prime_hat[0] = (-3.0 * f(x_hat[0]) + 4.0 * f(x_hat[1]) - f(x_hat[2])) / (2.0 * delta_x)
# f_prime_hat[-1] = (3.0 * f(x_hat[-1]) - 4.0 * f(x_hat[-2]) + f(x_hat[-3])) / (2.0 * delta_x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f_prime(x), 'k')
axes.plot(x_hat, f_prime_hat, 'ro')
axes.set_xlim((x[0], x[-1]))
# axes.set_ylim((-1.1, 1.1))
axes.set_xlabel('x')
axes.set_ylabel("$f'(x)$ and $D_2(x_n)$")
axes.set_title("Second Order Centered Differences $D_2(x_n)$ for $f(x)$")
plt.show()
# In[ ]:
# Compute the error as a function of delta_x
delta_x = []
error = []
# for N in xrange(2, 101):
for N in xrange(50, 1000, 50):
    x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N + 1)
    delta_x.append(x_hat[1] - x_hat[0])
    # Compute derivative
    f_prime_hat = numpy.empty(x_hat.shape)
    f_prime_hat[1:-1] = (f(x_hat[2:]) - f(x_hat[:-2])) / (2 * delta_x[-1])
    # Use first-order differences for points at edge of domain
    # f_prime_hat[0] = (f(x_hat[1]) - f(x_hat[0])) / delta_x[-1]
    # f_prime_hat[-1] = (f(x_hat[-1]) - f(x_hat[-2])) / delta_x[-1]
    # Use second-order differences for points at edge of domain
    f_prime_hat[0] = (-3.0 * f(x_hat[0]) + 4.0 * f(x_hat[1]) - f(x_hat[2])) / (2.0 * delta_x[-1])
    f_prime_hat[-1] = (3.0 * f(x_hat[-1]) - 4.0 * f(x_hat[-2]) + f(x_hat[-3])) / (2.0 * delta_x[-1])
    error.append(numpy.linalg.norm(numpy.abs(f_prime(x_hat) - f_prime_hat), ord=numpy.infty))
error = numpy.array(error)
delta_x = numpy.array(delta_x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_x, error, "ro", label="Approx. Derivative")
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_x, order_C(delta_x[0], error[0], 1.0) * delta_x**1.0, 'b--', label="1st Order")
axes.loglog(delta_x, order_C(delta_x[0], error[0], 2.0) * delta_x**2.0, 'r--', label="2nd Order")
axes.legend(loc=4)
axes.set_title("Convergence of 2nd Order Differences")
axes.set_xlabel("$\Delta x$")
axes.set_ylabel("$|f'(x) - \hat{f}'(x)|$")
plt.show()
# ### Example 3: Alternative Derivations
#
# An alternative method for finding finite-difference formulas is by using Taylor series expansions about the point we want to approximate. The Taylor series about $x_n$ is
#
# $$f(x) = f(x_n) + (x - x_n) f'(x_n) + \frac{(x - x_n)^2}{2!} f''(x_n) + \frac{(x - x_n)^3}{3!} f'''(x_n) + \mathcal{O}((x - x_n)^4)$$
# Say we want to derive the second order accurate, first derivative approximation that we just did; this requires the values $(x_{n+1}, f(x_{n+1}))$ and $(x_{n-1}, f(x_{n-1}))$. We can express these values via our Taylor series approximation above as
#
# $$f(x_{n+1}) = f(x_n) + (x_{n+1} - x_n) f'(x_n) + \frac{(x_{n+1} - x_n)^2}{2!} f''(x_n) + \frac{(x_{n+1} - x_n)^3}{3!} f'''(x_n) + \mathcal{O}((x_{n+1} - x_n)^4)$$
#
# $$= f(x_n) + \Delta x f'(x_n) + \frac{\Delta x^2}{2!} f''(x_n) + \frac{\Delta x^3}{3!} f'''(x_n) + \mathcal{O}(\Delta x^4)$$
#
# and
#
# $$f(x_{n-1}) = f(x_n) + (x_{n-1} - x_n) f'(x_n) + \frac{(x_{n-1} - x_n)^2}{2!} f''(x_n) + \frac{(x_{n-1} - x_n)^3}{3!} f'''(x_n) + \mathcal{O}((x_{n-1} - x_n)^4)$$
#
# $$= f(x_n) - \Delta x f'(x_n) + \frac{\Delta x^2}{2!} f''(x_n) - \frac{\Delta x^3}{3!} f'''(x_n) + \mathcal{O}(\Delta x^4)$$
# Now to find out how to combine these into an expression for the derivative we assume our approximation looks like
#
# $$f'(x_n) + R(x_n) = A f(x_{n+1}) + B f(x_n) + C f(x_{n-1})$$
#
# where $R(x_n)$ is our error. Plugging in the Taylor series approximations we find
#
# $$f'(x_n) + R(x_n) = A \left( f(x_n) + \Delta x f'(x_n) + \frac{\Delta x^2}{2!} f''(x_n) + \frac{\Delta x^3}{3!} f'''(x_n) + \mathcal{O}(\Delta x^4)\right) + B f(x_n) + C \left( f(x_n) - \Delta x f'(x_n) + \frac{\Delta x^2}{2!} f''(x_n) - \frac{\Delta x^3}{3!} f'''(x_n) + \mathcal{O}(\Delta x^4)\right)$$
# Since we want $R(x_n) = \mathcal{O}(\Delta x^2)$ we want all lower order terms to disappear except for those multiplying $f'(x_n)$, as those should sum to 1 to give us our approximation. Collecting the terms with common $\Delta x^n$ we get a series of expressions for the coefficients $A$, $B$, and $C$ based on the fact that we want an approximation to $f'(x_n)$. The $n=0$ terms collected are $A + B + C$ and are set to 0 as we want the $f(x_n)$ term to disappear
#
# $$\Delta x^0: ~~~~ A + B + C = 0$$
#
# $$\Delta x^1: ~~~~ A \Delta x - C \Delta x = 1$$
#
# $$\Delta x^2: ~~~~ A \frac{\Delta x^2}{2} + C \frac{\Delta x^2}{2} = 0$$
# This last equation $\Rightarrow A = -C$; using this in the second equation gives $A = \frac{1}{2 \Delta x}$ and $C = -\frac{1}{2 \Delta x}$. The first equation then leads to $B = 0$. Putting this all together then gives us our previous expression including an estimate for the error:
#
# $$f'(x_n) + R(x_n) = \frac{f(x_{n+1}) - f(x_{n-1})}{2 \Delta x} + \frac{1}{2 \Delta x} \frac{\Delta x^3}{3!} f'''(x_n) + \mathcal{O}(\Delta x^4) + \frac{1}{2 \Delta x} \frac{\Delta x^3}{3!} f'''(x_n) + \mathcal{O}(\Delta x^4)$$
#
# $$R(x_n) = \frac{\Delta x^2}{3!} f'''(x_n) + \mathcal{O}(\Delta x^3) = \mathcal{O}(\Delta x^2)$$
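# The small linear system for $A$, $B$, and $C$ can also be checked numerically for a concrete spacing. This is an added verification sketch, not part of the original notes:
# In[ ]:
# Solve  A + B + C = 0,  A dx - C dx = 1,  A dx^2/2 + C dx^2/2 = 0
# for a concrete value of dx and compare with the analytic coefficients.
dx = 0.1
matrix = numpy.array([[1.0, 1.0, 1.0],
                      [dx, 0.0, -dx],
                      [dx**2 / 2.0, 0.0, dx**2 / 2.0]])
rhs = numpy.array([0.0, 1.0, 0.0])
A, B, C = numpy.linalg.solve(matrix, rhs)
print "A = %s (expect %s)" % (A, 1.0 / (2.0 * dx))
print "B = %s (expect 0)" % B
print "C = %s (expect %s)" % (C, -1.0 / (2.0 * dx))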
# #### Another way...
#
# There is one more way to derive the second order accurate first derivative finite-difference formula. Consider the two first order forward and backward finite-differences averaged together:
#
# $$\frac{D_1^+(f(x_n)) + D_1^-(f(x_n))}{2} = \frac{f(x_{n+1}) - f(x_n) + f(x_n) - f(x_{n-1})}{2 \Delta x} = \frac{f(x_{n+1}) - f(x_{n-1})}{2 \Delta x}$$
# ### Example 4: Higher Order Derivatives
#
# Using our Taylor series approach let's derive the second order accurate second derivative formula. Again we will use the same points and the Taylor series centered at $x = x_n$ so we end up with the same expression as before:
#
# $$f''(x_n) + R(x_n) = A \left( f(x_n) + \Delta x f'(x_n) + \frac{\Delta x^2}{2!} f''(x_n) + \frac{\Delta x^3}{3!} f'''(x_n) + \frac{\Delta x^4}{4!} f^{(4)}(x_n) + \mathcal{O}(\Delta x^5)\right) + B f(x_n) + C \left( f(x_n) - \Delta x f'(x_n) + \frac{\Delta x^2}{2!} f''(x_n) - \frac{\Delta x^3}{3!} f'''(x_n) + \frac{\Delta x^4}{4!} f^{(4)}(x_n) + \mathcal{O}(\Delta x^5)\right)$$
#
# except this time we want to leave $f''(x_n)$ on the right hand side. Doing the same trick as before we have the following expressions:
#
# $$\Delta x^0: ~~~~ A + B + C = 0$$
#
# $$\Delta x^1: ~~~~ A \Delta x - C \Delta x = 0$$
#
# $$\Delta x^2: ~~~~ A \frac{\Delta x^2}{2} + C \frac{\Delta x^2}{2} = 1$$
# The second equation implies $A = C$, which combined with the third implies
#
# $$A = C = \frac{1}{\Delta x^2}$$
#
# Finally the first equation gives
#
# $$B = -\frac{2}{\Delta x^2}$$
#
# leading to the final expression
#
# $$f''(x_n) + R(x_n) = \frac{f(x_{n+1}) - 2 f(x_n) + f(x_{n-1})}{\Delta x^2} + \frac{1}{\Delta x^2} \left(\frac{\Delta x^3}{3!} f'''(x_n) + \frac{\Delta x^4}{4!} f^{(4)}(x_n) - \frac{\Delta x^3}{3!} f'''(x_n) + \frac{\Delta x^4}{4!} f^{(4)}(x_n)\right) + \mathcal{O}(\Delta x^5)$$
#
# with
#
# $$R(x_n) = \frac{\Delta x^2}{12} f^{(4)}(x_n) + \mathcal{O}(\Delta x^3)$$
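# This error estimate can be double-checked with a symbolic series expansion of the centered second difference applied to a generic smooth function. The following is an added sketch (not in the original notes); sympy is also used later in these notes.
# In[ ]:
# Expand (f(x+h) - 2 f(x) + f(x-h)) / h**2 in powers of h; the h**2 term
# should carry the 1/12 coefficient found above.
import sympy
x_s, h = sympy.symbols('x h')
f_s = sympy.Function('f')
stencil = (f_s(x_s + h) - 2 * f_s(x_s) + f_s(x_s - h)) / h**2
print stencil.series(h, 0, 3)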
# In[ ]:
f = lambda x: numpy.sin(x)
f_dubl_prime = lambda x: -numpy.sin(x)
# Use uniform discretization
x = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, 1000)
N = 10
x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N)
delta_x = x_hat[1] - x_hat[0]
# Compute derivative
f_dubl_prime_hat = numpy.empty(x_hat.shape)
f_dubl_prime_hat[1:-1] = (f(x_hat[2:]) -2.0 * f(x_hat[1:-1]) +
f(x_hat[:-2])) / (delta_x**2)
# Use first-order differences for points at edge of domain
f_dubl_prime_hat[0] = (2.0 * f(x_hat[0]) - 5.0 * f(x_hat[1]) +
4.0 * f(x_hat[2]) - f(x_hat[3])) / delta_x**2
f_dubl_prime_hat[-1] = (2.0 * f(x_hat[-1]) - 5.0 * f(x_hat[-2]) +
4.0 * f(x_hat[-3]) - f(x_hat[-4])) / delta_x**2
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f_dubl_prime(x), 'k')
axes.plot(x_hat, f_dubl_prime_hat, 'ro')
axes.set_xlim((x[0], x[-1]))
axes.set_ylim((-1.1, 1.1))
axes.set_xlabel('x')
axes.set_ylabel("$f''(x)$")
axes.set_title("Second Order Accurate Second Derivative of
$f(x)$")
plt.show()
# In[ ]:
f = lambda x: numpy.sin(x)
f_dubl_prime = lambda x: -numpy.sin(x)
# Compute the error as a function of delta_x
delta_x = []
error = []
# for N in xrange(2, 101):
for N in xrange(50, 1000, 50):
x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N)
delta_x.append(x_hat[1] - x_hat[0])
# Compute derivative
f_dubl_prime_hat = numpy.empty(x_hat.shape)
f_dubl_prime_hat[1:-1] = (f(x_hat[2:]) -2.0 * f(x_hat[1:-1]) +
f(x_hat[:-2])) / (delta_x[-1]**2)
# Use second-order differences for points at edge of domain
f_dubl_prime_hat[0] = (2.0 * f(x_hat[0]) - 5.0 * f(x_hat[1]) +
4.0 * f(x_hat[2]) - f(x_hat[3])) / delta_x[-1]**2
f_dubl_prime_hat[-1] = (2.0 * f(x_hat[-1]) - 5.0 * f(x_hat[-
2]) + 4.0 * f(x_hat[-3]) - f(x_hat[-4])) / delta_x[-1]**2
error.append(numpy.linalg.norm(numpy.abs(f_dubl_prime(x_hat
) - f_dubl_prime_hat), ord=numpy.infty))
error = numpy.array(error)
delta_x = numpy.array(delta_x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
# axes.plot(delta_x, error)
axes.loglog(delta_x, error, "ko", label="Approx. Derivative")
order_C = lambda delta_x, error, order:
numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_x, order_C(delta_x[2], error[2], 1.0) *
delta_x**1.0, 'b--', label="1st Order")
axes.loglog(delta_x, order_C(delta_x[2], error[2], 2.0) *
delta_x**2.0, 'r--', label="2nd Order")
axes.legend(loc=4)
axes.set_title("Convergence of Second Order Second
Derivative")
axes.set_xlabel("$Delta x$")
axes.set_ylabel("$|f'' - hat{f}''|$")
plt.show()
# In[ ]:
# In[ ]:
error.py
# coding: utf-8
# <table>
# <tr align=left><td><img align=left src="./images/CC-BY.png">
# <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td>
# </table>
# In[ ]:
get_ipython().magic(u'matplotlib inline')
import numpy
import matplotlib.pyplot as plt
# # Sources of Error
#
# Error can come from many sources when applying a numerical method:
# - Model/Data Error
# - Truncation Error
# - Floating Point Error
#
# **Goal:** Categorize and understand each type of error and explore some simple approaches to analyzing error.
# ## Model and Data Error
#
# Errors in fundamental formulation
# - Lotka-Volterra - fractional rabbits, no extinctions, etc.
# - Data Error - Inaccuracy in measurement or uncertainties in parameters
#
# Unfortunately we cannot control model and data error directly, but we can use methods that may be more robust in the presence of these types of errors.
# ## Truncation Error
#
# Errors arising from approximating a function with a simpler function (e.g. $\sin(x) \approx x$ for $|x| \approx 0$).
# ## Floating Point Error
#
# Errors arising from approximating real numbers with finite-precision numbers and arithmetic.
# ## Basic Definitions
#
# Given a true value of a function $f$ and an approximate solution $\hat{f}$ define:
#
# Absolute Error: $e = |f - \hat{f}|$
#
# Relative Error: $r = \frac{e}{|f|} = \frac{|f - \hat{f}|}{|f|}$
# Decimal precision $p$ is defined as the minimum value that satisfies
#
# $$x = \text{round}(10^{-n} \cdot x) \cdot 10^n$$
#
# where
#
# $$n = \text{floor}(\log_{10} x) + 1 - p$$
#
# Note that if we are asking the decimal precision of the approximation $\hat{f}$ of $f$ then we need to use the absolute error to determine the precision. To find the decimal precision in this case look at the magnitude of the absolute error and determine the place of the first error. Combine this with the number of "correct" digits and you will get the decimal precision of the approximation.
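# One way to make this concrete (an added sketch, not in the original notes) is to implement the rounding operation from the definition above and compare the true value of $e$ against the approximation $2.718$ for increasing $p$:
# In[ ]:
# Round x to p significant decimal digits using the definition above, then
# print e and the approximation 2.718 rounded to p digits as p increases.
def round_to_precision(x, p):
    n = numpy.floor(numpy.log10(numpy.abs(x))) + 1 - p
    return numpy.round(x * 10**(-n)) * 10**n
for p in range(1, 8):
    print p, round_to_precision(numpy.exp(1), p), round_to_precision(2.718, p)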
# ## Truncation Error and Taylor's Theorem
#
# **Taylor's Theorem:** Let $f(x) \in C^{m+1}[a,b]$ and $x_0 \in [a,b]$, then for all $x \in (a,b)$ there exists a number $c = c(x)$ that lies between $x_0$ and $x$ such that
#
# $$ f(x) = T_N(x) + R_N(x)$$
#
# where $T_N(x)$ is the Taylor polynomial approximation
#
# $$T_N(x) = \sum^N_{n=0} \frac{f^{(n)}(x_0)\cdot(x-x_0)^n}{n!}$$
#
# and $R_N(x)$ is the residual (the part of the series we left off)
#
# $$R_N(x) = \frac{f^{(N+1)}(c) \cdot (x - x_0)^{N+1}}{(N+1)!}$$
# Another way to think about these results involves replacing $x - x_0$ with $\Delta x$. The primary idea here is that the residual $R_N(x)$ becomes smaller as $\Delta x \rightarrow 0$.
#
# $$T_N(x) = \sum^N_{n=0} \frac{f^{(n)}(x_0)\cdot\Delta x^n}{n!}$$
#
# and $R_N(x)$ is the residual (the part of the series we left off)
#
# $$R_N(x) = \frac{f^{(N+1)}(c) \cdot \Delta x^{N+1}}{(N+1)!} \leq M \Delta x^{N+1} = O(\Delta x^{N+1})$$
# #### Example 1
#
# $f(x) = e^x$ with $x_0 = 0$
#
# Using this we can find expressions for the relative and absolute error as a function of $x$ assuming $N=2$.
# $$f'(x) = e^x, ~~~ f''(x) = e^x, ~~~ f^{(n)}(x) = e^x$$
#
# $$T_N(x) = \sum^N_{n=0} e^0 \frac{x^n}{n!} ~~~~ \Rightarrow ~~~~ T_2(x) = 1 + x + \frac{x^2}{2}$$
#
# $$R_N(x) = e^c \frac{x^{N+1}}{(N+1)!} = e^c \cdot \frac{x^3}{6} ~~~~ \Rightarrow ~~~~ R_2(x) \leq \frac{e^1}{6} \approx 0.5$$
#
# $$e^1 = 2.718\ldots$$
#
# $$T_2(1) = 2.5 \Rightarrow e \approx 0.2 ~~ r \approx 0.1$$
# We can also use the package sympy which has the ability to calculate Taylor polynomials built-in!
# In[ ]:
import sympy
x = sympy.symbols('x')
f = sympy.symbols('f', cls=sympy.Function)
f = sympy.exp(x)
f.series(x0=0, n=6)
# Let's plot this numerically for a section of $x$.
# In[ ]:
x = numpy.linspace(-1, 1, 100)
T_N = 1.0 + x + x**2 / 2.0
R_N = numpy.exp(1) * x**3 / 6.0
plt.plot(x, T_N, 'r', x, numpy.exp(x), 'k', x, R_N, 'b')
plt.xlabel("x")
plt.ylabel("$f(x)$, $T_N(x)$, $R_N(x)$")
plt.legend(["$T_N(x)$", "$f(x)$", "$R_N(x)$"], loc=2)
plt.show()
# #### Example 2
#
# $f(x) = \frac{1}{x} ~~~~~~ x_0 = 1$, approximate with $\hat{f}(x) = T_2(x)$
# $$f'(x) = -\frac{1}{x^2} ~~~~~~~ f''(x) = \frac{2}{x^3} ~~~~~~~ f^{(n)}(x) = \frac{(-1)^n n!}{x^{n+1}}$$
#
# $$T_N(x) = \sum^N_{n=0} (-1)^n (x-1)^n ~~~~ \Rightarrow ~~~~ T_2(x) = 1 - (x - 1) + (x - 1)^2$$
#
# $$R_N(x) = \frac{(-1)^{N+1}(x - 1)^{N+1}}{c^{N+2}} ~~~~ \Rightarrow ~~~~ R_2(x) = \frac{-(x - 1)^{3}}{c^{4}}$$
# In[ ]:
x = numpy.linspace(0.8, 2, 100)
T_N = 1.0 - (x-1) + (x-1)**2
R_N = -(x-1.0)**3 / (1.1**4)
plt.plot(x, T_N, 'r', x, 1.0 / x, 'k', x, R_N, 'b')
plt.xlabel("x")
plt.ylabel("$f(x)$, $T_N(x)$, $R_N(x)$")
plt.legend(["$T_N(x)$", "$f(x)$", "$R_N(x)$"], loc=8)
plt.show()
# ### Symbols and Definitions
#
# Big-O notation: $f(x) = \text{O}(g(x))$ as $x \rightarrow a$ if and only if $|f(x)| \leq M |g(x)|$ when $|x - a| < \delta$ for some positive $M$ and $\delta$.
#
# In practice we use Big-O notation to say something about how the terms we may have left out of a series might behave. We saw an example of this earlier with the Taylor series approximations:
# #### Example:
# $f(x) = \sin x$ with $x_0 = 0$ then
#
# $$T_N(x) = \sum^N_{n=0} (-1)^{n} \frac{x^{2n+1}}{(2n+1)!}$$
#
# We can actually write $f(x)$ then as
#
# $$f(x) = x - \frac{x^3}{6} + \frac{x^5}{120} + O(x^7)$$
#
# This becomes more useful when we look at this as we did before with $\Delta x$:
#
# $$f(x) = \Delta x - \frac{\Delta x^3}{6} + \frac{\Delta x^5}{120} + O(\Delta x^7)$$
# **We can also develop rules for error propagation based on Big-O notation:**
#
# In general the following two results hold (we will not prove them) when the value of $x$ is large:
#
# Let
# $$f(x) = p(x) + O(x^n)$$
# $$g(x) = q(x) + O(x^m)$$
# $$k = \max(n, m)$$ then
# $$f + g = p + q + O(x^k)$$
# $$f \cdot g = p \cdot q + O(x^{n \cdot m})$$
# On the other hand, if we are interested in small values of $x$, say $\Delta x$, the above expressions can be modified as follows:
#
# $$f(\Delta x) = p(\Delta x) + O(\Delta x^n)$$
# $$g(\Delta x) = q(\Delta x) + O(\Delta x^m)$$
# $$r = \min(n, m)$$ then
#
# $$f + g = p + q + O(\Delta x^r)$$
#
# $$f \cdot g = p \cdot q + p \cdot O(\Delta x^m) + q \cdot O(\Delta x^n) + O(\Delta x^{n+m}) = p \cdot q + O(\Delta x^r)$$
# **Note 1:** In this case we suppose that at least the polynomial with $k = \max(n, m)$ has the following form:
# $$p(\Delta x) = 1 + p_1 \Delta x + p_2 \Delta x^2 + \ldots$$
# or $$q(\Delta x) = 1 + q_1 \Delta x + q_2 \Delta x^2 + \ldots$$
# so that there is an $O(1)$ term that guarantees the existence of $O(\Delta x^r)$ in the final product.
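# A quick numerical illustration of the small-$\Delta x$ rule (added here, not in the original notes): adding an $O(\Delta x)$ quantity to an $O(\Delta x^2)$ quantity leaves something that converges at the lower of the two orders.
# In[ ]:
# The sum of an O(dx) term and an O(dx**2) term behaves like O(dx):
# dividing by dx approaches a constant as dx -> 0.
for dx in [0.1, 0.01, 0.001]:
    first_order = 3.0 * dx       # stand-in for an O(dx) error term
    second_order = 5.0 * dx**2   # stand-in for an O(dx**2) error term
    print dx, first_order + second_order, (first_order + second_order) / dx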
# To get a sense of why we care most about the power on $\Delta x$ when considering convergence, the following figure shows how different powers of the convergence rate can affect how quickly we converge to our solution. Note that here we are plotting the same data two different ways. Plotting the error as a function of $\Delta x$ is a common way to show that a numerical method is doing what we expect and exhibits the correct convergence behavior. Since errors can get small quickly, it is very common to plot these sorts of plots on a log-log scale to easily visualize the results. Note that if a method is truly of order $n$ then it will appear as a linear function in log-log space with slope $n$.
# In[ ]:
dx = numpy.linspace(1.0, 1e-4, 100)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2.0)
axes = []
axes.append(fig.add_subplot(1, 2, 1))
axes.append(fig.add_subplot(1, 2, 2))
for n in xrange(1, 5):
    axes[0].plot(dx, dx**n, label="$\Delta x^%s$" % n)
    axes[1].loglog(dx, dx**n, label="$\Delta x^%s$" % n)
axes[0].legend(loc=2)
axes[1].set_xticks([10.0**(-n) for n in xrange(5)])
axes[1].set_yticks([10.0**(-n) for n in xrange(16)])
axes[1].legend(loc=4)
for n in xrange(2):
    axes[n].set_title("Growth of Error vs. $\Delta x^n$")
    axes[n].set_xlabel("$\Delta x$")
    axes[n].set_ylabel("Estimated Error")
plt.show()
# ## Horner's Method for Evaluating Polynomials
#
# Given
#
# $$P_N(x) = a_0 + a_1 x + a_2 x^2 + \ldots + a_N x^N$$
#
# or
#
# $$P_N(x) = p_1 x^N + p_2 x^{N-1} + p_3 x^{N-2} + \ldots + p_{N+1}$$
#
# we want to find the best way to evaluate $P_N(x)$.
# First consider two ways to write $P_3$:
#
# $$ P_3(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4$$
#
# and using nested multiplication:
#
# $$ P_3(x) = ((p_1 x + p_2) x + p_3) x + p_4$$
# Consider how many operations it takes for each...
#
# $$ P_3(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4$$
#
# $$P_3(x) = \overbrace{p_1 \cdot x \cdot x \cdot x}^3 + \overbrace{p_2 \cdot x \cdot x}^2 + \overbrace{p_3 \cdot x}^1 + p_4$$
# Adding up all the operations we can in general think of this as a pyramid
#
# ![Original Count](./images/horners_method_big_count.png)
#
# We can estimate that the algorithm written this way will take approximately $O(N^2 / 2)$ operations to complete.
# Looking at our other means of evaluation:
#
# $$ P_3(x) = ((p_1 x + p_2) x + p_3) x + p_4$$
#
# Here we find that the method is $O(N)$ (the 2 is usually ignored in these cases). The important thing is that the first evaluation is $O(N^2)$ and the second $O(N)$!
# ### Algorithm
#
# Fill in the function and implement Horner's method:
# ```python
# def eval_poly(p, x):
#     """Evaluates polynomial given coefficients p at x
#
#     Function to evaluate a polynomial in order N operations.  The polynomial is defined as
#
#         P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]
#
#     The value x should be a float.
#     """
#     pass
# ```
# ```python
# def eval_poly(p, x):
#     """Evaluates polynomial given coefficients p at x
#
#     Function to evaluate a polynomial in order N operations.  The polynomial is defined as
#
#         P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]
#
#     The value x should be a float.
#     """
#     y = p[0]
#     for coefficient in p[1:]:
#         y = y * x + coefficient
#     return y
# ```
# or an alternative version that allows `x` to be a vector of values:
# ```python
# def eval_poly(p, x):
#     """Evaluates polynomial given coefficients p at x
#
#     Function to evaluate a polynomial in order N operations.  The polynomial is defined as
#
#         P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]
#
#     The value x can be a NumPy ndarray.
#     """
#     y = numpy.ones(x.shape) * p[0]
#     for coefficient in p[1:]:
#         y = y * x + coefficient
#     return y
# ```
# This version calculates each `y` value simultaneously, making for much faster code!
# In[ ]:
def eval_poly(p, x):
    """Evaluates polynomial given coefficients p at x

    Function to evaluate a polynomial in order N operations.  The polynomial is defined as

        P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]

    The value x can be a NumPy ndarray.
    """
    y = numpy.ones(x.shape) * p[0]
    for coefficient in p[1:]:
        y = y * x + coefficient
    return y
p = [1, -3, 10, 4, 5, 5]
x = numpy.linspace(-10, 10, 100)
plt.plot(x, eval_poly(p, x))
plt.show()
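# The claim that the vectorized version is much faster can be checked with a quick timing comparison. This is an added sketch (not in the original notes); the point-by-point version applies Horner's rule to one value of x at a time in pure Python.
# In[ ]:
# Compare a pure-Python point-by-point evaluation against the vectorized
# eval_poly defined above.
import time
x = numpy.linspace(-10, 10, 100000)
start = time.time()
y_loop = numpy.array([reduce(lambda y, c: y * x_i + c, p[1:], p[0]) for x_i in x])
print "Point-by-point: %s seconds" % (time.time() - start)
start = time.time()
y_vec = eval_poly(p, x)
print "Vectorized:     %s seconds" % (time.time() - start)
print "Max difference: %s" % numpy.max(numpy.abs(y_loop - y_vec))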
# ## Truncation Error vs. Floating Point Error
#
# Truncation error: Errors arising from approximation of a function, truncation of a series, ...
#
# $$\sin x \approx x - \frac{x^3}{3!} + \frac{x^5}{5!} + O(x^7)$$
#
# Floating-point Error: Errors arising from approximating real numbers with finite-precision numbers
#
# $$\pi \approx 3.14$$
#
# or $\frac{1}{3} \approx 0.333333333$ in decimal, which results from the finite number of digits available to represent each number.
#
# ## Floating Point Systems
#
# Numbers in floating point systems are represented as a series of bits that represent different pieces of a number. In *normalized floating point systems* there are some standard conventions for what these bits are used for. In general the numbers are stored by breaking them down into the form
#
# $$\hat{f} = \pm d_1 . d_2 d_3 d_4 \ldots d_p \times \beta^E$$
#
# where
# 1. $\pm$ is a single bit and of course represents the sign of the number
# 2. $d_1 . d_2 d_3 d_4 \ldots d_p$ is called the *mantissa*.  Note that technically the decimal point could be moved but generally, using scientific notation, the decimal point can always be placed at this location.  The digits $d_2 d_3 d_4 \ldots d_p$ are called the *fraction* with $p$ digits of precision.  Normalized systems specifically put the decimal point in the front like we have and assume $d_1 \neq 0$ unless the number is exactly $0$.
# 3. $\beta$ is the *base*.  For binary $\beta = 2$, for decimal $\beta = 10$, etc.
# 4. $E$ is the *exponent*, an integer in the range $[E_{min}, E_{max}]$
# The important points about any floating point system are that
# 1. There exists a discrete and finite set of representable numbers
# 2. These representable numbers are not evenly distributed on the real line
# 3. Arithmetic in floating point systems yields different results from infinite precision arithmetic (i.e. "real" math)
# ### Example: Toy System
# Consider the toy 2-digit precision decimal system (normalized)
# $$f = \pm d_1 . d_2 \times 10^E$$
# with $E \in [-2, 0]$.
#
# #### Number and distribution of numbers
# 1. How many numbers can we represent with this system?
#
# 2. What is the distribution on the real line?
#
# 3. What are the underflow and overflow limits?
#
# How many numbers can we represent with this system?
#
# $f = \pm d_1 . d_2 \times 10^E$ with $E \in [-2, 0]$.
#
# $$ 2 \times 9 \times 10 \times 3 + 1 = 541$$
# What is the distribution on the real line?
# In[ ]:
d_1_values = [1, 2, 3, 4, 5, 6, 7, 8, 9]
d_2_values = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
E_values = [0, -1, -2]
fig = plt.figure(figsize=(10.0, 1.0))
axes = fig.add_subplot(1, 1, 1)
for E in E_values:
    for d1 in d_1_values:
        for d2 in d_2_values:
            axes.plot( (d1 + d2 * 0.1) * 10**E, 0.0, 'r+', markersize=20)
            axes.plot(-(d1 + d2 * 0.1) * 10**E, 0.0, 'r+', markersize=20)
axes.plot(0.0, 0.0, '+', markersize=20)
axes.plot([-10.0, 10.0], [0.0, 0.0], 'k')
axes.set_title("Distribution of Values")
axes.set_yticks([])
axes.set_xlabel("x")
axes.set_ylabel("")
axes.set_xlim([-0.1, 0.1])
plt.show()
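# We can also verify the count of 541 representable values directly (an added check, not in the original notes):
# In[ ]:
# Enumerate every value in the toy decimal system and count them (expect 541).
values = set([0.0])
for E in [0, -1, -2]:
    for d1 in range(1, 10):
        for d2 in range(10):
            values.add((d1 + d2 * 0.1) * 10**E)
            values.add(-(d1 + d2 * 0.1) * 10**E)
print len(values)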
# What are the underflow and overflow limits?
#
# The smallest number that can be represented is the underflow limit: $1.0 \times 10^{-2} = 0.01$
# The largest number that can be represented is the overflow limit: $9.9 \times 10^0 = 9.9$
# ## Properties of Floating Point Systems
# All floating-point systems are characterized by several important numbers
# - Smallest normalized number (underflow if below)
# - Largest normalized number (overflow if above)
# - Zero
# - Machine $\epsilon$ or $\epsilon_{\text{machine}}$
# - `inf` and `nan`, infinity and **N**ot **a** **N**umber respectively
# - Subnormal numbers
# ## Binary Systems
# Consider the 2-digit precision base 2 system:
#
# $$f = \pm d_1 . d_2 \times 2^E ~~~~ \text{with} ~~~~ E \in [-1, 1]$$
#
# #### Number and distribution of numbers
# 1. How many numbers can we represent with this system?
#
# 2. What is the distribution on the real line?
#
# 3. What are the underflow and overflow limits?
#
# How many numbers can we represent with this system?
#
# $$f = \pm d_1 . d_2 \times 2^E ~~~~ \text{with} ~~~~ E \in [-1, 1]$$
#
# $$ 2 \times 1 \times 2 \times 3 + 1 = 13$$
# What is the distribution on the real line?
# In[ ]:
d_1_values = [1]
d_2_values = [0, 1]
E_values = [1, 0, -1]
fig = plt.figure(figsize=(10.0, 1.0))
axes = fig.add_subplot(1, 1, 1)
for E in E_values:
    for d1 in d_1_values:
        for d2 in d_2_values:
            axes.plot( (d1 + d2 * 0.5) * 2**E, 0.0, 'r+', markersize=20)
            axes.plot(-(d1 + d2 * 0.5) * 2**E, 0.0, 'r+', markersize=20)

Similar to error 2.pdf101316, 6(46 PM01_errorPage 1 of 5http.docx

gptips1.0concrete.matConcrete_Data[1030x9 double array]tr.docx
gptips1.0concrete.matConcrete_Data[1030x9  double array]tr.docxgptips1.0concrete.matConcrete_Data[1030x9  double array]tr.docx
gptips1.0concrete.matConcrete_Data[1030x9 double array]tr.docxwhittemorelucilla
 
Numerical differentation with c
Numerical differentation with cNumerical differentation with c
Numerical differentation with cYagya Dev Bhardwaj
 
Scala as a Declarative Language
Scala as a Declarative LanguageScala as a Declarative Language
Scala as a Declarative Languagevsssuresh
 
C Programming Interview Questions
C Programming Interview QuestionsC Programming Interview Questions
C Programming Interview QuestionsGradeup
 
Python programing
Python programingPython programing
Python programinghamzagame
 
High-Performance Haskell
High-Performance HaskellHigh-Performance Haskell
High-Performance HaskellJohan Tibell
 
Lecture 5 – Computing with Numbers (Math Lib).pptx
Lecture 5 – Computing with Numbers (Math Lib).pptxLecture 5 – Computing with Numbers (Math Lib).pptx
Lecture 5 – Computing with Numbers (Math Lib).pptxjovannyflex
 
Lecture 5 – Computing with Numbers (Math Lib).pptx
Lecture 5 – Computing with Numbers (Math Lib).pptxLecture 5 – Computing with Numbers (Math Lib).pptx
Lecture 5 – Computing with Numbers (Math Lib).pptxjovannyflex
 
Scientific Computing II Numerical Tools & Algorithms - CEI40 - AGA
Scientific Computing II Numerical Tools & Algorithms - CEI40 - AGAScientific Computing II Numerical Tools & Algorithms - CEI40 - AGA
Scientific Computing II Numerical Tools & Algorithms - CEI40 - AGAAhmed Gamal Abdel Gawad
 
maXbox starter67 machine learning V
maXbox starter67 machine learning VmaXbox starter67 machine learning V
maXbox starter67 machine learning VMax Kleiner
 
Introduction to programming - class 3
Introduction to programming - class 3Introduction to programming - class 3
Introduction to programming - class 3Paul Brebner
 
MH prediction modeling and validation in r (2) classification 190709
MH prediction modeling and validation in r (2) classification 190709MH prediction modeling and validation in r (2) classification 190709
MH prediction modeling and validation in r (2) classification 190709Min-hyung Kim
 
Programming python quick intro for schools
Programming python quick intro for schoolsProgramming python quick intro for schools
Programming python quick intro for schoolsDan Bowen
 
OverviewThis hands-on lab allows you to follow and experiment w.docx
OverviewThis hands-on lab allows you to follow and experiment w.docxOverviewThis hands-on lab allows you to follow and experiment w.docx
OverviewThis hands-on lab allows you to follow and experiment w.docxgerardkortney
 
Python quickstart for programmers: Python Kung Fu
Python quickstart for programmers: Python Kung FuPython quickstart for programmers: Python Kung Fu
Python quickstart for programmers: Python Kung Fuclimatewarrior
 
Csci101 lect04 advanced_selection
Csci101 lect04 advanced_selectionCsci101 lect04 advanced_selection
Csci101 lect04 advanced_selectionElsayed Hemayed
 

Similar to error 2.pdf101316, 6(46 PM01_errorPage 1 of 5http.docx (20)

gptips1.0concrete.matConcrete_Data[1030x9 double array]tr.docx
gptips1.0concrete.matConcrete_Data[1030x9  double array]tr.docxgptips1.0concrete.matConcrete_Data[1030x9  double array]tr.docx
gptips1.0concrete.matConcrete_Data[1030x9 double array]tr.docx
 
CPP Homework Help
CPP Homework HelpCPP Homework Help
CPP Homework Help
 
Numerical differentation with c
Numerical differentation with cNumerical differentation with c
Numerical differentation with c
 
Scala as a Declarative Language
Scala as a Declarative LanguageScala as a Declarative Language
Scala as a Declarative Language
 
C Programming Interview Questions
C Programming Interview QuestionsC Programming Interview Questions
C Programming Interview Questions
 
Python programing
Python programingPython programing
Python programing
 
High-Performance Haskell
High-Performance HaskellHigh-Performance Haskell
High-Performance Haskell
 
Cpl
CplCpl
Cpl
 
Lecture 5 – Computing with Numbers (Math Lib).pptx
Lecture 5 – Computing with Numbers (Math Lib).pptxLecture 5 – Computing with Numbers (Math Lib).pptx
Lecture 5 – Computing with Numbers (Math Lib).pptx
 
Lecture 5 – Computing with Numbers (Math Lib).pptx
Lecture 5 – Computing with Numbers (Math Lib).pptxLecture 5 – Computing with Numbers (Math Lib).pptx
Lecture 5 – Computing with Numbers (Math Lib).pptx
 
Scientific Computing II Numerical Tools & Algorithms - CEI40 - AGA
Scientific Computing II Numerical Tools & Algorithms - CEI40 - AGAScientific Computing II Numerical Tools & Algorithms - CEI40 - AGA
Scientific Computing II Numerical Tools & Algorithms - CEI40 - AGA
 
maXbox starter67 machine learning V
maXbox starter67 machine learning VmaXbox starter67 machine learning V
maXbox starter67 machine learning V
 
Introduction to programming - class 3
Introduction to programming - class 3Introduction to programming - class 3
Introduction to programming - class 3
 
MH prediction modeling and validation in r (2) classification 190709
MH prediction modeling and validation in r (2) classification 190709MH prediction modeling and validation in r (2) classification 190709
MH prediction modeling and validation in r (2) classification 190709
 
Tutorial 2
Tutorial     2Tutorial     2
Tutorial 2
 
Programming python quick intro for schools
Programming python quick intro for schoolsProgramming python quick intro for schools
Programming python quick intro for schools
 
Curvefitting
CurvefittingCurvefitting
Curvefitting
 
OverviewThis hands-on lab allows you to follow and experiment w.docx
OverviewThis hands-on lab allows you to follow and experiment w.docxOverviewThis hands-on lab allows you to follow and experiment w.docx
OverviewThis hands-on lab allows you to follow and experiment w.docx
 
Python quickstart for programmers: Python Kung Fu
Python quickstart for programmers: Python Kung FuPython quickstart for programmers: Python Kung Fu
Python quickstart for programmers: Python Kung Fu
 
Csci101 lect04 advanced_selection
Csci101 lect04 advanced_selectionCsci101 lect04 advanced_selection
Csci101 lect04 advanced_selection
 

More from SALU18

AFRICAResearch Paper AssignmentInstructionsOverview.docx
AFRICAResearch Paper AssignmentInstructionsOverview.docxAFRICAResearch Paper AssignmentInstructionsOverview.docx
AFRICAResearch Paper AssignmentInstructionsOverview.docxSALU18
 
Adversarial ProceedingsCritically discuss with your classmates t.docx
Adversarial ProceedingsCritically discuss with your classmates t.docxAdversarial ProceedingsCritically discuss with your classmates t.docx
Adversarial ProceedingsCritically discuss with your classmates t.docxSALU18
 
Advances In Management .docx
Advances In Management                                        .docxAdvances In Management                                        .docx
Advances In Management .docxSALU18
 
African-American Literature An introduction to major African-Americ.docx
African-American Literature An introduction to major African-Americ.docxAfrican-American Literature An introduction to major African-Americ.docx
African-American Literature An introduction to major African-Americ.docxSALU18
 
African American Women and Healthcare I want to explain how heal.docx
African American Women and Healthcare I want to explain how heal.docxAfrican American Women and Healthcare I want to explain how heal.docx
African American Women and Healthcare I want to explain how heal.docxSALU18
 
Advocacy & Legislation in Early Childhood EducationAdvocacy & Le.docx
Advocacy & Legislation in Early Childhood EducationAdvocacy & Le.docxAdvocacy & Legislation in Early Childhood EducationAdvocacy & Le.docx
Advocacy & Legislation in Early Childhood EducationAdvocacy & Le.docxSALU18
 
Advertising is one of the most common forms of visual persuasion we .docx
Advertising is one of the most common forms of visual persuasion we .docxAdvertising is one of the most common forms of visual persuasion we .docx
Advertising is one of the most common forms of visual persuasion we .docxSALU18
 
Adult Health 1 Study GuideSensory Unit Chapters 63 & 64.docx
Adult Health 1 Study GuideSensory Unit Chapters 63 & 64.docxAdult Health 1 Study GuideSensory Unit Chapters 63 & 64.docx
Adult Health 1 Study GuideSensory Unit Chapters 63 & 64.docxSALU18
 
Advertising Campaign Management Part 3Jennifer Sundstrom-F.docx
Advertising Campaign Management Part 3Jennifer Sundstrom-F.docxAdvertising Campaign Management Part 3Jennifer Sundstrom-F.docx
Advertising Campaign Management Part 3Jennifer Sundstrom-F.docxSALU18
 
Adopt-a-Plant Project guidelinesOverviewThe purpose of this.docx
Adopt-a-Plant Project guidelinesOverviewThe purpose of this.docxAdopt-a-Plant Project guidelinesOverviewThe purpose of this.docx
Adopt-a-Plant Project guidelinesOverviewThe purpose of this.docxSALU18
 
ADM2302 M, N, P and Q Assignment # 4 Winter 2020 Page 1 .docx
ADM2302 M, N, P and Q  Assignment # 4 Winter 2020  Page 1 .docxADM2302 M, N, P and Q  Assignment # 4 Winter 2020  Page 1 .docx
ADM2302 M, N, P and Q Assignment # 4 Winter 2020 Page 1 .docxSALU18
 
Adlerian-Based Positive Group Counseling Interventions w ith.docx
Adlerian-Based Positive Group Counseling Interventions w ith.docxAdlerian-Based Positive Group Counseling Interventions w ith.docx
Adlerian-Based Positive Group Counseling Interventions w ith.docxSALU18
 
After completing the assessment, my Signature Theme Report produ.docx
After completing the assessment, my Signature Theme Report produ.docxAfter completing the assessment, my Signature Theme Report produ.docx
After completing the assessment, my Signature Theme Report produ.docxSALU18
 
After careful reading of the case material, consider and fully answe.docx
After careful reading of the case material, consider and fully answe.docxAfter careful reading of the case material, consider and fully answe.docx
After careful reading of the case material, consider and fully answe.docxSALU18
 
AffluentBe unique toConformDebatableDominantEn.docx
AffluentBe unique toConformDebatableDominantEn.docxAffluentBe unique toConformDebatableDominantEn.docx
AffluentBe unique toConformDebatableDominantEn.docxSALU18
 
Advocacy Advoc.docx
Advocacy Advoc.docxAdvocacy Advoc.docx
Advocacy Advoc.docxSALU18
 
Advanced persistent threats (APTs) have been thrust into the spotlig.docx
Advanced persistent threats (APTs) have been thrust into the spotlig.docxAdvanced persistent threats (APTs) have been thrust into the spotlig.docx
Advanced persistent threats (APTs) have been thrust into the spotlig.docxSALU18
 
Advanced persistent threatRecommendations for remediation .docx
Advanced persistent threatRecommendations for remediation .docxAdvanced persistent threatRecommendations for remediation .docx
Advanced persistent threatRecommendations for remediation .docxSALU18
 
Adultism refers to the oppression of young people by adults. The pop.docx
Adultism refers to the oppression of young people by adults. The pop.docxAdultism refers to the oppression of young people by adults. The pop.docx
Adultism refers to the oppression of young people by adults. The pop.docxSALU18
 
ADVANCE v.09212015 •APPLICANT DIVERSITY STATEMENT .docx
ADVANCE v.09212015 •APPLICANT DIVERSITY STATEMENT .docxADVANCE v.09212015 •APPLICANT DIVERSITY STATEMENT .docx
ADVANCE v.09212015 •APPLICANT DIVERSITY STATEMENT .docxSALU18
 

More from SALU18 (20)

AFRICAResearch Paper AssignmentInstructionsOverview.docx
AFRICAResearch Paper AssignmentInstructionsOverview.docxAFRICAResearch Paper AssignmentInstructionsOverview.docx
AFRICAResearch Paper AssignmentInstructionsOverview.docx
 
Adversarial ProceedingsCritically discuss with your classmates t.docx
Adversarial ProceedingsCritically discuss with your classmates t.docxAdversarial ProceedingsCritically discuss with your classmates t.docx
Adversarial ProceedingsCritically discuss with your classmates t.docx
 
Advances In Management .docx
Advances In Management                                        .docxAdvances In Management                                        .docx
Advances In Management .docx
 
African-American Literature An introduction to major African-Americ.docx
African-American Literature An introduction to major African-Americ.docxAfrican-American Literature An introduction to major African-Americ.docx
African-American Literature An introduction to major African-Americ.docx
 
African American Women and Healthcare I want to explain how heal.docx
African American Women and Healthcare I want to explain how heal.docxAfrican American Women and Healthcare I want to explain how heal.docx
African American Women and Healthcare I want to explain how heal.docx
 
Advocacy & Legislation in Early Childhood EducationAdvocacy & Le.docx
Advocacy & Legislation in Early Childhood EducationAdvocacy & Le.docxAdvocacy & Legislation in Early Childhood EducationAdvocacy & Le.docx
Advocacy & Legislation in Early Childhood EducationAdvocacy & Le.docx
 
Advertising is one of the most common forms of visual persuasion we .docx
Advertising is one of the most common forms of visual persuasion we .docxAdvertising is one of the most common forms of visual persuasion we .docx
Advertising is one of the most common forms of visual persuasion we .docx
 
Adult Health 1 Study GuideSensory Unit Chapters 63 & 64.docx
Adult Health 1 Study GuideSensory Unit Chapters 63 & 64.docxAdult Health 1 Study GuideSensory Unit Chapters 63 & 64.docx
Adult Health 1 Study GuideSensory Unit Chapters 63 & 64.docx
 
Advertising Campaign Management Part 3Jennifer Sundstrom-F.docx
Advertising Campaign Management Part 3Jennifer Sundstrom-F.docxAdvertising Campaign Management Part 3Jennifer Sundstrom-F.docx
Advertising Campaign Management Part 3Jennifer Sundstrom-F.docx
 
Adopt-a-Plant Project guidelinesOverviewThe purpose of this.docx
Adopt-a-Plant Project guidelinesOverviewThe purpose of this.docxAdopt-a-Plant Project guidelinesOverviewThe purpose of this.docx
Adopt-a-Plant Project guidelinesOverviewThe purpose of this.docx
 
ADM2302 M, N, P and Q Assignment # 4 Winter 2020 Page 1 .docx
ADM2302 M, N, P and Q  Assignment # 4 Winter 2020  Page 1 .docxADM2302 M, N, P and Q  Assignment # 4 Winter 2020  Page 1 .docx
ADM2302 M, N, P and Q Assignment # 4 Winter 2020 Page 1 .docx
 
Adlerian-Based Positive Group Counseling Interventions w ith.docx
Adlerian-Based Positive Group Counseling Interventions w ith.docxAdlerian-Based Positive Group Counseling Interventions w ith.docx
Adlerian-Based Positive Group Counseling Interventions w ith.docx
 
After completing the assessment, my Signature Theme Report produ.docx
After completing the assessment, my Signature Theme Report produ.docxAfter completing the assessment, my Signature Theme Report produ.docx
After completing the assessment, my Signature Theme Report produ.docx
 
After careful reading of the case material, consider and fully answe.docx
After careful reading of the case material, consider and fully answe.docxAfter careful reading of the case material, consider and fully answe.docx
After careful reading of the case material, consider and fully answe.docx
 
AffluentBe unique toConformDebatableDominantEn.docx
AffluentBe unique toConformDebatableDominantEn.docxAffluentBe unique toConformDebatableDominantEn.docx
AffluentBe unique toConformDebatableDominantEn.docx
 
Advocacy Advoc.docx
Advocacy Advoc.docxAdvocacy Advoc.docx
Advocacy Advoc.docx
 
Advanced persistent threats (APTs) have been thrust into the spotlig.docx
Advanced persistent threats (APTs) have been thrust into the spotlig.docxAdvanced persistent threats (APTs) have been thrust into the spotlig.docx
Advanced persistent threats (APTs) have been thrust into the spotlig.docx
 
Advanced persistent threatRecommendations for remediation .docx
Advanced persistent threatRecommendations for remediation .docxAdvanced persistent threatRecommendations for remediation .docx
Advanced persistent threatRecommendations for remediation .docx
 
Adultism refers to the oppression of young people by adults. The pop.docx
Adultism refers to the oppression of young people by adults. The pop.docxAdultism refers to the oppression of young people by adults. The pop.docx
Adultism refers to the oppression of young people by adults. The pop.docx
 
ADVANCE v.09212015 •APPLICANT DIVERSITY STATEMENT .docx
ADVANCE v.09212015 •APPLICANT DIVERSITY STATEMENT .docxADVANCE v.09212015 •APPLICANT DIVERSITY STATEMENT .docx
ADVANCE v.09212015 •APPLICANT DIVERSITY STATEMENT .docx
 

Recently uploaded

Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for BeginnersSabitha Banu
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 
CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxJiesonDelaCerna
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersSabitha Banu
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatYousafMalik24
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxAvyJaneVismanos
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfMahmoud M. Sallam
 
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,Virag Sontakke
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaVirag Sontakke
 
Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...jaredbarbolino94
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Celine George
 

Recently uploaded (20)

Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 
CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptx
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginners
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice great
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptx
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
 
ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdf
 
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of India
 
Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17
 

error 2.pdf101316, 6(46 PM01_errorPage 1 of 5http.docx

# Plot Sin x and x for the values between -pi and pi
x = np.linspace(-np.pi, np.pi, 101)
plt.plot(x, x, '-r', x, np.sin(x), 'bs')
plt.title("Comparing sin x to x on the whole domain")
plt.legend(["x", "Sin(x)"], loc=4)
plt.show()

# Now move the focus closer to 0 by picking a narrower range
x = np.linspace(-0.5, 0.5, 21)
plt.plot(x, x, '-r', x, np.sin(x), 'bs')
plt.title("Comparing sin x to x nearer to 0")
plt.legend(["x", "Sin(x)"], loc=4)
plt.show()

# Now we can plot the absolute error
error = np.absolute(np.sin(x) - x)
plt.plot(x, error, '-b')
plt.title("Error for $Sin(x) - x$")
plt.show()

# Now we can plot the relative error
rel_error = np.absolute(error / x)
plt.plot(x, rel_error, '-b')
plt.title("Relative Error for $Sin(x) - x$")
plt.show()

Model error arises in various forms. Here we take some population data, fit two different models, and analyze which model better describes the given data.

In [ ]: # Model Error
time = [0, 1, 2, 3, 4, 5]             # hours
growth = [20, 40, 75, 150, 297, 510]  # bacteria population
time = np.array(time)
growth = np.array(growth)

# First we can just plot the data to visualize it
plt.plot(time, growth, 'rs')
plt.title("Scatter plot for the bacteria population growth over time")
plt.xlabel('Time (hrs)')
plt.ylabel('Population')
plt.show()

# Now we can use the exponential model, y = a * b**x, to fit the data
a = 20.5122; b = 1.9238
y = a * b**time[:]
plt.plot(time, growth, 'rs', time, y, '-b')
plt.title("Exponential model fit")
plt.xlabel('Time (hrs)')
plt.ylabel('Population')
plt.legend(["Data", "Exponential Fit"], loc=4)
plt.show()

# Now we can use the power model, y = a * x**b, to fit the data
a = 32.5846; b = 1.572
y = a * time[:]**b
plt.plot(time, growth, 'rs', time, y, '-b')
  • 7. plt.title("Power model fit") plt.xlabel('Time (hrs)') plt.ylabel('Population') plt.legend(["Data", "Power Fit"], loc=4) plt.show() error.pdf 10/13/16, 12)40 PMHW1_error Page 1 of 11http://localhost:8888/nbconvert/html/fall_2016/source/HW1_ error.ipynb?download=false In [2]: %matplotlib inline %precision 16 import numpy import matplotlib.pyplot as plt Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel Restart) and then run all cells (in the menubar, select Cell Run All). Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below: HW 1 - Forms of Error Question 1 Find the absolute error, relative error, and decimal precision (number of significant decimal digits) for the following and approximations . Note that here we may also
  • 8. mean precision as compared to . In these cases use the absolute error to help define 's precision (each worth 5 points). (a) and (b) and (c) and for (Stirling's approximation) (d) and where is the Taylor polynomial approximation to expanded about . Consider . What vaule of is required for this approximation to be good to 6 digits of decimal precision? → → f f ̂ f f ̂ f = π = 3.14f ̂ f = π = 22/7f ̂ f = log(n!) = n log(n) − nf ̂ n = 5, 10, 100 f = ex = (x)f ̂ Tn (x)Tn ex x = 0 N = 1, 2, 3 N 10/13/16, 12)40 PMHW1_error Page 2 of 11http://localhost:8888/nbconvert/html/fall_2016/source/HW1_
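One way to make "decimal precision" concrete is to count how many leading digits of the approximation agree with the true value, for example via the base-10 logarithm of the relative error. The helper below is a sketch of that idea and is not part of the original notebook; the function name and the exact definition used (digits of agreement measured through the relative error) are illustrative choices.

In [ ]: import numpy

def decimal_precision(true_value, approximation):
    """Estimate the number of significant decimal digits of agreement.

    Uses -log10 of the relative error; other reasonable definitions exist.
    """
    relative_error = numpy.abs(true_value - approximation) / numpy.abs(true_value)
    return int(numpy.floor(-numpy.log10(relative_error)))

print decimal_precision(numpy.pi, 3.14)       # roughly 3 digits
print decimal_precision(numpy.pi, 22.0 / 7.0) # roughly 3 digits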
(a) Absolute error: $|\pi - 3.14| = 0.00159265358979$
Relative error: $\frac{|\pi - 3.14|}{\pi} = 0.000506957382897$
Precision: 3

print numpy.abs(numpy.pi - 3.14)
print numpy.abs(numpy.pi - 3.14) / numpy.pi

(b) Absolute error: $|\pi - 22/7| = 0.001264489227$
Relative error: $\left|1 - \frac{22}{7\pi}\right| \approx 0.000402499434771$
Precision: 3

print numpy.abs(numpy.pi - 22.0 / 7.0)
print numpy.abs(numpy.pi - 22.0 / 7.0) / numpy.pi

(c) Absolute error: $|\log(n!) - (n \log(n) - n)| = 1.7403021806115442,\ 2.0785616431350551,\ 3.2223569567543109$
Relative error: $\frac{|\log(n!) - (n \log(n) - n)|}{|\log(n!)|} = 0.3635102208239511,\ 0.1376128752494626,\ 0.0088589720368673$
Precision: determined using the relative error.

import scipy.misc as misc
n = numpy.array([5, 10, 100])
numpy.abs(numpy.log(misc.factorial(n)) - (n * numpy.log(n) - n))
numpy.abs(numpy.log(misc.factorial(n)) - (n * numpy.log(n) - n)) / numpy.abs(numpy.log(misc.factorial(n)))

(d) Absolute error: $\left| e^x - \sum_{n=0}^{N} \frac{x^n}{n!} \right|$
Relative error: $\dfrac{\left| e^x - \sum_{n=0}^{N} \frac{x^n}{n!} \right|}{|e^x|}$
Precision: since this part requires choosing an interval for $x$, however someone answers it they should use the same approach as in (c).
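For part (d) the solution above leaves the computation to the reader. The cell below is one possible sketch of that approach, evaluating the Taylor polynomial for a few values of $N$ at a single sample point; the choice $x = 1$ and the list of $N$ values are illustrative and not from the original notebook.

In [ ]: import numpy
import scipy.misc as misc

x = 1.0   # sample point; the assignment leaves the choice of interval open
for N in [1, 2, 3, 5, 10]:
    # Evaluate the Taylor polynomial T_N(x) about x = 0 term by term
    T_N = sum(x**n / misc.factorial(n) for n in range(N + 1))
    rel_error = numpy.abs(numpy.exp(x) - T_N) / numpy.abs(numpy.exp(x))
    print N, rel_error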
Question 2

(a) (10) Write a Python program to compute
$$S_N = \sum_{n=1}^{N} \left[ \frac{1}{n} - \frac{1}{n + 1} \right] = \sum_{n=1}^{N} \frac{1}{n(n + 1)}$$
once using the first summation and once using the second for $N = 10, 10^2, \ldots, 10^7$.

In [6]: def sum_1(N):
    """Compute the summation S_N defined as
    \sum^N_{n=1} \left[ \frac{1}{n} - \frac{1}{n+1} \right]

    :Input: *N* (int) The upper bound on the summation.

    Returns Sn (float)
    """
    ### BEGIN SOLUTION
    Sn = 0.0
    for n in xrange(1, N + 1):
        Sn += 1.0 / float(n) - 1.0 / (float(n) + 1.0)
    ### END SOLUTION
    return Sn

def sum_2(N):
    """Compute the summation S_N defined as
    \sum^N_{n=1} \frac{1}{n (n + 1)}

    :Input: *N* (int) The upper bound on the summation.

    Returns Sn (float)
    """
    ### BEGIN SOLUTION
    Sn = 0.0
    for n in xrange(1, N + 1):
        Sn += 1.0 / (float(n) * (float(n) + 1.0))
    ### END SOLUTION
    return Sn

In [7]: N = numpy.array([10**n for n in xrange(1, 8)])
answer = numpy.zeros((2, N.shape[0]))
for (n, upper_bound) in enumerate(N):
    answer[0, n] = sum_1(upper_bound)
    answer[1, n] = sum_2(upper_bound)

numpy.testing.assert_allclose(answer[0, :],
    numpy.array([0.9090909090909089, 0.9900990099009896, 0.9990009990009996,
                 0.9999000099990004, 0.9999900001000117, 0.9999990000010469,
                 0.9999998999998143]))
numpy.testing.assert_allclose(answer[1, :],
    numpy.array([0.9090909090909091, 0.9900990099009898, 0.9990009990009997,
                 0.9999000099990007, 0.9999900001000122, 0.9999990000010476,
                 0.9999998999998153]))
print "Success!"

(b) (5) Compute the absolute error between the two summation approaches.

In [10]: def abs_error(N):
    """Compute the absolute error of the two sums defined as
    \sum^N_{n=1} \left[ \frac{1}{n} - \frac{1}{n+1} \right] and \sum^N_{n=1} \frac{1}{n (n + 1)}
    respectively for the given N.

    :Input: *N* (int) The upper bound on the summation.

    Returns *error* (float)
    """
    ### BEGIN SOLUTION
    error = numpy.abs(sum_2(N) - sum_1(N))
    ### END SOLUTION
    return error

In [11]: N = numpy.array([10**n for n in xrange(1, 8)])
answer = numpy.zeros(N.shape)
for (n, upper_bound) in enumerate(N):
    answer[n] = abs_error(upper_bound)

numpy.testing.assert_allclose(answer,
    numpy.array([1.1102230246251565e-16, 1.1102230246251565e-16, 1.1102230246251565e-16,
                 3.3306690738754696e-16, 4.4408920985006262e-16, 6.6613381477509392e-16,
                 9.9920072216264089e-16]))
print "Success!"

(c) (10) Plot the relative and absolute error versus $N$. Also plot a line where $\epsilon_{machine}$ should be. Comment on what you see.

In [8]: fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)

# HINT! Use the plotting function semilogx to plot the errors
# Also, do not forget to label your plot
### BEGIN SOLUTION
N = numpy.array([10**n for n in xrange(1, 8)])
answer = numpy.zeros((2, N.shape[0]))
for (n, upper_bound) in enumerate(N):
    answer[0, n] = abs_error(upper_bound)
    answer[1, n] = numpy.abs(sum_1(upper_bound) - sum_2(upper_bound)) / numpy.abs(sum_2(upper_bound))

for n in xrange(2):
    axes = fig.add_subplot(1, 2, n + 1)
    axes.semilogx(N, answer[n, :])
    axes.semilogx(N, answer[n, :], 'o')
    axes.semilogx(N, numpy.finfo(float).eps * numpy.ones(N.shape))
    axes.set_xlabel("Number of Terms in Series")
    axes.set_ylabel("Absolute Error between Series")
### END SOLUTION

plt.show()

(d) (5) Theorize what may have led to the differences in answers.

Lots of possibilities here, just grade on being reasonable.
  • 18. is given by Just using the definitions of the Taylor polynomial we can simplify the relative error to for some with (Lagrange remainder). For this simplifies to If then we have no bound although taking more terms in the series will always allow for somewhat arbitrary control of the error. (b) Show that for large and , implies that we need at least terms in the series (where ). Hint Use Stirling's approximation . ex ≈ (x) = 1 + x + + + ⋯ +ex Tn x2 2! x3 3! xn n! x > 0 =Rn | − (x)|ex Tn | |ex
  • 19. ≤Rn ∣ ∣ ∣ xn+1 (n + 1)! ∣ ∣ ∣ = = = = ≤Rn | − (x)|ex Tn | |ex −∣ ∣ ∑ ∞ k=0 xk k! ∑ n k=0 xk k! ∣ ∣ | |ex ∣ ∣ ∑ ∞
  • 20. k=n+1 xk k! ∣ ∣ | |ex 1 | |ex ∣ ∣ ∣ eξ xn+1 (n + 1)! ∣ ∣ ∣ ∣ ∣ ∣ xn+1 (n + 1)! ∣ ∣ ∣ ξ = θx 0 < θ < 1 0 ≤ x ≤ 1
  • 21. ≤ ≤Rn 1 | |e1 ∣ ∣ ∣ ∣ e 1 1n+1 (n + 1)! ∣ ∣ ∣ ∣ ∣ ∣ ∣ 1 (n + 1)! ∣ ∣ ∣ x ≥ 1 x n ≤rn ϵmachine n > e ⋅ x e = exp(1) log(n!) ≈ n log n − n 10/13/16, 12)40 PMHW1_error
  • 22. Page 9 of 11http://localhost:8888/nbconvert/html/fall_2016/source/HW1_ error.ipynb?download=false Using the result from part (a) we have Since we can drop the absolute values Now taking the of both sides we have where we have used Stirling's approximation. Now simplifying the left side and exponentiating we have Now taking the root of both sides and using leading to which leads to what we wanted (approximately). (c) Write a Python function that accurately computes to the specified relative error tolerance and returns both the estimate on the range and the number of terms in the series needed over the interval . Note that the testing tolerance will be . Make sure to document your code including expected inputs, outputs, and assumptions being made. ≤ ≤rn ∣ ∣ ∣ x N+1 (N + 1)! ∣ ∣
  • 23. ∣ ϵmachine x ≫ 1 ≤x N+1 (N + 1)! ϵmachine log (N + 1) log x − (N + 1) log(N + 1) + N + 1 ≤ log ϵmachine log[ ] + N + 1 ≤ log( )xN + 1 N+1 ϵmachine ≤( )xN + 1 N+1 eN+1 ϵmachine N + 1 ( < 1ϵmachine ) 1 N+1 ≤ 1 ⇒ x ⋅ e ≤ N + 1xe N + 1 Tn [−2, 2] 8 ⋅ ϵmachine
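The $n > e \cdot x$ rule from part (b) is only a lower bound, and it can be sanity-checked numerically. The short loop below is an illustration added here (not part of the original solution); it finds the smallest $N$ with $x^{N+1}/(N+1)! < \epsilon_{machine}$ for a few values of $x$ and prints $e \cdot x$ alongside it for comparison.

In [ ]: import numpy

eps = numpy.finfo(float).eps
for x in [2.0, 5.0, 10.0]:
    term = x   # this is x**1 / 1!
    N = 0
    while term >= eps:
        N += 1
        term *= x / (N + 1)   # now term = x**(N+1) / (N+1)!
    # The N found should indeed exceed e * x, consistent with the lower bound
    print x, N, numpy.exp(1) * x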
In [3]: # HINT: Think about how we evaluated polynomials efficiently in class
import scipy.misc as misc

def Tn_exp(x, tolerance=1e-3):
    MAX_N = 100

    ### BEGIN SOLUTION
    method = 0
    for N in xrange(0, MAX_N + 1):
        if method == 0:
            # Direct method
            Tn = numpy.zeros(x.shape)
            for n in xrange(N + 1):
                Tn += x**n / misc.factorial(n)
        elif method == 1:
            # Use Horner's method!
            p = numpy.array([1.0 / misc.factorial(n - 1) for n in xrange(N + 1, 0, -1)])
            Tn = numpy.ones(x.shape) * p[0]
            for coefficient in p[1:]:
                Tn = Tn * x + coefficient
        elif method == 2:
            # Use direct evaluation through NumPy
            p = numpy.array([1.0 / misc.factorial(n - 1) for n in xrange(N + 1, 0, -1)])
            Tn = numpy.polyval(p, x)

        # Check stopping criteria
        if numpy.all(numpy.abs(Tn - numpy.exp(x)) / numpy.abs(numpy.exp(x)) < tolerance):
            break
    ### END SOLUTION

    return Tn, N

In [5]: x = numpy.linspace(-2, 2, 100)
tolerance = 8.0 * numpy.finfo(float).eps
answer, N = Tn_exp(x, tolerance=tolerance)
assert(numpy.all(numpy.abs(answer - numpy.exp(x)) / numpy.abs(numpy.exp(x)) < tolerance))
print "Success!"
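The three elif branches above are interchangeable ways of evaluating the same polynomial. Horner's method is usually the preferred choice because it needs only one multiplication and one addition per coefficient and is numerically well behaved. The standalone snippet below, added here purely as an illustration with arbitrary coefficients, shows the idea outside of the Taylor-series context:

In [ ]: # Evaluate p(x) = 2 + 3x - x**2 + 0.5x**3 with Horner's method:
# p(x) = 2 + x*(3 + x*(-1 + x*0.5))
coefficients = [0.5, -1.0, 3.0, 2.0]   # highest degree first
x = 1.7
result = coefficients[0]
for c in coefficients[1:]:
    result = result * x + c
# Both evaluations should agree
print result, 2.0 + 3.0 * x - x**2 + 0.5 * x**3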
Question 4

Given the Taylor polynomial expansions of two functions in powers of $\Delta x$, determine the order of approximation for their sum and product (determine the exponent $p$ that belongs in the $O(\Delta x^p)$ term).

Sum:

Product:
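When the bookkeeping gets tedious, the order of such an approximation can also be estimated numerically: evaluate the error for a sequence of decreasing $\Delta x$ and look at the factor by which it shrinks, since an $O(\Delta x^p)$ error drops by about $2^p$ when $\Delta x$ is halved. The cell below sketches this convergence test on a simple known example, $\cos\Delta x \approx 1 - \Delta x^2/2$, which is not one of the expansions from this question but demonstrates the technique:

In [ ]: import numpy

delta_x = numpy.array([0.1 / 2**i for i in range(6)])
error = numpy.abs(numpy.cos(delta_x) - (1.0 - delta_x**2 / 2.0))

# Estimated order p from successive halvings: error ratio ~ 2**p
p = numpy.log2(error[:-1] / error[1:])
print p   # should be close to 4, since the next term in the series is +dx**4/4!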
interpolation 2.pdf

Group Work 3 - Interpolation

In [ ]: %matplotlib inline
import numpy
import matplotlib.pyplot as plt

Additional resource: http://www.math.niu.edu/~dattab/MATH435.2013/INTERPOLATION

(a) Based on the principles of an interpolating polynomial, write the general system of equations for an interpolating polynomial $P_N(x)$ of degree $N$ that goes through $N+1$ points. Express this in matrix notation. What are the inputs to the problem we are solving?

1. $N+1$ distinct points $x_0, x_1, \ldots, x_N$
2. $N+1$ functional values $y_0, y_1, \ldots, y_N$

Property of the interpolating polynomial: $P_N(x_0) = y_0, \ \ldots, \ P_N(x_N) = y_N$.

We know that an interpolating polynomial of degree $N$ is of the form
$$P_N(x) = a_0 + a_1 x + \ldots + a_N x^N$$
With this, and the previous information, we can make $N+1$ equations:
$$P_N(x_0) = a_0 + a_1 x_0 + \ldots + a_N x_0^N = y_0$$
$$\vdots$$
$$P_N(x_N) = a_0 + a_1 x_N + \ldots + a_N x_N^N = y_N$$
This can be represented as a linear system $A\mathbf{x} = \mathbf{b}$:
$$\begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^N \\ 1 & x_1 & x_1^2 & \cdots & x_1^N \\ \vdots & & & & \vdots \\ 1 & x_N & x_N^2 & \cdots & x_N^N \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_N \end{bmatrix}$$
Clearly, our unknowns are the coefficients, which are the weights of the monomial basis.

Note, once the coefficients are determined, we multiply the coefficients by the monomial basis because they represent the respective weights for each element in the basis, and the result is the interpolating polynomial!

(b) What does the system of equations look like if you use the Lagrangian basis? Can you represent this in matrix form? Think about the basis and its role in the previous question. (Hint: start from the definition of an interpolating polynomial and what it must satisfy.)

Interpolating polynomial with the Lagrangian basis:
$$P_N(x) = \sum_{i=0}^{N} a_i \ell_i(x)$$
System of equations:
$$P_N(x_0) = a_0 \ell_0(x_0) + \cdots + a_N \ell_N(x_0) = y_0$$
$$\vdots$$
$$P_N(x_N) = a_0 \ell_0(x_N) + \cdots + a_N \ell_N(x_N) = y_N$$
Evaluate all of the Lagrangian basis functions using the definition of the basis. At the point $x_0$:
$$\ell_0(x_0) = \prod_{i=0, i \neq 0} \frac{x_0 - x_i}{x_0 - x_i} = 1, \quad \ell_1(x_0) = \prod_{i=0, i \neq 1} \frac{x_0 - x_i}{x_1 - x_i} = 0, \quad \ldots, \quad \ell_N(x_0) = 0$$
At the point $x_1$:
$$\ell_0(x_1) = 0, \quad \ell_1(x_1) = 1, \quad \ldots, \quad \ell_N(x_1) = 0$$
and similarly for the remaining points. In this way, combining all $(N+1)^2$ of these evaluations into a matrix, we get the identity system
$$I \mathbf{a} = \mathbf{y}$$
The interpolating polynomial can then be represented as
$$P_N(x) = \sum_{i=0}^{N} y_i \ell_i(x)$$

(c) Are the systems you just derived related? What conclusion can you draw based on these two examples about the form of the linear system to find the coefficients?

The systems are basis dependent.

(c) Generate $N+1$ random points (take $N+1$ as user input), and construct the interpolating polynomial using a monomial basis. For this exercise assume $x \in [-\pi, \pi]$.
  • 37. 10/19/16, 8:44 PM03_interpolation Page 4 of 7http://localhost:8888/notebooks/group/03_interpolation.ipynb construct the interpolating polynomial using a monomial basis. For this exercise assume .x ∈ [−π,π] 10/19/16, 8:44 PM03_interpolation Page 5 of 7http://localhost:8888/notebooks/group/03_interpolation.ipynb In [ ]: # Pick out random points num_points = 6 data = numpy.empty((num_points, 2)) data[:, 0] = numpy.random.random(num_points) * 2.0 * numpy.pi - numpy.pi data[:, 1] = numpy.random.random(num_points) N = num_points - 1 #1: Form Vandermonde matrix and b vector A = numpy.ones((num_points, num_points)) b = numpy.ones((num_points, 1)) A_prime = numpy.vander(data[:, 0], N = None, increasing = True) #2 solve system coefficients = numpy.linalg.solve(A_prime, data[:, 1]) #3 construct interpolating polynomial x = numpy.linspace(-numpy.pi, numpy.pi, 100)
  • 38. P = numpy.zeros(x.shape[0]) # first, create the monomial basis monomial_basis = numpy.ones((num_points, x.shape[0])) for i in xrange(num_points): monomial_basis[i, :] = x**i for n in range(num_points): P += monomial_basis[n, :] * coefficients[n] # Plot individual basis fig = plt.figure() axes = fig.add_subplot(1, 1, 1) for i in xrange(num_points): axes.plot(x, monomial_basis[i, :], label="$x^%s$" % i) axes.plot(data[i, 0], data[i, 1], 'ko', label = "Data") # Plot interpolating polynomial fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(x, P, label="$P_{%s}(x)$" % N) axes.set_xlabel("$x$") axes.set_ylabel("$P_{N}(x)$") axes.set_title("$P_{N}(x)$") axes.set_xlim((-numpy.pi, numpy.pi)) # Plot data points for point in data: axes.plot(point[0], point[1], 'ko', label = "Data") plt.show() 10/19/16, 8:44 PM03_interpolation Page 6 of
  • 39. 7http://localhost:8888/notebooks/group/03_interpolation.ipynb (d) Do the same as before except use a Lagrangian basis. In [ ]: (e) What do you observe about the basis when we leave the interval ?[−π,pi] # Pick out random points num_points = 10 data = numpy.empty((num_points, 2)) data[:, 0] = numpy.random.random(num_points) * 2.0 * numpy.pi - numpy.pi print data[:, 0] data[:, 1] = numpy.random.random(num_points) N = num_points - 1 x = numpy.linspace(-numpy.pi, numpy.pi, 100) # Step 1: Generate Lagrangian Basis # Note, we have N+1 weights y_0 ... y_N so we have N+1 basis functions # --> row size is then numPts & column size is the size of the vector x we are transforming lagrangian_basis = numpy.ones((num_points, x.shape[0])) for i in range(num_points): for j in range(num_points): if i != j: lagrangian_basis[i, :] *= (x - data[j][0]) / (data[i][0] - data # Step 2: Calculate Full Polynomial P = numpy.zeros(x.shape[0]) for i in range(numPts):
  • 40. P += lagrangian_basis[i, :] * data[i][1] # Plot individual basis fig = plt.figure() axes = fig.add_subplot(1, 1, 1) for i in xrange(num_points): axes.plot(x, lagrangian_basis[i, :], label="$ell_{%s}(x)$" % i) axes.plot(data[i, 0], data[i, 1], 'ko', label = "Data") # Plot polynomial fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(x, P, label="$P_{%s}(x)$" % degree) axes.set_xlabel("$x$") axes.set_ylabel("$P_{N}(x)$") axes.set_title("$P_{N}(x)$") for point in data: axes.plot(point[0], point[1], 'ko', label = "Data") plt.show() 10/19/16, 8:44 PM03_interpolation Page 7 of 7http://localhost:8888/notebooks/group/03_interpolation.ipynb They diverge quickly. interpolation.pdf 10/19/16, 8:43 PMHW3_interpolation
  • 41. Page 1 of 12http://localhost:8888/nbconvert/html/fall_2016/source/HW3_i nterpolation.ipynb?download=false In [2]: %matplotlib inline import numpy import matplotlib.pyplot as plt Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel Restart) and then run all cells (in the menubar, select Cell Run All). Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below: HW 3: Interpolation Question 1 Consider data at three points , , and . (a) (20) Analytically find the interpolating polynomial in the basis 1. Monomial: 2. Newton: 1. Monomials: We have the system of equations $$begin{bmatrix} 1 & x_0 & x_0^2 1 & x_1 & x_1^2
  • 42. 1 & x_2 & x_2^2 end{bmatrix} begin{bmatrix} p_0 p_1 p_2 end{bmatrix} = begin{bmatrix} y_0 y_1 y_2 end{bmatrix} $$ → → ( , ) = (0, 0)x0 y0 ( , ) = (1, 2)x1 y1 ( , ) = (2, 2)x2 y2 P(x) P(x) = + x +p0 p1 p2x2 P(x) = (x)∑2i=0 ai ni 10/19/16, 8:43 PMHW3_interpolation Page 2 of 12http://localhost:8888/nbconvert/html/fall_2016/source/HW3_i nterpolation.ipynb?download=false Taking a Newton like approach we can solve for : and similarly . Subtracting these we have
  • 43. leading to can then be solved for as and finally as The polynomial then can be written down explicitly. This can also be accomplished by using matrices. 2. Newton: The basis are calculated with so that p2 − = ( − ) + ( − ) = ( − ) + ( − )( + )y0 y1 p1 x0 x1 p2 x20 x21 p1 x0 x1 p2 x0 x1 x0 x1 = + ( + )−y0 y1−x0 x1 p1 p2 x0 x1 = + ( + )−y2 y1−x2 x1 p1 p2 x2 x1 − = ( − − + )−y2 y1−x2 x1 −y0 y1 −x0 x1 p2 x2 x1 x0 x1 =p2 −−y2 y1−x2 x1 −y0 y1 −x0 x1 −x2 x0 p1 = − ( + )p1
  • 44. −y0 y1 −x0 x1 −−y2 y1−x2 x1 −y0 y1 −x0 x1 −x2 x0 x0 x1 p0 = − − = − ( − ( + )) −p0 y0 p1x0 p2x20 y0 −y0 y1 −x0 x1 −−y2 y1−x2 x1 −y0 y1 −x0 x1 −x2 x0 x0 x1 x0 −−y2 y1−x2 x1 −y0 y1 −x0 x1 −x2 x0 x20 P(x) = − ( − ( + )) − + ( −y0 −y0 y1 −x0 x1 −−y2 y1−x2 x1 −y0 y1
  • 45. −x0 x1 −x2 x0 x0 x1 x0 −−y2 y1−x2 x1 −y0 y1 −x0 x1 −x2 x0 x20 −y0 y1 −x0 x1 −y2 y1 −x2 x1 x2 + ( ) −−y2 y1−x2 x1 −y0 y1 −x0 x1 −x2 x0 x2 (x)nj (x) = (x − )nj ∏j−1i=0 xi (x) = 1n0 10/19/16, 8:43 PMHW3_interpolation Page 3 of
  • 46. 12http://localhost:8888/nbconvert/html/fall_2016/source/HW3_i nterpolation.ipynb?download=false (b) (10) Show that these all lead to the same polynomial (show that is in fact unique). The coefficients are leading to a polynomial of the form (x) = (x − )n1 x0 (x) = (x − )(x − )n2 x0 x1 [ ] =y0 y0 [ , ] =y0 y1 −y1 y0 −x1 x0 [ , , ] = −y0 y1 y2 −y2 y1 ( − )( − )x2 x1 x2 x0 −y1 y0 ( − )( − )x1 x0 x2 x0 P(x) = + (x − ) + ( − ) (x − )(x − )y0 −y1 y0−x1 x0 x0 −y2 y1 ( − )( − )x2 x1 x2 x0 −y1 y0 ( − )( − )x1 x0 x2 x0 x0 x1
  • 47. P(x) 10/19/16, 8:43 PMHW3_interpolation Page 4 of 12http://localhost:8888/nbconvert/html/fall_2016/source/HW3_i nterpolation.ipynb?download=false The most straight forward way to do this is to gather terms multiplying each power of and show that they are equivalent in each representation: 1. Monomials: These are already collected into the powers: 2. Newton: Again we can collect terms (a little bit easier this time) and taking special note that the last basis has we can write : : : A more compact version of this uses matrices instead (significantly less tedious). x :x0 p0 = − ( − ( + )) −y0 −y0 y1 −x0 x1 −−y2 y1−x2 x1
  • 48. −y0 y1 −x0 x1 −x2 x0 x0 x1 x0 −−y2 y1−x2 x1 −y0 y1 −x0 x1 −x2 x0 x20 = − − y0 −y0 y1 −x0 x1 x0 −−y2 y1−x2 x1 −y0 y1 −x0 x1 −x2 x0 x1 x0 = − − ( − ) y0 −y0 y1−x0 x1 x0 −y2 y1 ( − )( − )x2 x1 x2 x0 −y0 y1 ( − )( − )x0 x1 x2 x0 x1 x0 = − + ( − )y0 −y1 y0−x1 x0 x0
  • 49. −y2 y1 ( − )( − )x2 x1 x2 x0 −y1 y0 ( − )( − )x1 x0 x2 x0 x1x0 :x1 p1 = − ( + ) −y0 y1 −x0 x1 −−y2 y1−x2 x1 −y0 y1 −x0 x1 −x2 x0 x0 x1 = − ( − ) ( + )−y1 y0−x1 x0 −y2 y1 ( − )( − )x2 x1 x2 x0 −y1 y0 ( − )( − )x1 x0 x2 x0 x0 x1 :x2 p2 = −−y2 y1−x2 x1 −y0 y1 −x0 x1 −x2 x0 = −−y2 y1( − )( − )x2 x1 x2 x0
  • 50. −y0 y1 ( − )( − )x0 x1 x2 x0 = −−y2 y1( − )( − )x2 x1 x2 x0 −y1 y0 ( − )( − )x1 x0 x2 x0 (x − )(x − ) = − x( + ) +x0 x1 x2 x1 x0 x1x0 x0 − + ( − )y0 −y1 y0−x1 x0 x0 −y2 y1( − )( − )x2 x1 x2 x0 −y1 y0( − )( − )x1 x0 x2 x0 x1x0 x1 − ( − ) ( + )−y1 y0−x1 x0 −y2 y1( − )( − )x2 x1 x2 x0 −y1 y0( − )( − )x1 x0 x2 x0 x1 x0 x2 −−y2 y1( − )( − )x2 x1 x2 x0 −y1 y0 ( − )( − )x1 x0 x2 x0 10/19/16, 8:43 PMHW3_interpolation Page 5 of 12http://localhost:8888/nbconvert/html/fall_2016/source/HW3_i nterpolation.ipynb?download=false (c) (10) Use the uniqueness of the interpolating polynomial to show that for general points at any value of (i.e. the interpolant of a constant is a constant regardless of ). Hint: Consider the Newton polynomial form and uniqueness.
Looking at the Newton form we can observe that all of the divided differences are identically $0$ except for the $[y_0]$ term, since all the $y_i$ are equal. This leaves us with only the constant term in the polynomial, which of course is identically 1.

Question 2

(10) The $n$th Chebyshev polynomial is characterized (up to a constant) by the identity
$$T_n(\cos\theta) = \cos(n\theta)$$
Use this identity to show that the Chebyshev polynomials are orthogonal on $x \in [-1, 1]$ with respect to the weight
$$w(x) = \frac{1}{\sqrt{1 - x^2}}$$
To do this you must prove that
$$\int_{-1}^{1} w(x)\, T_n(x)\, T_m(x)\, dx = \begin{cases} a & m = n \\ 0 & m \neq n \end{cases}$$
where $a$ is a finite constant (also find this coefficient).

Setting $x = \cos\theta$ (leading to $dx = -\sin\theta \, d\theta$) and using $T_n(\cos\theta) = \cos(n\theta)$ in the expression for the integral leads to
$$\int_{-1}^{1} w(x)\, T_n(x)\, T_m(x)\, dx = \int_{0}^{\pi} \frac{T_n(\cos\theta)\, T_m(\cos\theta)}{\sqrt{1 - \cos^2\theta}} \sin\theta \, d\theta = \int_{0}^{\pi} \cos(n\theta)\cos(m\theta)\, d\theta$$
Using the rules regarding the orthogonality of $\cos$ we have
$$\int_{-1}^{1} w(x)\, T_n(x)\, T_m(x)\, dx = \begin{cases} \pi & m = n = 0 \\ \frac{\pi}{2} & m = n \neq 0 \\ 0 & m \neq n \end{cases}$$

Question 3

(10) For $N = 4$ find the maximum value of $|\ell_2(x)|$ and its location for equispaced points on $[-1, 1]$.

Analytically we need to take the derivative of
$$\ell_i(x) = \prod_{j=0, j\neq i}^{4} \frac{x - x_j}{x_i - x_j}$$
to find
$$\frac{d}{dx}\ell_i(x) = \sum_{k=0, k\neq i}^{4} \frac{1}{x_i - x_k} \prod_{j=0, j\neq i, j\neq k}^{4} \frac{x - x_j}{x_i - x_j}$$
The max/min of $\ell_i$ are at
$$0 = \sum_{k=0, k\neq i}^{4} \left( \prod_{j=0, j\neq i, j\neq k}^{4} (x - x_j) \right)$$
One simplification that is useful here is to use the values of the points $x_0 = -1$, $x_1 = -1/2$, $x_2 = 0$, $x_3 = 1/2$, and $x_4 = 1$. This allows $\ell_2(x)$ to be reduced to a quadratic function:
$$\ell_2(x) = \frac{(x+1)(x+1/2)(x-1/2)(x-1)}{(1)(1/2)(-1/2)(-1)} = 4(x^2 - 1)(x^2 - 1/4)$$
$$\frac{d}{dx}\ell_2(x) = 4\left( 2x(x^2 - 1) + 2x(x^2 - 1/4) \right) = 8x\left( 2x^2 - 5/4 \right)$$
which has roots at
$$x = 0, \ \pm\sqrt{\frac{5}{8}}$$
From here we can plug in the values and find that the maximum is at $x = 0$ with a value of $|\ell_2(0)| = 1$.

Question 4

Consider the Lebesgue function
$$\lambda_N(x) = \sum_{i=0}^{N} \left| \ell_i(x) \right|$$
where $\ell_i(x)$ are Lagrange basis functions for a given set of $x_i$. The maximum of the Lebesgue function is called the Lebesgue constant $\Lambda_N$, and these are clearly related to Lagrangian interpolation as they provide a first estimate for the interpolation error. Unfortunately, $\Lambda_N$ is not uniformly bounded regardless of the nodes used, as one can show that it grows at least like $\log N$. Note, $\Lambda_N$ is the infinity-norm of the linear operator mapping data to interpolant on the given grid and interval.

(a) (5) What do you expect the Lebesgue function to look like? Are there key points where we will know the function value exactly?

The primary observation is that $\lambda_N(x_i) = 1$ at the nodes $x = x_i$ for all $i = 0, \ldots, N$.

(b) (10) Plot the Lebesgue function for $x \in [-1, 1]$ for $N = 5, 10, 20$ with
$$x_i = -1 + \frac{2i}{N}, \quad i = 0, 1, \ldots, N.$$
For the case where $N = 20$ comment on what you see (you may need to use semilogy to see the results).
In [4]: def lebesgue(x, data):
    """Compute the Lebesgue function on x for the given interpolation points

    :Input:
     - *x* (numpy.ndarray) x values that the function will be evaluated at
     - *data* (numpy.ndarray) Interpolation points

    :Output:
     - (numpy.ndarray) The Lebesgue function evaluated at x
    """
    lebesgue = numpy.zeros(x.shape[0])
    for i in xrange(data.shape[0]):
        lagrange_basis = numpy.ones(x.shape[0])
        for j in xrange(data.shape[0]):
            if i != j:
                lagrange_basis *= (x - data[j]) / (data[i] - data[j])
        lebesgue += numpy.abs(lagrange_basis)
    return lebesgue

# Plot for each N
x = numpy.linspace(-1, 1, 1000)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 3)
for (i, N) in enumerate([5, 10, 20]):
    data = -1.0 + 2.0 * numpy.arange(N + 1) / N
    y = lebesgue(x, data)
    axes = fig.add_subplot(1, 3, i + 1)
    axes.semilogy(x, y, 'k')
    axes.semilogy(data, numpy.ones(N + 1), 'ro')
    axes.set_xlim((-1, 1))
    axes.set_ylim((0.0, numpy.max(y)))

Due to numerical precision the $N = 20$ case does not quite get the evaluation at $x = x_i$ correct. We could evaluate exactly at these points and we would see what we expect.

(c) (10) Plot the Lebesgue function for $x \in [-1, 1]$ for $N = 5, 10, 20$ with
$$x_i = \cos\left( \frac{(2i - 1)\pi}{2N} \right), \quad i = 1, \ldots, N + 1.$$
Again comment on what you see in the case $N = 20$.

In [16]: x = numpy.linspace(-1, 1, 1000)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 3)
for (i, N) in enumerate([5, 10, 34]):
    # data = numpy.cos((2.0 * numpy.arange(N) + 1.0) / (2.0 * N) * numpy.pi)
    data = numpy.cos(numpy.arange(N + 1) * numpy.pi / N)
    y = lebesgue(x, data)
    axes = fig.add_subplot(1, 3, i + 1)
    axes.plot(x, y, 'k')
    axes.plot(data, numpy.ones(N + 1), 'ro')
    axes.set_xlim((-1, 1))
    axes.set_ylim((0.0, numpy.max(y)))

Same problem as in part (b).

(d) (5) What do you observe about the Lebesgue function for each of the distributions of points?

The growth of the Lebesgue constant is much slower with the Chebyshev points. The local maxima between the nodes are also all of roughly the same height.

(e) (10) Using suitable values for $N$ plot the Lebesgue constants for each of the above cases. Make sure to use a suitably large number of points to evaluate the function at. Graphically demonstrate that the constants grow with the predicted growth rate $\log N$. Describe what you observe.

In [17]: leb_constant = lambda x, data: numpy.max(lebesgue(x, data))

N_range = numpy.array([20, 30, 40, 50, 60])
lebesgue_constant = numpy.empty((N_range.shape[0], 2))
x = numpy.linspace(-1, 1, 1000)
for (i, N) in enumerate(N_range):
    data = -1.0 + 2.0 * numpy.arange(N + 1) / N
    lebesgue_constant[i, 0] = numpy.max(lebesgue(x, data))
    data = numpy.cos(numpy.arange(N) * numpy.pi / (N - 1))
    lebesgue_constant[i, 1] = numpy.max(lebesgue(x, data))

fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(N_range, lebesgue_constant[:, 0], 'bo', label="equispaced")
axes.loglog(N_range, lebesgue_constant[:, 1], 'ro', label="Chebyshev")
axes.loglog(N_range, numpy.log(N_range), 'k--', label="$\log N$")
axes.legend(loc=2)
plt.show()
  • 62. label="Chebyshev") axes.loglog(N_range, numpy.log(N_range), 'k--', label="$log N$") axes.legend(loc=2) plt.show() Clearly the equispaced points have a Lebesgue constant that is increasing much faster. Both show the approximate growth though.log N roots 2.pdf 10/13/16, 6(46 PM02_roots Page 1 of 5http://localhost:8888/nbconvert/html/group/02_roots.ipynb?do wnload=false In [3]: %matplotlib inline import numpy import matplotlib.pyplot as plt Group Work 2 - Rooting for the Optimum After you are done please submit this on Vocareum as with the homework. Newton's Method: For the following consider the function (a) Write down the Newton iteration for . (b) The first step in setting up Newton's method is to pick a good initial guess. One way to do this is to find two points that and where . Find two such points for .
  • 63. or f (x) = cos(x) − 2x. f (x) xn+1 = −xn f ( )xn ( )f ′ xn = +xn cos(x) − 2x sin(x) + 2 x0 x1 sign(f ( )) ≠ sign(f ( ))x0 x1 f (x) ( , ) = (0, π/4)x0 x1 (−1, −2) 10/13/16, 6(46 PM02_roots Page 2 of 5http://localhost:8888/nbconvert/html/group/02_roots.ipynb?do wnload=false (c) Using your update formula, your initial guess, and Newton's method find the root of . Feel free to use the code demonstrated in class. Make sure to use plots to get a visual understanding for what is going on. Additional things to play with: 1. Choose a small max step and display the results to see how
  • 64. the newton method converges to the root. 2. Choose different tolerances and see how many iterations it takes to converge to the root. 3. Choose a "bad" initial guess and see what happens. In [19]: def newton(x_0, f, f_prime, max_steps=100, tolerance=1e-4): # Initial guess x_n = x_0 success = False for n in xrange(max_steps): if numpy.abs(f(x_n)) < tolerance: success = True break x_n = x_n - f(x_n) / f_prime(x_n) if success: return x_n, n else: raise ValueError("Method did not converge!") # Demo code f = lambda x: numpy.cos(x) - 2.0 * x f_prime = lambda x: -numpy.sin(x) - 2.0 print newton(0.3, f, f_prime) The Secant Method For the following consider the function (a) Write down the iteration for the secant method.
  • 65. f (x) f (x) = − x + 1.x3 (0.4501875310743772, 2) 10/13/16, 6(46 PM02_roots Page 3 of 5http://localhost:8888/nbconvert/html/group/02_roots.ipynb?do wnload=false (b) The advantage of the secant method over Newton's method is that we don't need to calculate the derivative of the function. The disadvantage is that now we need two initial guesses to start the secant method. As we did with Newton's method find two points and with the same properties as before (choose a bracket). (c) Using your update formula, your initial guess, and the secant method find the root of . Again feel free to use the code demonstrated in class and use plots to visualize your results. Additional things to play with: 1. Choose a small max step and display the results to see how the newton method converges to the root. 2. Choose different tolerances and see how many iterations it
  • 66. takes to converge to the root. 3. Choose a "bad" initial guess and see what happens. = −xk+1 xk ( − + 1)( − )x3k xk xk xk−1 ( − + 1) − f ( )x3k xk xk−1 x0 x1 (−1, −2) f (x) 10/13/16, 6(46 PM02_roots Page 4 of 5http://localhost:8888/nbconvert/html/group/02_roots.ipynb?do wnload=false In [18]: def secant(bracket, f, max_steps=100, tolerance=1e-4): x_n = bracket[1] x_nm = bracket[0] success = False for n in xrange(max_steps): if numpy.abs(f(x_n)) < tolerance: success = True break x_np = x_n - f(x_n) * (x_n - x_nm) / (f(x_n) - f(x_nm)) x_nm = x_n x_n = x_np if success:
  • 67. return x_n, n else: raise ValueError("Secant method did not converge.") f = lambda x: x**3 - x + 1.0 x = numpy.linspace(-2.0,-1.0,11) # Initial guess bracket = [-1.0, -2.0] print secant(bracket, f) Comparing convergence Now that we have seen both the methods, lets compare them for the function (a) For the new function derive the iteration for both Newton's method and the secant method. f (x) = x − cos(x) f (x) (-1.324707936532088, 5) 10/13/16, 6(46 PM02_roots Page 5 of 5http://localhost:8888/nbconvert/html/group/02_roots.ipynb?do wnload=false Newton:
  • 68. Secant: (b) Choose a bracket to start both methods. (c) Using your code from before (you could do this easily if you write the above as a function) see how the two methods compare in terms of the number of the number of iterations it takes to converge. Play around with your choice of bracket and see how this might impact both methods. In [26]: f = lambda x: x - numpy.cos(x) f_prime = lambda x: 1.0 + numpy.sin(x) # Initial guess bracket = [-1.0, 1.0] max_steps = 100 tolerance = 1e-10 print "Newton: ", newton(bracket[0], f, f_prime, max_steps=max_steps, tolerance=tolerance) print "Secant: ", secant(bracket, f, max_steps=max_steps, tolerance=to lerance) = −xn+1 xn x − cos(x) 1 + sin(x) = −xn+1 xn ( − cos( ))( − )xn xn xn xn−1 − cos( ) − + cos( )xn xn xn−1 xn−1 (0, π/4)
  • 69. Newton: (0.73908513321516067, 8) Secant: (0.73908513321516067, 6) roots.pdf 10/13/16, 12)40 PMHW2_roots Page 1 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false In [1]: %matplotlib inline import numpy import matplotlib.pyplot as plt Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel Restart) and then run all cells (in the menubar, select Cell Run All). Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below: HW 2: Root Finding and Optimization Question 1 - Finding the Root Let's say that we wanted to calculate given that and and that we did not want to use the function sqrt directly. One way to do this is to solve for the zeros of the function . Note that not all the methods will work!
  • 70. Make sure to handle the case where . We are only looking for the positive root of . (a) (5 points) Write a function that uses fixed-point iteration to solve for the zeros of . Note: There are multiple ways to write the iteration function , some work better than others. Make sure to use the input function to formulate this. → → M‾‾√ M ∈ ℝ M > 0 f (x) = − Mx2 =M0 M‾‾√ f (x) f (x) g(x) f (x) 10/13/16, 12)40 PMHW2_roots Page 2 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false In [2]: def fixed_point(x_0, f, tolerance): """Find the zeros of the given function f using fixed-point iterat ion
  • 71. :Input: - *x_0* (float) - Initial iterate - *f* (function) - The function that will be analyzed - *tolerance* (float) - Stopping tolerance for iteration :Output: If the iteration was successful the return values are: - *M* (float) - Zero found via the given intial iterate. - *n* (int) - Number of iterations it took to achieve the specifi ed tolerance. otherwise - *x* (float) - Last iterate found - *n* (int) - *n = -1* """ # Parameters MAX_STEPS = 1000 ### BEGIN SOLUTION g = lambda x: x - f(x) g = lambda x: f(x) + x x = x_0 if numpy.abs(f(x)) < tolerance: success = True n = 0 else: success = False for n in xrange(1, MAX_STEPS + 1): x = g(x) if numpy.abs(f(x)) < tolerance: success = True break
  • 72. if not success: return x, -1 ### END SOLUTION return x, n 10/13/16, 12)40 PMHW2_roots Page 3 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false In [3]: M = 1.8 TOLERANCE = 1e-10 f = lambda x: x**2 - M # Note that this test probably will fail try: M_f, n = fixed_point(2.0, f, TOLERANCE) except OverflowError: print "Fixed-point test failed!" print "Success!" else: if n == -1: print "Fixed-point test failed!" print "Success!" else: print M_f, n raise ValueError("Test should have failed!") (b) (5 points) Write a function that uses Newton's method to find the roots of . The analytical derivative of is provided.
  • 73. f (x) (x)f ′ Fixed-point test failed! Success! 10/13/16, 12)40 PMHW2_roots Page 4 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false In [4]: def newton(x_0, f, f_prime, tolerance): """Find the zeros of the given function f using Newton's method :Input: - *M_0* (float) - Initial iterate - *f* (function) - The function that will be analyzed - *f_prime* (function) - The derivative of *f* - *tolerance* (float) - Stopping tolerance for iteration :Output: If the iteration was successful the return values are: - *M* (float) - Zero found via the given intial iterate. - *n* (int) - Number of iterations it took to achieve the specifi ed tolerance. otherwise - *M* (float) - Last iterate found - *n* (int) - *n = -1* """
  • 74. # Parameters MAX_STEPS = 1000 ### BEGIN SOLUTION x = x_0 if numpy.abs(f(x)) < tolerance: success = True n = 0 else: success = False for n in xrange(1, MAX_STEPS + 1): x = x - f(x) / f_prime(x) if numpy.abs(f(x)) < tolerance: success = True break if not success: return x, -1 ### END SOLUTION return x, n 10/13/16, 12)40 PMHW2_roots Page 5 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false In [5]: M = 3.0 TOLERANCE = 1e-10 f = lambda x: x**2 - M f_prime = lambda x: 2.0 * x
  • 75. M_f, n = newton(2.0, f, f_prime, TOLERANCE) numpy.testing.assert_almost_equal(M_f, numpy.sqrt(M)) print M_f, n assert(n == 4) M_f, n = newton(numpy.sqrt(M), f, f_prime, TOLERANCE) print M_f, n assert(n == 0) print "Success!" (c) (5 points) Write a function to find the zeros of using the secant method.f (x) 1.73205080757 4 1.73205080757 0 Success! 10/13/16, 12)40 PMHW2_roots Page 6 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false In [6]: def secant(x_0, f, tolerance): """Find the zeros of the given function f using the secant method :Input: - *M_0* (float) - Initial bracket - *f* (function) - The function that will be analyzed - *tolerance* (float) - Stopping tolerance for iteration :Output:
  • 76. If the iteration was successful the return values are: - *M* (float) - Zero found via the given intial iterate. - *n* (int) - Number of iterations it took to achieve the specifi ed tolerance. otherwise - *M* (float) - Last iterate found - *n* (int) - *n = -1* """ # Parameters MAX_STEPS = 1000 ### BEGIN SOLUTION x = x_0 if numpy.abs(f(x[1])) < tolerance: success = True n = 0 else: success = False for n in xrange(1, MAX_STEPS + 1): x_new = x[1] - f(x[1]) * (x[1] - x[0]) / (f(x[1]) - f(x[0] )) x[0] = x[1] x[1] = x_new if numpy.abs(f(x[1])) < tolerance: success = True break if not success: return x[1], -1 else: x = x[1] ### END SOLUTION
  • 77. return x, n 10/13/16, 12)40 PMHW2_roots Page 7 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false In [7]: M = 3.0 TOLERANCE = 1e-10 f = lambda x: x**2 - M M_f, n = secant([0.0, 3.0], f, TOLERANCE) numpy.testing.assert_almost_equal(M_f, numpy.sqrt(M)) print M_f, n assert(n == 7) M_f, n = secant([1.0, numpy.sqrt(M)], f, TOLERANCE) assert(n == 0) print "Success!" (d) (5 points) Using the theory and illustrative plots why the fixed-point method did not work (pick a bracket that demonstrates the problem well). The range is not contained within the domain and therefore fixed-point iteration will not converge. The plot below should be included. 1.73205080757 7 Success!
  • 78. 10/13/16, 12)40 PMHW2_roots Page 8 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false In [8]: # Place plotting code here if needed ### BEGIN SOLUTION x = numpy.linspace(-3.0, 5.0, 100) bracket = [1.5, 1.8] fig = plt.figure() axes = fig.add_subplot(1, 1, 1)#, aspect='equal') axes.plot(x, x**2 - 3.0, 'r') axes.plot(x, numpy.zeros(x.shape), 'b') axes.set_xlabel("x") axes.set_ylabel("f(x)") axes.set_xlim([1.0, 2.0]) axes.set_ylim([-1.5, 1.0]) # Plot domain and range axes.plot(numpy.ones(x.shape) * bracket[0], x, '--k') axes.plot(numpy.ones(x.shape) * bracket[1], x, '--k') axes.plot(x, numpy.ones(x.shape) * (bracket[0]**2 - 3.0), '--k') axes.plot(x, numpy.ones(x.shape) * (bracket[1]**2 - 3.0), '--k') plt.show() ### END SOLUTION Question 2 - Bessel Function Zeros The zeros of the Bessel functions can be important for a number of applications. Considering only
  • 79. we are going to find the first ten zeros of by using a hybrid approach. (x)J0 x ≥ 0 (x)J0 10/13/16, 12)40 PMHW2_roots Page 9 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false (a) (5 points) Plot the Bessel function and its zeros on the same plot. Note that the module scipy.special contains functions dealing with the Bessel functions (jn). In [9]: import scipy.special x = numpy.linspace(0.0, 50.0, 100) fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(x, scipy.special.jn(0, x)) axes.plot(x, numpy.zeros(x.shape),'k--') axes.plot(scipy.special.jn_zeros(0, 10), numpy.zeros(10), 'ro') print scipy.special.jn_zeros(0, 10) axes.set_title("Bessel Function $J_0(x)") axes.set_xlabel("x") axes.set_ylabel("$J_0(x)$") plt.show()
[  2.40482556   5.52007811   8.65372791  11.79153444  14.93091771
  18.07106397  21.21163663  24.35247153  27.49347913  30.63460647]

(b) (15 points) Now write a function j0_zeros that takes two tolerances, a bracket size tolerance bracket_tolerance and a tolerance for the final convergence tolerance. Given an initial bracket, the function should perform secant iterations until the bracket size is less than bracket_tolerance. If this is successful then proceed with Newton's method using the newest value of the bracket until tolerance is reached. Return both the zero found and the number of steps needed in each iteration. Also write a docstring for the function.

Notes:
- Newton's method by itself does not work here given the initial brackets provided.
- The secant method does work, however it is slower than the approach outlined.
- Try playing a bit yourself with the tolerances used.
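One detail worth noting before the solution (added here as an aside): Newton's method needs $J_0'(x)$, and the Bessel identity $J_0'(x) = -J_1(x)$ supplies it. The short cell below is a numerical spot check of that identity using only scipy.special.jn as above; the test point x0 and step h are arbitrary choices.

In [ ]: import scipy.special

        # Compare -J_1(x0) against a centered finite-difference approximation of J_0'(x0)
        x0 = 5.0   # arbitrary test point
        h = 1e-6   # small step for the finite difference
        fd_derivative = (scipy.special.jn(0, x0 + h) - scipy.special.jn(0, x0 - h)) / (2.0 * h)
        print("finite difference: %s,  -J_1(x0): %s" % (fd_derivative, -scipy.special.jn(1, x0)))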
  • 81. 10/13/16, 12)40 PMHW2_roots Page 11 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false In [10]: import scipy.special # Note that the num_steps being returned should be a list # of the number of steps being used in each method def j0_zeros(x0, bracket_tolerance, tolerance): ### BEGIN SOLUTION # Parameters MAX_STEPS = 100 # Output num_steps = [0, 0] # Useful functions f = lambda x: scipy.special.jn(0, x) f_prime = lambda x: -scipy.special.jn(1, x) x = x0 if numpy.abs(f(x0[0])) < tolerance or numpy.abs(f(x0[1])) < tolera nce: success = True n = 0 else: success = False for n in xrange(1, MAX_STEPS + 1): x_new = x[1] - f(x[1]) * (x[1] - x[0]) / (f(x[1]) - f(x[0] ))
  • 82. x[0] = x[1] x[1] = x_new if numpy.abs(x[1] - x[0]) < bracket_tolerance: success = True num_steps[0] = n break if not success: return x[1], -1 else: x[1], num_steps[1] = newton(x[1], f, f_prime, tolerance) x = x[1] ### END SOLUTION return x, num_steps 10/13/16, 12)40 PMHW2_roots Page 12 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false In [11]: brackets = [[ 2.0, 3.0], [ 4.0, 7.0], [ 7.0, 10.0], [10.0, 12.0], [13.0, 15.0], [17.0, 19.0], [19.0, 22.0], [22.0, 26.0], [26.0, 29.0], [29.0, 32.0]] zero = [] for bracket in brackets: x, num_steps = j0_zeros(bracket, 1e-1, 1e-15) print x, num_steps zero.append(x) numpy.testing.assert_allclose(zero, scipy.special.jn_zeros(0,
10), rtol=1e-14)
print "Success!"

2.4048255577 [2, 3]
5.52007811029 [4, 2]
8.65372791291 [2, 3]
11.791534439 [3, 2]
14.9309177085 [2, 2]
18.0710639679 [2, 2]
21.2116366299 [3, 2]
24.3524715307 [4, 2]
27.493479132 [2, 3]
30.6346064684 [3, 2]
Success!

Question 3 - Newton's Method Convergence

Recall that Newton's method converges as

$$|\epsilon_{n+1}| = \frac{|f''(c)|}{2 |f'(x_n)|} |\epsilon_n|^2$$

with $\epsilon_n = x_n - x^*$, where $x^*$ is the true solution and $c$ is between $x_n$ and $x^*$.

(a) (10 points) Show that the Newton iteration when $f(x) = x^2 - M$ with $M > 0$ is

$$x_{n+1} = \frac{1}{2} \left( x_n + \frac{M}{x_n} \right)$$

(b) (10 points) From this update scheme show that

$$x_{n+1} - \sqrt{M} = \frac{(x_n - \sqrt{M})^2}{2 x_n}$$

(c) (10 points) Confirm that the asymptotic error convergence matches the general convergence for Newton's method.

(a) With $f(x) = x^2 - M$ and $f'(x) = 2 x_n$:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^2 - M}{2 x_n} = x_n - \frac{1}{2} \left( x_n - \frac{M}{x_n} \right) = \frac{1}{2} \left( x_n + \frac{M}{x_n} \right)$$

(b) Completing the square inside the update:

$$x_{n+1} = \frac{1}{2 x_n} \left( x_n^2 + M \right) = \frac{1}{2 x_n} \left( x_n^2 + M - 2 \sqrt{M} x_n + 2 \sqrt{M} x_n \right) = \frac{1}{2 x_n} \left( x_n - \sqrt{M} \right)^2 + \sqrt{M}$$

$$\Rightarrow \quad x_{n+1} - \sqrt{M} = \frac{(x_n - \sqrt{M})^2}{2 x_n}$$

(c) For this $f$ the general convergence constant is

$$\frac{|f''(c)|}{2 |f'(x_n)|} = \frac{2}{4 x_n} = \frac{1}{2 x_n}$$

so that

$$\epsilon_{n+1} = \frac{1}{2 x_n} \epsilon_n^2.$$

The connection of course is that the previous formula is really $\epsilon_{n+1} = \frac{1}{2 x_n} \epsilon_n^2$, which confirms the result directly.

Question 4 - Optimization of a Data Series

For the following questions we are given a set of data $(t_0, y_0), (t_1, y_1), \ldots, (t_N, y_N)$.

(a) (15 points) Write a function that takes in the data series $(t_i, y_i)$ and finds the value at a point $t^*$ by constructing the equation of the line between the two data points that bound $t^*$ and evaluating the resulting function at $t^*$ (see the interpolation formula sketched below).

Hints:
- Make sure to handle the case that $t^* = t_i$.
- If $t^* < t_0$ or $t^* > t_N$ then return the corresponding value $y_0$ or $y_N$.
- If you write your function so that $t^*$ can be an array you can use the plotting code in the cell. Otherwise just delete it.
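One way to write the line referred to in part (a) (sketched here for reference; the index $n$ denotes the first node with $t_n \geq t^*$, so $t_{n-1} \leq t^* \leq t_n$):

$$y^* = y_n + \frac{y_{n-1} - y_n}{t_{n-1} - t_n} \, (t^* - t_n),$$

which is exactly the expression evaluated in the solution cell that follows.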
In [12]: def linear_eval(t, y, t_star):
             # BEGIN SOLUTION
             if isinstance(t_star, float):
                 t_star = [t_star]
                 y_star = [0.0]
             else:
                 y_star = numpy.empty(t_star.shape)

             for (i, t_star_val) in enumerate(t_star):
                 if t_star_val < t[0]:
                     y_star[i] = y[0]
                 elif t_star_val > t[-1]:
                     y_star[i] = y[-1]
                 else:
                     for (n, t_val) in enumerate(t):
                         if t_val > t_star_val:
                             y_star[i] = (y[n-1] - y[n]) / (t[n-1] - t[n]) * (t_star_val - t[n]) + y[n]
                             break
                         elif t_val == t_star_val:
                             y_star[i] = y[n]
                             break
             # END SOLUTION
             return y_star

         N = 10
         t_fine = numpy.linspace(-numpy.pi, numpy.pi, 100)
         t_rand = numpy.random.rand(N + 1) * (2.0 * numpy.pi) - numpy.pi
         t_rand.sort()
         f = lambda x: numpy.sin(x) * numpy.cos(x)

         fig = plt.figure()
         fig.set_figwidth(fig.get_figwidth() * 1.5)
         axes = fig.add_subplot(1, 1, 1)
         axes.plot(t_fine, f(t_fine), 'k-', label="True")
         axes.plot(t_rand, f(t_rand), 'og', label="Sample Data")
         axes.plot(t_fine, linear_eval(t_rand, f(t_rand), t_fine), 'xb', label="linear_eval")
         axes.set_xlim((-numpy.pi, numpy.pi))
         axes.set_title("Demo Plot")
         axes.set_xlabel('$t$')
         axes.set_ylabel('$f(t)$')
         axes.legend()
         plt.show()
  • 88. y_star[i] = (y[n-1] - y[n]) / (t[n-1] - t[n]) * (t _star_val - t[n]) + y[n] break elif t_val == t_star_val: y_star[i] = y[n] break # END SOLUTION return y_star N = 10 t_fine = numpy.linspace(-numpy.pi, numpy.pi, 100) t_rand = numpy.random.rand(N + 1) * (2.0 * numpy.pi) - numpy.pi t_rand.sort() f = lambda x: numpy.sin(x) * numpy.cos(x) fig = plt.figure() fig.set_figwidth(fig.get_figwidth()*1.5) axes = fig.add_subplot(1, 1, 1) axes.plot(t_fine, f(t_fine), 'k-', label="True") axes.plot(t_rand, f(t_rand), 'og', label="Sample Data") axes.plot(t_fine, linear_eval(t_rand, f(t_rand), t_fine), 'xb', label= "linear_eval") axes.set_xlim((-numpy.pi, numpy.pi)) axes.set_title("Demo Plot") axes.set_xlabel('$t$') axes.set_ylabel('$f(t)$') axes.legend() plt.show() 10/13/16, 12)40 PMHW2_roots
  • 89. Page 16 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false In [13]: N = 100 f = lambda x: numpy.sin(x) * numpy.cos(x) t = numpy.linspace(-1, 1, N + 1) t_star = 0.5 answer = linear_eval(t, f(t), t_star) if isinstance(answer, list): answer = answer[0] print "Computed solution: %s" % answer print "True solution: %s" % f(t_star) numpy.testing.assert_almost_equal(answer, f(t_star), verbose=True, dec imal=7) print "Success!" (b) (10 points) Using the function you wrote in part (a) write a function that uses Golden search to find the minimum of a series of data. Again you can use the plotting code available if your linear_eval function from part (a) handles arrays. In [14]: def golden_search(bracket, t, y, max_steps=100, tolerance=1e-4): phi = (numpy.sqrt(5.0) - 1.0) / 2.0 # BEGIN SOLUTION f = lambda x: linear_eval(t, y, x) x = numpy.array([bracket[0], None, None, bracket[1]]) x[1] = x[3] - phi * (x[3] - x[0]) x[2] = x[0] + phi * (x[3] - x[0])
  • 90. Computed solution: 0.420735492404 True solution: 0.420735492404 Success! 10/13/16, 12)40 PMHW2_roots Page 17 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false success = False for n in xrange(max_steps): if numpy.abs(x[3] - x[0]) < tolerance: success = True t_star = (x[3] + x[0]) / 2.0 break f_1 = f(x[1]) f_2 = f(x[2]) if f_1 > f_2: x[3] = x[2] x[2] = x[1] x[1] = x[3] - phi * (x[3] - x[0]) else: x[0] = x[1] x[1] = x[2] x[2] = x[0] + phi * (x[3] - x[0]) if not success: raise ValueError("Unable to converge to requested tolerance.") # END SOLUTION
  • 91. return t_star N = 50 t = numpy.random.rand(N + 1) * (2.0 * numpy.pi) - numpy.pi t.sort() y = numpy.sin(t) * numpy.cos(t) t_star = golden_search([0.1, 3.0 * numpy.pi / 4.0], t, y) t_true = numpy.pi / 4.0 fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(t, y, 'x', label="data") t_fine = numpy.linspace(-numpy.pi, numpy.pi, 100) axes.plot(t_fine, numpy.sin(t_fine) * numpy.cos(t_fine), 'k', label="$ f(x)$") axes.plot(t_star, linear_eval(t, y, t_star), 'go') axes.plot(t_true, numpy.sin(t_true) * numpy.cos(t_true), 'ko', label=" True") axes.set_xlim((0.0, numpy.pi / 2.0)) axes.set_ylim((0.0, 1.0)) plt.show() 10/13/16, 12)40 PMHW2_roots Page 18 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false In [15]: N = 100 t = numpy.random.rand(N + 1) * (2.0 * numpy.pi) - numpy.pi t.sort()
         y = numpy.sin(t) * numpy.cos(t)

         t_star = golden_search([0.1, 3.0 * numpy.pi / 4.0], t, y)
         t_true = numpy.pi / 4.0
         abs_error = numpy.abs(t_star - t_true)
         rel_error = numpy.abs(t_star - t_true) / numpy.abs(t_true)
         print "Error: %s, %s" % (abs_error, rel_error)

         numpy.testing.assert_allclose(abs_error, 0.0, rtol=1e-1, atol=1e-1)
         print "Success!"

Error: 0.0150948731852, 0.0192193894622
Success!

(c) (5 points) Below is sample code that plots the number of sample points $N$ vs. the relative error. Note that, because we are sampling at random points, we do each $N$ 6 times and average the relative error to reduce noise. Additionally a line is drawn representing what would be linear (1st order) convergence. Modify this code and try it out on other problems. Do you continue to see linear convergence? What about if you change how we sample points? Make sure that you change your initial interval and range of values of $t$ inside the loop.
In [16]: f = lambda t: numpy.sin(t) * numpy.cos(t)

         N_range = numpy.array([2**n for n in range(4, 10)], dtype=int)
         rel_error = numpy.zeros(len(N_range))
         t_true = numpy.pi / 4.0

         for (i, N) in enumerate(N_range):
             for j in xrange(6):
                 t = numpy.random.rand(N + 1) * (2.0 * numpy.pi) - numpy.pi
                 t.sort()
                 y = f(t)
                 t_star = golden_search([0.1, 3.0 * numpy.pi / 4.0], t, y)
                 rel_error[i] += numpy.abs(t_star - t_true) / numpy.abs(t_true)
             rel_error[i] /= 6

         order_C = lambda N, error, order: numpy.exp(numpy.log(error) - order * numpy.log(N))

         fig = plt.figure()
         axes = fig.add_subplot(1, 1, 1)
         axes.loglog(N_range, rel_error, 'ko', label="Ave. Error")
         axes.loglog(N_range, order_C(N_range[0], rel_error[0], -1.0) * N_range**(-1.0), 'r', label="1st order")
         axes.loglog(N_range, order_C(N_range[0], rel_error[0], -2.0) * N_range**(-2.0), 'b', label="2nd order")
         axes.set_xlabel("N")
         axes.set_ylabel("Relative Error")
         axes.legend()
         plt.show()
  • 94. 10/13/16, 12)40 PMHW2_roots Page 20 of 20http://localhost:8888/nbconvert/html/fall_2016/source/HW2_r oots.ipynb?download=false This really is dependent on what they explore. It should be possible to get second order convergence but depends on what they change. Anything that seems reasonable and demonstrates that they explored the options is good. differentiation.py # coding: utf-8 # <table> # <tr align=left><td><img align=left src="./images/CC- BY.png"> # <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF- approved MIT license. (c) Kyle T. Mandli</td> # </table> # In[ ]: get_ipython().magic(u'matplotlib inline') import numpy import matplotlib.pyplot as plt import matplotlib.patches as patches
  • 95. # # Numerical Differentiation # # **GOAL:** Given a set of $N+1$ points $(x_i, y_i)$ compute the derivative of a given order to a specified accuracy. # # **Approach:** Find the interpolating polynomial $P_N(x)$ and differentiate that. # ### Newton's Form # # For ease of analysis we will write $P_N(x)$ in Newton's form which looks like # # $$P_N(x) = sum^N_{j=0} a_j n_j(x)$$ # # where # # $$n_j(x) = prod^{j-1}_{i=0} (x - x_i)$$ # # and the $a_j = [y_0, ldots, y_j]$ are the divided differences defined in general as # # $$[y_i] = y_i ~~~~~ i in {0,ldots, N+1}$$ # # and # # $$[y_i, ldots , y_{i+j}] = frac{[y_{i+1}, ldots , y_{i + j}] - [y_{i},ldots,y_{i+j-1}]}{x_{i+j} - x_{i}} ~~~~~ i in {0,ldots,N+1 - j} ~~~~ j in {1,ldots, N+1}$$ # These formulas are recursively defined but not so helpful, here are a few examples to start out with: # # $$[y_0] = y_0$$ #
  • 96. # $$[y_0, y_1] = frac{y_1 - y_0}{x_1 - x_0}$$ # # $$[y_0, y_1, y_2] = frac{[y_1, y_2] - [y_0, y_1]}{x_{2} - x_{0}} = frac{frac{y_2 - y_1}{x_2 - x_1} - frac{y_1 - y_0}{x_1 - x_0}}{x_2 - x_0} = frac{y_2 - y_1}{(x_2 - x_1)(x_2 - x_0)} - frac{y_1 - y_0}{(x_1 - x_0)(x_2 - x_0)}$$ # The benefit of writing a polynomial like this is that it isolates the $x$ dependence (we can easily take derivatives of this form). # # In general then $P_N(x)$ can be written in Newton's form as # # $$P_N(x) = y_0 + (x-x_0)[y_0, y_1] + (x - x_0) (x - x_1) [y_0, y_1, y_2] + cdots + (x-x_0) (x-x_1) cdots (x-x_{N-1}) [y_0, y_1, ldots, y_{N}]$$ # As another concrete example consider a quadratic polynomial written in Newton's form # # $$P_2(x) = [y_0] + (x - x_0) [y_0, y_1] + (x - x_0)(x - x_1) [y_0, y_1, y_2] = y_0 + (x - x_0) frac{y_1 - y_0}{x_1 - x_0} + (x - x_0)(x - x_1) left ( frac{y_2 - y_1}{(x_2 - x_1)(x_2 - x_0)} - frac{y_1 - y_0}{(x_1 - x_0)(x_2 - x_0)} right )$$ # # Recall that the interpolating polynomial of degree $N$ through these points is unique! # In[ ]: def divided_difference(x, y, N=50): # print x.shape, N if N == 0: raise Exception("Reached recurssion limit!") # Reached the end of the recurssion
  • 97. if y.shape[0] == 1: return y[0] elif y.shape[0] == 2: return (y[1] - y[0]) / (x[1] - x[0]) else: return (divided_difference(x[1:], y[1:], N=N-1) - divided_difference(x[:-1], y[:-1], N=N-1)) / (x[-1] - x[0]) # Calculate a polynomial in Newton Form data = numpy.array([[-2.0, 1.0], [-1.5, -1.0], [-0.5, -3.0], [0.0, - 2.0], [1.0, 3.0], [2.0, 1.0]]) N = data.shape[0] - 1 x = numpy.linspace(-2.0, 2.0, 100) # Construct basis functions newton_basis = numpy.ones((N + 1, x.shape[0])) for j in xrange(N + 1): for i in xrange(j): newton_basis[j, :] *= (x - data[i, 0]) # Construct full polynomial P = numpy.zeros(x.shape) for j in xrange(N + 1): P += divided_difference(data[:j + 1, 0], data[:j + 1, 1]) * newton_basis[j, :] # Plot basis and interpolant fig = plt.figure() fig.set_figwidth(2.0 * fig.get_figwidth()) axes = [None, None] axes[0] = fig.add_subplot(1, 2, 1) axes[1] = fig.add_subplot(1, 2, 2) for j in xrange(N + 1): axes[0].plot(x, newton_basis[j, :], label='$n_%s$'%j)
  • 98. axes[1].plot(data[j, 0], data[j, 1],'ko') axes[1].plot(x, P) axes[0].set_title("Newton Polynomial Basis") axes[0].set_xlabel("x") axes[0].set_ylabel("$n_j(x)$") axes[0].legend(loc='upper left') axes[1].set_title("Interpolant $P_%s(x)$" % N) axes[1].set_xlabel("x") axes[1].set_ylabel("$P_%s(x)$" % N) plt.show() # ### Error Analysis # # Given $N + 1$ points we can form an interpolant $P_N(x)$ of degree $N$ where # # $$f(x) = P_N(x) + R_N(x)$$ # We know from Lagrange's Theorem that the remainder term looks like # # $$R_N(x) = (x - x_0)(x - x_1)cdots (x - x_{N})(x - x_{N+1}) frac{f^{(N+1)}(c)}{(N+1)!}$$ # # noting that we need to require that $f(x) in C^{N+1}$ on the interval of interest. Taking the derivative of the interpolant $P_N(x)$ then leads to # # $$P_N'(x) = [y_0, y_1] + ((x - x_1) + (x - x_0)) [y_0, y_1, y_2] + cdots + left(sum^{N-1}_{i=0}left( prod^{N- 1}_{j=0,~jneq i} (x - x_j) right )right ) [y_0, y_1, ldots, y_N]$$
  • 99. # Similarly we can find the derivative of the remainder term $R_N(x)$ as # # $$R_N'(x) = left(sum^{N}_{i=0} left( prod^{N}_{j=0,~jneq i} (x - x_j) right )right ) frac{f^{(N+1)}(c)}{(N+1)!}$$ # Now if we consider the approximation of the derivative evaluated at one of our data points $(x_k, y_k)$ these expressions simplify such that # # $$f'(x_k) = P_N'(x_k) + R_N'(x_k)$$ # If we let $Delta x = max_i |x_k - x_i|$ we then know that the remainder term will be $mathcal{O}(Delta x^N)$ as $Delta x rightarrow 0$ thus showing that this approach converges and we can find arbitrarily high order approximations. # In[ ]: # Compute the approximation to the derivative # data = numpy.array([[-2.0, 1.0], [-1.5, -1.0], [-0.5, -3.0], [0.0, -2.0], [1.0, 3.0], [2.0, 1.0]]) num_points = 15 data = numpy.empty((num_points, 2)) data[:, 0] = numpy.linspace(-2.0, 2.0, num_points) data[:, 1] = numpy.sin(data[:, 0]) N = data.shape[0] - 1 x = numpy.linspace(-2.0, 2.0, 100) # General form of derivative of P_N'(x) P_prime = numpy.zeros(x.shape) newton_basis_prime = numpy.empty(x.shape) product = numpy.empty(x.shape) for n in xrange(N):
  • 100. newton_basis_prime = 0.0 for i in xrange(n): product = 1.0 for j in xrange(n): if j != i: product *= (x - data[j, 0]) newton_basis_prime += product P_prime += divided_difference(data[:n+1, 0], data[:n+1, 1]) * newton_basis_prime fig = plt.figure() fig.set_figwidth(2.0 * fig.get_figwidth()) axes = [None, None] axes[0] = fig.add_subplot(1, 2, 1) axes[1] = fig.add_subplot(1, 2, 2) axes[0].set_title("$f'(x)$") axes[1].set_title("Close up of $f'(x)$") for j in xrange(2): axes[j].plot(x, numpy.cos(x), 'k') axes[j].plot(x, P_prime, 'ro') axes[j].set_xlabel("x") axes[j].set_ylabel("$f'(x)$ and $hat{f}'(x)$") axes[0].add_patch(patches.Rectangle((0.7, 0.6), 0.8, -0.5, fill=None, color='blue')) axes[1].add_patch(patches.Rectangle((0.7, 0.6), 0.8, -0.5, fill=None, color='blue')) axes[1].set_xlim([0.69,1.51]) axes[1].set_ylim([0.09,0.61]) plt.show()
  • 101. # ## Examples # # Often in practice we only use a small number of data points to derive a differentiation formula. In the context of differential equations we also often have $f(x)$ so that $f(x_k) = y_k$ and we can approximate the derivative of a known function $f(x)$. # ### Example 1: 1st order Forward and Backward Differences # # Using 2 points we can get an approximation that is $mathcal{O}(Delta x)$: # # $$f'(x) approx P_1'(x) = [y_0, y_1] = frac{y_1 - y_0}{x_1 - x_0} = frac{y_1 - y_0}{Delta x} = frac{f(x_1) - f(x_0)}{Delta x}$$ # We can also calculate the error as # # $$R_1'(x) = -Delta x frac{f''(c)}{2}$$ # We can also derive the "forward" and "backward" formulas by considering the question slightly differently. Say we want $f'(x_n)$, then the "forward" finite-difference can be written as # # $$f'(x_n) approx D_1^+ = frac{f(x_{n+1}) - f(x_n)}{Delta x}$$ # # and the "backward" finite-difference as # # $$f'(x_n) approx D_1^- = frac{f(x_n) - f(x_{n-1})}{Delta x}$$ # Note these approximations should be familiar to use as the limit as $Delta x rightarrow 0$ these are no longer approximations but equivalent definitions of the derivative at $x_n$.
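# As a quick spot check (added for illustration, not in the original notes), the cell below evaluates $D_1^+$ and $D_1^-$ at a single, arbitrarily chosen point and compares them with the exact derivative; both differences should agree with $\cos(x_n)$ to roughly $\mathcal{O}(\Delta x)$.

# In[ ]:

f = lambda x: numpy.sin(x)
f_prime = lambda x: numpy.cos(x)

x_n = 1.0       # arbitrary sample point
delta_x = 0.1   # arbitrary step size

D_plus = (f(x_n + delta_x) - f(x_n)) / delta_x     # forward difference
D_minus = (f(x_n) - f(x_n - delta_x)) / delta_x    # backward difference
print("D_1^+ error: %s" % abs(D_plus - f_prime(x_n)))
print("D_1^- error: %s" % abs(D_minus - f_prime(x_n)))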
  • 102. # In[ ]: f = lambda x: numpy.sin(x) f_prime = lambda x: numpy.cos(x) # Use uniform discretization x = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, 1000) N = 20 x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N) delta_x = x_hat[1] - x_hat[0] # Compute forward difference using a loop f_prime_hat = numpy.empty(x_hat.shape) for i in xrange(N - 1): f_prime_hat[i] = (f(x_hat[i+1]) - f(x_hat[i])) / delta_x f_prime_hat[-1] = (f(x_hat[i]) - f(x_hat[i-1])) / delta_x # Vector based calculation # f_prime_hat[:-1] = (f(x_hat[1:]) - f(x_hat[:-1])) / (delta_x) # Use first-order differences for points at edge of domain f_prime_hat[-1] = (f(x_hat[-1]) - f(x_hat[-2])) / delta_x # Backward Difference at x_N fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(x, f_prime(x), 'k') axes.plot(x_hat + 0.5 * delta_x, f_prime_hat, 'ro') axes.set_xlim((x[0], x[-1])) axes.set_ylim((-1.1, 1.1)) axes.set_xlabel('x') axes.set_ylabel("$f'(x)$ and $D_1^-(x_n)$") axes.set_title("Backward Differences for $f(x) = sin(x)$")
  • 103. plt.show() # #### Computing Order of Convergence # # $$begin{aligned} # e(Delta x) &= C Delta x^n # log e(Delta x) &= log C + n log Delta x # end{aligned}$$ # # Slope of line is $n$ when computing this! We can also match the first point by solving for $C$: # # $$C = e^{log e(Delta x) - n log Delta x}$$ # In[ ]: # Compute the error as a function of delta_x delta_x = [] error = [] # for N in xrange(2, 101): for N in xrange(50, 1000, 50): x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N) delta_x.append(x_hat[1] - x_hat[0]) # Compute forward difference f_prime_hat = numpy.empty(x_hat.shape) f_prime_hat[:-1] = (f(x_hat[1:]) - f(x_hat[:-1])) / (delta_x[- 1]) # Use first-order differences for points at edge of domain f_prime_hat[-1] = (f(x_hat[-1]) - f(x_hat[-2])) / delta_x[-1] # Backward Difference at x_N error.append(numpy.linalg.norm(numpy.abs(f_prime(x_hat + 0.5 * delta_x[-1]) - f_prime_hat), ord=numpy.infty))
  • 104. error = numpy.array(error) delta_x = numpy.array(delta_x) fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.loglog(delta_x, error, 'ko', label="Approx. Derivative") order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x)) axes.loglog(delta_x, order_C(delta_x[0], error[0], 1.0) * delta_x**1.0, 'r--', label="1st Order") axes.loglog(delta_x, order_C(delta_x[0], error[0], 2.0) * delta_x**2.0, 'b--', label="2nd Order") axes.legend(loc=4) axes.set_title("Convergence of 1st Order Differences") axes.set_xlabel("$Delta x$") axes.set_ylabel("$|f'(x) - hat{f}'(x)|$") plt.show() # ### Example 2: 2nd Order Centered Difference # # Now lets use 3 points to calculate the 2nd order accurate finite-difference. Consider the points $(x_{n}, y_{n})$, $(x_{n-1}, y_{n-1})$, and $(x_{n+1}, y_{n+1})$, from before we have # # $$P_2'(x) = [f(x_n), f(x_{n+1})] + ((x - x_n) + (x - x_{n+1})) [f(x_n), f(x_{n+1}), f(x_{n-1})]$$ # # $$= frac{f(x_{n+1}) - f(x_n)}{x_{n+1} - x_n} + ((x - x_n) + (x - x_{n+1})) left ( frac{f(x_{n-1}) - f(x_{n+1})}{(x_{n-1} - x_{n+1})(x_{n-1} - x_n)} - frac{f(x_{n+1}) -
  • 105. f(x_n)}{(x_{n+1} - x_n)(x_{n-1} - x_n)} right )$$ # Evaluating at $x_n$ and assuming the points $x_{n-1}, x_n, x_{n+1}$ are evenly spaced leads to # # $$P_2'(x_n) = frac{f(x_{n+1}) - f(x_n)}{Delta x} - Delta x left ( frac{f(x_{n-1}) - f(x_{n+1})}{2Delta x^2} + frac{f(x_{n+1}) - f(x_n)}{Delta x^2} right )$$ # # $$=frac{f(x_{n+1}) - f(x_n)}{Delta x} - left ( frac{f(x_{n+1}) - 2f(x_n) + f(x_{n-1})}{2Delta x}right )$$ # # $$=frac{2f(x_{n+1}) - 2f(x_n) - f(x_{n+1}) + 2f(x_n) - f(x_{n-1})}{2 Delta x}$$ # # $$=frac{f(x_{n+1}) - f(x_{n-1})}{2 Delta x}$$ # This finite-difference is second order accurate and is centered about the point it is meant to approximate ($x_n$). We can show that it is second order by again considering the remainder term's derivative # # $$R_2'(x) = left(sum^{2}_{i=0} left( prod^{2}_{j=0,~jneq i} (x - x_j) right )right ) frac{f'''(c)}{3!}$$ # # $$= left ( (x - x_{n+1}) (x - x_{n-1}) + (x-x_n) (x-x_{n-1}) + (x-x_n)(x-x_{n+1}) right ) frac{f'''(c)}{3!}$$ # Again evaluating this expression at $x = x_n$ and assuming evenly space points we have # # $$R_2'(x_n) = -Delta x^2 frac{f'''(c)}{3!}$$ # # showing that our error is $mathcal{O}(Delta x^2)$.
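# As an illustrative aside (not part of the original notes): since the centered difference error is $\mathcal{O}(\Delta x^2)$, halving $\Delta x$ should cut the error at a point by roughly a factor of four. The short cell below checks this at an arbitrarily chosen point.

# In[ ]:

f = lambda x: numpy.sin(x)
f_prime = lambda x: numpy.cos(x)

x_n = 1.0  # arbitrary sample point
for delta_x in (0.1, 0.05, 0.025):
    D_2 = (f(x_n + delta_x) - f(x_n - delta_x)) / (2.0 * delta_x)  # centered difference
    print("delta_x = %s, error = %s" % (delta_x, abs(D_2 - f_prime(x_n))))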
  • 106. # In[ ]: f = lambda x: numpy.sin(x) f_prime = lambda x: numpy.cos(x) # Use uniform discretization x = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, 1000) N = 20 x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N) delta_x = x_hat[1] - x_hat[0] # Compute derivative f_prime_hat = numpy.empty(x_hat.shape) f_prime_hat[1:-1] = (f(x_hat[2:]) - f(x_hat[:-2])) / (2 * delta_x) # Use first-order differences for points at edge of domain f_prime_hat[0] = (f(x_hat[1]) - f(x_hat[0])) / delta_x # Forward Difference at x_0 f_prime_hat[-1] = (f(x_hat[-1]) - f(x_hat[-2])) / delta_x # Backward Difference at x_N # f_prime_hat[0] = (-3.0 * f(x_hat[0]) + 4.0 * f(x_hat[1]) - f(x_hat[2])) / (2.0 * delta_x) # f_prime_hat[-1] = (3.0 * f(x_hat[-1]) - 4.0 * f(x_hat[-2]) + f(x_hat[-3])) / (2.0 * delta_x) fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(x, f_prime(x), 'k') axes.plot(x_hat, f_prime_hat, 'ro') axes.set_xlim((x[0], x[-1])) # axes.set_ylim((-1.1, 1.1)) axes.set_xlabel('x') axes.set_ylabel("$f'(x)$ and $D_2(x_n)$") axes.set_title("Second Order Centered Differences $D_2(x_n)$ for $f(x)$")
  • 107. plt.show() # In[ ]: # Compute the error as a function of delta_x delta_x = [] error = [] # for N in xrange(2, 101): for N in xrange(50, 1000, 50): x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N + 1) delta_x.append(x_hat[1] - x_hat[0]) # Compute derivative f_prime_hat = numpy.empty(x_hat.shape) f_prime_hat[1:-1] = (f(x_hat[2:]) - f(x_hat[:-2])) / (2 * delta_x[-1]) # Use first-order differences for points at edge of domain # f_prime_hat[0] = (f(x_hat[1]) - f(x_hat[0])) / delta_x[-1] # f_prime_hat[-1] = (f(x_hat[-1]) - f(x_hat[-2])) / delta_x[-1] # Use second-order differences for points at edge of domain f_prime_hat[0] = (-3.0 * f(x_hat[0]) + 4.0 * f(x_hat[1]) + - f(x_hat[2])) / (2.0 * delta_x[-1]) f_prime_hat[-1] = ( 3.0 * f(x_hat[-1]) + -4.0 * f(x_hat[-2]) + f(x_hat[-3])) / (2.0 * delta_x[-1]) error.append(numpy.linalg.norm(numpy.abs(f_prime(x_hat) - f_prime_hat), ord=numpy.infty)) error = numpy.array(error) delta_x = numpy.array(delta_x) fig = plt.figure() axes = fig.add_subplot(1, 1, 1)
  • 108. axes.loglog(delta_x, error, "ro", label="Approx. Derivative") order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x)) axes.loglog(delta_x, order_C(delta_x[0], error[0], 1.0) * delta_x**1.0, 'b--', label="1st Order") axes.loglog(delta_x, order_C(delta_x[0], error[0], 2.0) * delta_x**2.0, 'r--', label="2nd Order") axes.legend(loc=4) axes.set_title("Convergence of 2nd Order Differences") axes.set_xlabel("$Delta x$") axes.set_ylabel("$|f'(x) - hat{f}'(x)|$") plt.show() # ### Example 3: Alternative Derivations # # An alternative method for finding finite-difference formulas is by using Taylor series expansions about the point we want to approximate. The Taylor series about $x_n$ is # # $$f(x) = f(x_n) + (x - x_n) f'(x_n) + frac{(x - x_n)^2}{2!} f''(x_n) + frac{(x - x_n)^3}{3!} f'''(x_n) + mathcal{O}((x - x_n)^4)$$ # Say we want to derive the second order accurate, first derivative approximation that just did, this requires the values $(x_{n+1}, f(x_{n+1}))$ and $(x_{n-1}, f(x_{n-1}))$. We can express these values via our Taylor series approximation above as # # $$f(x_{n+1}) = f(x_n) + (x_{n+1} - x_n) f'(x_n) + frac{(x_{n+1} - x_n)^2}{2!} f''(x_n) + frac{(x_{n+1} - x_n)^3}{3!} f'''(x_n) + mathcal{O}((x_{n+1} - x_n)^4) $$
  • 109. # # $$ = f(x_n) + Delta x f'(x_n) + frac{Delta x^2}{2!} f''(x_n) + frac{Delta x^3}{3!} f'''(x_n) + mathcal{O}(Delta x^4) $$ # # and # # $$f(x_{n-1}) = f(x_n) + (x_{n-1} - x_n) f'(x_n) + frac{(x_{n-1} - x_n)^2}{2!} f''(x_n) + frac{(x_{n-1} - x_n)^3}{3!} f'''(x_n) + mathcal{O}((x_{n-1} - x_n)^4) $$ # # $$ = f(x_n) - Delta x f'(x_n) + frac{Delta x^2}{2!} f''(x_n) - frac{Delta x^3}{3!} f'''(x_n) + mathcal{O}(Delta x^4) $$ # Now to find out how to combine these into an expression for the derivative we assume our approximation looks like # # $$f'(x_n) + R(x_n) = A f(x_{n+1}) + B f(x_n) + C f(x_{n- 1})$$ # # where $R(x_n)$ is our error. Plugging in the Taylor series approximations we find # # $$f'(x_n) + R(x_n) = A left ( f(x_n) + Delta x f'(x_n) + frac{Delta x^2}{2!} f''(x_n) + frac{Delta x^3}{3!} f'''(x_n) + mathcal{O}(Delta x^4)right ) + B f(x_n) + C left ( f(x_n) - Delta x f'(x_n) + frac{Delta x^2}{2!} f''(x_n) - frac{Delta x^3}{3!} f'''(x_n) + mathcal{O}(Delta x^4) right )$$ # Since we want $R(x_n) = mathcal{O}(Delta x^2)$ we want all terms lower than this to dissapear except for those multiplying $f'(x_n)$ as those should sum to 1 to give us our approximation. Collecting the terms with common $Delta x^n$ we get a series of expressions for the coefficients $A$, $B$, and $C$ based on the fact we want an approximation to $f'(x_n)$. The $n=0$ terms collected are $A + B + C$ and are set to 0 as we want the $f(x_n)$ term to dissapear
  • 110. # # $$Delta x^0: ~~~~ A + B + C = 0$$ # # $$Delta x^1: ~~~~ A Delta x - C Delta x = 1 $$ # # $$Delta x^2: ~~~~ A frac{Delta x^2}{2} + C frac{Delta x^2}{2} = 0 $$ # This last equation $Rightarrow A = -C$, using this in the second equation gives $A = frac{1}{2 Delta x}$ and $C = - frac{1}{2 Delta x}$. The first equation then leads to $B = 0$. Putting this altogether then gives us our previous expression including an estimate for the error: # # $$f'(x_n) + R(x_n) = frac{f(x_{n+1}) - f(x_{n-1})}{2 Delta x} + frac{1}{2 Delta x} frac{Delta x^3}{3!} f'''(x_n) + mathcal{O}(Delta x^4) + frac{1}{2 Delta x} frac{Delta x^3}{3!} f'''(x_n) + mathcal{O}(Delta x^4) $$ # # $$R(x_n) = frac{Delta x^2}{3!} f'''(x_n) + mathcal{O}(Delta x^3) = mathcal{O}(Delta x^2)$$ # #### Another way... # # There is one more way to derive the second order accurate, first order finite-difference formula. Consider the two first order forward and backward finite-differences averaged together: # # $$frac{D_1^+(f(x_n)) + D_1^-(f(x_n))}{2} = frac{f(x_{n+1}) - f(x_n) + f(x_n) - f(x_{n-1})}{2 Delta x} = frac{f(x_{n+1}) - f(x_{n-1})}{2 Delta x}$$ # ### Example 4: Higher Order Derivatives # # Using our Taylor series approach lets derive the second order
  • 111. accurate second derivative formula. Again we will use the same points and the Taylor series centered at $x = x_n$ so we end up with the same expression as before: # # $$f''(x_n) + R(x_n) = A left ( f(x_n) + Delta x f'(x_n) + frac{Delta x^2}{2!} f''(x_n) + frac{Delta x^3}{3!} f'''(x_n) + frac{Delta x^4}{4!} f^{(4)}(x_n) + mathcal{O}(Delta x^5)right ) + B f(x_n) + C left ( f(x_n) - Delta x f'(x_n) + frac{Delta x^2}{2!} f''(x_n) - frac{Delta x^3}{3!} f'''(x_n) + frac{Delta x^4}{4!} f^{(4)}(x_n) + mathcal{O}(Delta x^5) right )$$ # # except this time we want to leave $f''(x_n)$ on the right hand side. Doing the same trick as before we have the following expressions: # # $$Delta x^0: ~~~~ A + B + C = 0$$ # # $$Delta x^1: ~~~~ A Delta x - C Delta x = 0$$ # # $$Delta x^2: ~~~~ A frac{Delta x^2}{2} + C frac{Delta x^2}{2} = 1$$ # The second equation implies $A = C$ which combined with the third implies # # $$A = C = frac{1}{Delta x^2}$$ # # Finally the first equation gives # # $$B = -frac{2}{Delta x^2}$$ # # leading to the final expression # # $$f''(x_n) + R(x_n) = frac{f(x_{n+1}) - 2 f(x_n) + f(x_{n- 1})}{Delta x^2} + frac{1}{Delta x^2} left(frac{Delta
  • 112. x^3}{3!} f'''(x_n) + frac{Delta x^4}{4!} f^{(4)}(x_n) - frac{Delta x^3}{3!} f'''(x_n) + frac{Delta x^4}{4!} f^{(4)}(x_n) right) + mathcal{O}(Delta x^5)$$ # # with # # $$R(x_n) = frac{Delta x^2}{12} f^{(4)}(x_n) + mathcal{O}(Delta x^3)$$ # In[ ]: f = lambda x: numpy.sin(x) f_dubl_prime = lambda x: -numpy.sin(x) # Use uniform discretization x = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, 1000) N = 10 x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N) delta_x = x_hat[1] - x_hat[0] # Compute derivative f_dubl_prime_hat = numpy.empty(x_hat.shape) f_dubl_prime_hat[1:-1] = (f(x_hat[2:]) -2.0 * f(x_hat[1:-1]) + f(x_hat[:-2])) / (delta_x**2) # Use first-order differences for points at edge of domain f_dubl_prime_hat[0] = (2.0 * f(x_hat[0]) - 5.0 * f(x_hat[1]) + 4.0 * f(x_hat[2]) - f(x_hat[3])) / delta_x**2 f_dubl_prime_hat[-1] = (2.0 * f(x_hat[-1]) - 5.0 * f(x_hat[-2]) + 4.0 * f(x_hat[-3]) - f(x_hat[-4])) / delta_x**2 fig = plt.figure() axes = fig.add_subplot(1, 1, 1) axes.plot(x, f_dubl_prime(x), 'k') axes.plot(x_hat, f_dubl_prime_hat, 'ro')
  • 113. axes.set_xlim((x[0], x[-1])) axes.set_ylim((-1.1, 1.1)) axes.set_xlabel('x') axes.set_ylabel("$f''(x)$") axes.set_title("Second Order Accurate Second Derivative of $f(x)$") plt.show() # In[ ]: f = lambda x: numpy.sin(x) f_dubl_prime = lambda x: -numpy.sin(x) # Compute the error as a function of delta_x delta_x = [] error = [] # for N in xrange(2, 101): for N in xrange(50, 1000, 50): x_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N) delta_x.append(x_hat[1] - x_hat[0]) # Compute derivative f_dubl_prime_hat = numpy.empty(x_hat.shape) f_dubl_prime_hat[1:-1] = (f(x_hat[2:]) -2.0 * f(x_hat[1:-1]) + f(x_hat[:-2])) / (delta_x[-1]**2) # Use second-order differences for points at edge of domain f_dubl_prime_hat[0] = (2.0 * f(x_hat[0]) - 5.0 * f(x_hat[1]) + 4.0 * f(x_hat[2]) - f(x_hat[3])) / delta_x[-1]**2 f_dubl_prime_hat[-1] = (2.0 * f(x_hat[-1]) - 5.0 * f(x_hat[- 2]) + 4.0 * f(x_hat[-3]) - f(x_hat[-4])) / delta_x[-1]**2 error.append(numpy.linalg.norm(numpy.abs(f_dubl_prime(x_hat
  • 114. ) - f_dubl_prime_hat), ord=numpy.infty)) error = numpy.array(error) delta_x = numpy.array(delta_x) fig = plt.figure() axes = fig.add_subplot(1, 1, 1) # axes.plot(delta_x, error) axes.loglog(delta_x, error, "ko", label="Approx. Derivative") order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x)) axes.loglog(delta_x, order_C(delta_x[2], error[2], 1.0) * delta_x**1.0, 'b--', label="1st Order") axes.loglog(delta_x, order_C(delta_x[2], error[2], 2.0) * delta_x**2.0, 'r--', label="2nd Order") axes.legend(loc=4) axes.set_title("Convergence of Second Order Second Derivative") axes.set_xlabel("$Delta x$") axes.set_ylabel("$|f'' - hat{f}''|$") plt.show() # In[ ]: # In[ ]:
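# The undetermined-coefficient computations done by hand in Examples 3 and 4 can also be set up as a small linear system and solved numerically. The sketch below is added for illustration; it assumes the three evenly spaced points $x_{n-1}, x_n, x_{n+1}$ used above and recovers the centered second-derivative weights $1/\Delta x^2$, $-2/\Delta x^2$, $1/\Delta x^2$.

# In[ ]:

delta_x = 0.1  # arbitrary spacing

# Rows enforce the Delta x^0, Delta x^1 and Delta x^2 conditions from Example 4:
#   A + B + C = 0,   A dx - C dx = 0,   A dx^2/2 + C dx^2/2 = 1
matrix = numpy.array([[1.0, 1.0, 1.0],
                      [delta_x, 0.0, -delta_x],
                      [delta_x**2 / 2.0, 0.0, delta_x**2 / 2.0]])
rhs = numpy.array([0.0, 0.0, 1.0])
A, B, C = numpy.linalg.solve(matrix, rhs)
print(A, B, C)
print(1.0 / delta_x**2, -2.0 / delta_x**2, 1.0 / delta_x**2)  # expected weights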
  • 115. error.py # coding: utf-8 # <table> # <tr align=left><td><img align=left src="./images/CC- BY.png"> # <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF- approved MIT license. (c) Kyle T. Mandli</td> # </table> # In[ ]: get_ipython().magic(u'matplotlib inline') import numpy import matplotlib.pyplot as plt # # Sources of Error # # Error can come from many sources when using applying a numerical method: # - Model/Data Error # - Truncation Error # - Floating Point Error # # **Goal:** Categorize and understand each type of error and explore some simple approaches to analyzing error. # ## Model and Data Error # # Errors in fundamental formulation # - Lotka-Volterra - fractional rabbits, no extinctions, etc. # - Data Error - Inaccuracy in measurement or uncertainties in parameters
  • 116. # # Unfortunatley we cannot control model and data error directly but we can use methods that may be more robust in the presense of these types of errors. # ## Truncation Error # # Errors arising from approximating a function with a simpler function (e.g. $sin(x) approx x$ for $|x| approx 0$. # ## Floating Point Error # # Errors arising from approximating real numbers with finite- precision numbers and arithmetic. # ## Basic Definitions # # Given a true value of a function $f$ and an approximate solution $hat{f}$ define: # # Absolute Error: $e = |f - hat{f}|$ # # Relative Error: $r = frac{e}{|f|} = frac{|f - hat{f}|}{|f|}$ # Decimal precision $p$ is defined as the minimum value that satisfies # # $$x = text{round}(10^{-n} cdot x) cdot 10^n$$ # # where # # $$n = text{floor}(log_{10} x) + 1 - p$$ # # Note that if we are asking the decimal precision of the approximation $hat{f}$ of $f$ then we need to use the absolute error to determine the precision. To find the decimal precision
  • 117. in this case look at the magnitude of the absolute error and deterimine the place of the first error. Combine this with the number of "correct" digits and you will get the decimal precision of the approximation. # ## Truncation Error and Taylor's Theorem # # **Taylor's Theorem:** Let $f(x) in C^{m+1}[a,b]$ and $x_0 in [a,b]$, then for all $x in (a,b)$ there exists a number $c = c(x)$ that lies between $x_0$ and $x$ such that # # $$ f(x) = T_N(x) + R_N(x)$$ # # where $T_N(x)$ is the Taylor polynomial approximation # # $$T_N(x) = sum^N_{n=0} frac{f^{(n)}(x_0)cdot(x- x_0)^n}{n!}$$ # # and $R_N(x)$ is the residual (the part of the series we left off) # # $$R_N(x) = frac{f^{(n+1)}(c) cdot (x - x_0)^{n+1}}{(n+1)!}$$ # Another way to think about these results involves replacing $x - x_0$ with $Delta x$. The primary idea here is that the residual $R_N(x)$ becomes smaller as $Delta x rightarrow 0$. # # $$T_N(x) = sum^N_{n=0} frac{f^{(n)}(x_0)cdotDelta x^n}{n!}$$ # # and $R_N(x)$ is the residual (the part of the series we left off) # # $$R_N(x) = frac{f^{(n+1)}(c) cdot Delta x^{n+1}}{(n+1)!} leq M Delta x^{n+1} = O(Delta x^{n+1})$$
  • 118. # #### Example 1 # # $f(x) = e^x$ with $x_0 = 0$ # # Using this we can find expressions for the relative and absolute error as a function of $x$ assuming $N=2$. # $$f'(x) = e^x, ~~~ f''(x) = e^x ~~~ f^{(n)}(x) = e^x$$ # # $$T_N(x) = sum^N_{n=0} e^0 frac{x^n}{n!} ~~~~Rightarrow ~~~~ T_2(x) = 1 + x + frac{x^2}{2}$$ # # $$R_N(x) = e^c frac{x^{n+1}}{(n+1)!} = e^c cdot frac{x^3}{6} ~~~~ Rightarrow ~~~~ R_2(x) leq frac{e^1}{6} approx 0.5$$ # # $$e^1 = 2.718ldots$$ # # $$T_2(1) = 2.5 Rightarrow e approx 0.2 ~~ r approx 0.1$$ # We can also use the package sympy which has the ability to calculate Taylor polynomials built-in! # In[ ]: import sympy x = sympy.symbols('x') f = sympy.symbols('f', cls=sympy.Function) f = sympy.exp(x) f.series(x0=0, n=6) # Lets plot this numerically for a section of $x$.
  • 119. # In[ ]: x = numpy.linspace(-1, 1, 100) T_N = 1.0 + x + x**2 / 2.0 R_N = numpy.exp(1) * x**3 / 6.0 plt.plot(x, T_N, 'r', x, numpy.exp(x), 'k', x, R_N, 'b') plt.xlabel("x") plt.ylabel("$f(x)$, $T_N(x)$, $R_N(x)$") plt.legend(["$T_N(x)$", "$f(x)$", "$R_N(x)$"], loc=2) plt.show() # #### Example 2 # # $f(x) = frac{1}{x} ~~~~~~ x_0 = 1$, approximate with $hat{f}(x) = T_2(x)$ # $$f'(x) = -frac{1}{x^2} ~~~~~~~ f''(x) = frac{2}{x^3} ~~~~~~~ f^{(n)}(x) = frac{(-1)^n n!}{x^{n+1}}$$ # # $$T_N(x) = sum^N_{n=0} (-1)^n (x-1)^n ~~~~ Rightarrow ~~~~ T_2(x) = 1 - (x - 1) + (x - 1)^2$$ # # $$R_N(x) = frac{(-1)^{n+1}(x - 1)^{n+1}}{c^{n+2}} ~~~~ Rightarrow ~~~~ R_2(x) = frac{-(x - 1)^{3}}{c^{4}}$$ # In[ ]: x = numpy.linspace(0.8, 2, 100) T_N = 1.0 - (x-1) + (x-1)**2 R_N = -(x-1.0)**3 / (1.1**4) plt.plot(x, T_N, 'r', x, 1.0 / x, 'k', x, R_N, 'b') plt.xlabel("x") plt.ylabel("$f(x)$, $T_N(x)$, $R_N(x)$")
  • 120. plt.legend(["$T_N(x)$", "$f(x)$", "$R_N(x)$"], loc=8) plt.show() # ### Symbols and Definitions # # Big-O notation: $f(x) = text{O}(g(x))$ as $x rightarrow a$ if and only if $|f(x)| leq M |g(x)|$ as $|x - a| < delta$ for $M$ and $a$ positive. # # In practice we use Big-O notation to say something about how the terms we may have left out of a series might behave. We saw an example earlier of this with the Taylor's series approximations: # #### Example: # $f(x) = sin x$ with $x_0 = 0$ then # # $$T_N(x) = sum^N_{n=0} (-1)^{n} frac{x^{2n+1}}{(2n+1)!}$$ # # We can actually write $f(x)$ then as # # $$f(x) = x - frac{x^3}{6} + frac{x^5}{120} + O(x^7)$$ # # This becomes more useful when we look at this as we did before with $Delta x$: # # $$f(x) = Delta x - frac{Delta x^3}{6} + frac{Delta x^5}{120} + O(Delta x^7)$$ # **We can also develop rules for error propagation based on Big-O notation:** # # In general, there are two theorems that do not need proof and
  • 121. hold when the value of x is large: # # Let # $$f(x) = p(x) + O(x^n)$$ # $$g(x) = q(x) + O(x^m)$$ # $$k = max(n, m)$$ then # $$f+g = p + q + O(x^k)$$ # $$f cdot g = p cdot q + O(x^{ncdot m})$$ # On the other hand, if we are interested in small values of x, say Δx, the above expressions can be modified as follows: # # $$f(Delta x) = p(Delta x) + O(Delta x^n)$$ # $$g(Delta x) = q(Delta x) + O(Delta x^m)$$ # $$r = min(n, m)$$ then # # $$f+g = p + q + O(Delta x^r)$$ # # $$f cdot g = p cdot q + p cdot O(Delta x^m) + q cdot O(Delta x^n) + O(Delta x^{n+m}) = p cdot q + O(Delta x^r)$$ # **Note 1:** In this case we suppose that at least the polynomial with $k = max(n, m)$ has the following form: # $$p(Delta x) = 1 + p_1 Delta x + p_2 Delta x^2 + ...$$ # or $$q(Delta x) = 1 + q_1 Delta x + q_2 Delta x^2 + ...$$ # so that there is an O(1) term that guarantees the existence of $O(Delta x^r)$ in the final product. # To get a sense of why we care most about the power on $Delta x$ when considering convergence the following figure shows how different powers on the convergence rate can effect how quickly we converge to our solution. Note that here we are plotting the same data two different ways. Plotting the error as a function of $Delta x$ is a common way to show that a numerical method is doing what we expect and exhibits the
  • 122. correct convergence behavior. Since errors can get small quickly it is very common to plot these sorts of plots on a log- log scale to easily visualize the results. Note that if a method was truly of the order $n$ that they will be a linear function in log-log space with slope $n$. # In[ ]: dx = numpy.linspace(1.0, 1e-4, 100) fig = plt.figure() fig.set_figwidth(fig.get_figwidth() * 2.0) axes = [] axes.append(fig.add_subplot(1, 2, 1)) axes.append(fig.add_subplot(1, 2, 2)) for n in xrange(1, 5): axes[0].plot(dx, dx**n, label="$Delta x^%s$" % n) axes[1].loglog(dx, dx**n, label="$Delta x^%s$" % n) axes[0].legend(loc=2) axes[1].set_xticks([10.0**(-n) for n in xrange(5)]) axes[1].set_yticks([10.0**(-n) for n in xrange(16)]) axes[1].legend(loc=4) for n in xrange(2): axes[n].set_title("Growth of Error vs. $Delta x^n$") axes[n].set_xlabel("$Delta x$") axes[n].set_ylabel("Estimated Error") axes[n].set_title("Growth of different") axes[n].set_xlabel("$Delta x$") axes[n].set_ylabel("Estimated Error") plt.show() # ## Horner's Method for Evaluating Polynomials
  • 123. # # Given # # $$P_N(x) = a_0 + a_1 x + a_2 x^2 + ldots + a_N x^N$$ # # or # # $$P_N(x) = p_1 x^N + p_2 x^{N-1} + p_3 x^{N-2} + ldots + p_{N+1}$$ # # want to find best way to evaluate $P_N(x)$. # First consider two ways to write $P_3$: # # $$ P_3(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4$$ # # and using nested multiplication: # # $$ P_3(x) = ((p_1 x + p_2) x + p_3) x + p_4$$ # Consider how many operations it takes for each... # # $$ P_3(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4$$ # # $$P_3(x) = overbrace{p_1 cdot x cdot x cdot x}^3 + overbrace{p_2 cdot x cdot x}^2 + overbrace{p_3 cdot x}^1 + p_4$$ # Adding up all the operations we can in general think of this as a pyramid # # ![Original Count](./images/horners_method_big_count.png) # # We can estimate this way that the algorithm written this way will take approximately $O(N^2 / 2)$ operations to complete.
  • 124. # Looking at our other means of evaluation: # # $$ P_3(x) = ((p_1 x + p_2) x + p_3) x + p_4$$ # # Here we find that the method is $O(N)$ (the 2 is usually ignored in these cases). The important thing is that the first evaluation is $O(N^2)$ and the second $O(N)$! # ### Algorithm # # Fill in the function and implement Horner's method: # ```python # def eval_poly(p, x): # """Evaluates polynomial given coefficients p at x # # Function to evaluate a polynomial in order N operations. The polynomial is defined as # # P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n] # # The value x should be a float. # """ # pass # ``` # ```python # def eval_poly(p, x): # """Evaluates polynomial given coefficients p at x # # Function to evaluate a polynomial in order N operations. The polynomial is defined as # # P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n] # # The value x should be a float. # """
  • 125. # # y = p[0] # for coefficient in p[1:]: # y = y * x + coefficient # # return y # ``` # or an alternative version that allows `x` to be a vector of values: # ```python # def eval_poly(p, x): # """Evaluates polynomial given coefficients p at x # # Function to evaluate a polynomial in order N operations. The polynomial is defined as # # P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n] # # The value x can by a NumPy ndarray. # """ # # y = numpy.ones(x.shape) * p[0] # for coefficient in p[1:]: # y = y * x + coefficient # # return y # ``` # This version calculates each `y` value simultaneously making for much faster code! # In[ ]: def eval_poly(p, x): """Evaluates polynomial given coefficients p at x
  • 126. Function to evaluate a polynomial in order N operations. The polynomial is defined as P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n] The value x can by a NumPy ndarray. """ y = numpy.ones(x.shape) * p[0] for coefficient in p[1:]: y = y * x + coefficient return y p = [1, -3, 10, 4, 5, 5] x = numpy.linspace(-10, 10, 100) plt.plot(x, eval_poly(p, x)) plt.show() # ## Truncation Error vs. Floating Point Error # # Truncation error: Errors arising from approximation of a function, truncation of a series... # # $$sin x approx x - frac{x^3}{3!} + frac{x^5}{5!} + O(x^7)$$ # # Floating-point Error: Errors arising from approximating real numbers with finite-precision numbers # # $$pi approx 3.14$$ # # or $frac{1}{3} approx 0.333333333$ in decimal, results form finitely number of registers to represent each number. #
  • 127. # ## Floating Point Systems # # Numbers in floating point systems are represented as a series of bits that represent different pieces of a number. In *normalized floating point systems* there are some standar conventions for what these bits are used for. In general the numbers are stored by breaking them down into the form # # $$hat{f} = pm d_1 . d_2 d_3 d_4 ldots d_p times beta^E$$ # $$hat{f} = pm d_1 . d_2 d_3 d_4 ldots d_p times beta^E$$ # # where # 1. $pm$ is a single bit and of course represents the sign of the number # 2. $d_1 . d_2 d_3 d_4 ldots d_p$ is called the *mantissa*. Note that technically the decimal could be moved but generally, using scientific notation, the decimal can always be placed at this location. The digits $d_2 d_3 d_4 ldots d_p$ are called the *fraction* with $p$ digits of precision. Normalized systems specifically put the decimal point in the front like we have and assume $d_1 neq 0$ unless the number is exactly $0$. # 3. $beta$ is the *base*. For binary $beta = 2$, for decimal $beta = 10$, etc. # 4. $E$ is the *exponent*, an integer in the range $[E_{min}, E_{max}]$ # The important points on any floating point system is that # 1. There exist a discrete and finite set of representable numbers # 2. These representable numbers are not evenly distributed on the real line # 3. Airthmetic in floating point systems yield different results from infinite precision arithmetic (i.e. "real" math)
  • 128. # ### Example: Toy System # Consider the toy 2-digit precision decimal system (normalized) # $$f = pm d_1 . d_2 times 10^E$$ # with $E in [-2, 0]$. # # #### Number and distribution of numbers # 1. How many numbers can we represent with this system? # # 2. What is the distribution on the real line? # # 3. What is the underflow and overflow limits? # # How many numbers can we represent with this system? # # $f = pm d_1 . d_2 times 10^E$ with $E in [-2, 0]$. # # $$ 2 times 9 times 10 times 3 + 1 = 541$$ # What is the distribution on the real line? # In[ ]: d_1_values = [1, 2, 3, 4, 5, 6, 7, 8, 9] d_2_values = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] E_values = [0, -1, -2] fig = plt.figure(figsize=(10.0, 1.0)) axes = fig.add_subplot(1, 1, 1) for E in E_values: for d1 in d_1_values: for d2 in d_2_values: axes.plot( (d1 + d2 * 0.1) * 10**E, 0.0, 'r+', markersize=20)
  • 129. axes.plot(-(d1 + d2 * 0.1) * 10**E, 0.0, 'r+', markersize=20) axes.plot(0.0, 0.0, '+', markersize=20) axes.plot([-10.0, 10.0], [0.0, 0.0], 'k') axes.set_title("Distribution of Values") axes.set_yticks([]) axes.set_xlabel("x") axes.set_ylabel("") axes.set_xlim([-0.1, 0.1]) plt.show() # What is the underflow and overflow limits? # # Smallest number that can be represented is the underflow: $1.0 times 10^{-2} = 0.01$ # Largest number that can be represented is the overflow: $9.9 times 10^0 = 9.9$ # ## Properties of Floating Point Systems # All floating-point systems are characterized by several important numbers # - Smalled normalized number (underflow if below) # - Largest normalized number (overflow if above) # - Zero # - Machine $epsilon$ or $epsilon_{text{machine}}$ # - `inf` and `nan`, infinity and **N**ot **a** **N**umber respectively # - Subnormal numbers # ## Binary Systems # Consider the 2-digit precision base 2 system: # # $$f=pm d_1 . d_2 times 2^E ~~~~ text{with} ~~~~ E in [-
  • 130. 1, 1]$$ # # #### Number and distribution of numbers # 1. How many numbers can we represent with this system? # # 2. What is the distribution on the real line? # # 3. What is the underflow and overflow limits? # # How many numbers can we represent with this system? # # $$f=pm d_1 . d_2 times 2^E ~~~~ text{with} ~~~~ E in [- 1, 1]$$ # # $$ 2 times 1 times 2 times 3 + 1 = 13$$ # What is the distribution on the real line? # In[ ]: d_1_values = [1] d_2_values = [0, 1] E_values = [1, 0, -1] fig = plt.figure(figsize=(10.0, 1.0)) axes = fig.add_subplot(1, 1, 1) for E in E_values: for d1 in d_1_values: for d2 in d_2_values: axes.plot( (d1 + d2 * 0.5) * 2**E, 0.0, 'r+', markersize=20) axes.plot(-(d1 + d2 * 0.5) * 2**E, 0.0, 'r+', markersize=20)