# Numerical methods generating polynomial

MATLAB codes.


EC 313: Numerical Methods — MATLAB Assignment 1
Mayank Awasthi, M.Tech, IIT Roorkee (Communication Systems)

Ques1: A MATLAB routine that finds the coefficients of the Lagrange polynomial:

```matlab
% Implementing the Lagrange interpolating formula
function l = lagranp(x, y)
N = length(x) - 1;                    % degree of the polynomial
l = 0;
for m = 1:N+1
    P = 1;
    for k = 1:N+1
        % Multiplying two polynomials corresponds to convolving their coefficients.
        if k ~= m
            % "linconv" is our own linear-convolution routine (below).
            P = linconv(P, [1 -x(k)]) / (x(m) - x(k));
        end
    end
    l = l + y(m)*P;                   % Lagrange polynomial coefficients
end
```

MATLAB routine for linear convolution:

```matlab
% Linear convolution of two sequences from the defining formula
% s(n) = sum over k of x(k)*h(n-k+1)
function s = linconv(x, h)
N = length(x);
M = length(h);
s = zeros(1, N+M-1);
for n = 1:N+M-1
    for k = 1:N                       % sum over every sample of x
        a = n - k;                    % h(a+1) pairs with x(k)
        if a >= 0 && a < M
            s(n) = s(n) + x(k)*h(a+1);
        end
    end
end
```
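As a cross-check of the routines above, here is a minimal Python sketch of the same idea — building the Lagrange coefficients by convolving the linear factors. The function names `linconv` and `lagranp` merely mirror the MATLAB ones; this is an illustrative translation, not part of the assignment.

```python
def linconv(x, h):
    """Linear convolution of two coefficient sequences (same idea as the MATLAB linconv)."""
    s = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(s)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                s[n] += x[k] * h[n - k]
    return s

def lagranp(x, y):
    """Coefficients (highest power first) of the Lagrange interpolating polynomial."""
    l = [0.0] * len(x)
    for m in range(len(x)):
        P = [1.0]
        for k in range(len(x)):
            if k != m:
                # Multiply in the factor (t - x[k]) / (x[m] - x[k]).
                P = [c / (x[m] - x[k]) for c in linconv(P, [1.0, -x[k]])]
        l = [a + y[m] * b for a, b in zip(l, P)]
    return l

print([round(c, 4) for c in lagranp([1, 3, 4], [1, 27, 64])])  # → [8.0, -19.0, 12.0]
```

The printed coefficients agree with the MATLAB run below.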
A MATLAB routine that finds the coefficients of the Newton polynomial:

```matlab
% Implementing the Newton divided-difference formula
function n = newtonp(x, y)
N = length(x) - 1;
f = zeros(N+1, N+1);
f(1:N+1, 1) = y;
for k = 2:N+1
    for m = 1:N+2-k                   % divided differences
        f(m,k) = (f(m+1,k-1) - f(m,k-1)) / (x(m+k-1) - x(m));
    end
end
a = f(1,:);                           % a_k = f(x0, x1, ..., xk)
n = a(N+1);
for k = N:-1:1
    % Nested multiplication form for better computational efficiency:
    % p(x) = ( ... (a_N (x - x_{N-1}) + a_{N-1}) ... )(x - x_0) + a_0
    n = [n a(k)] - [0 n*x(k)];
end
```

Calculating the polynomial for the given data using both routines:

```
>> x = [1 3 4];
>> y = [1 27 64];
>> coefflag = lagranp(x, y)
coefflag =
    8.0000  -19.0000   12.0000
>> coeffnew = newtonp(x, y)
coeffnew =
     8   -19    12
```

So the required polynomial is 8x^2 - 19x + 12.
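The same divided-difference table plus nested multiplication can be sketched in a few lines of Python; `newton_coeffs` is my own name for this illustrative translation of `newtonp`, and the final loop is the direct analogue of the MATLAB update `n = [n a(k)] - [0 n*x(k)]`.

```python
def newton_coeffs(x, y):
    """Newton divided-difference coefficients, converted to power form
    (highest power first) by nested multiplication, as in newtonp."""
    n = len(x)
    col = list(map(float, y))
    a = [col[0]]                      # a_k = f[x0, ..., xk]
    for k in range(1, n):
        col = [(col[m+1] - col[m]) / (x[m+k] - x[m]) for m in range(n - k)]
        a.append(col[0])
    coeffs = [a[-1]]
    for k in range(n - 2, -1, -1):
        # coeffs <- coeffs*(x - x_k) + a_k, i.e. [coeffs a_k] - [0 coeffs*x_k]
        coeffs = [p - q for p, q in
                  zip(coeffs + [a[k]], [0.0] + [c * x[k] for c in coeffs])]
    return coeffs

print(newton_coeffs([1, 3, 4], [1, 27, 64]))  # → [8.0, -19.0, 12.0]
```

Both routes recover 8x^2 - 19x + 12, as expected.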
Ques2:

a) Straight line: deg(polynomial) = 1. Using the Newton (or Lagrange) routine we get the coefficients of the polynomial:

```
>> x = [1970 1990];
>> y = [3707475887 5281653820];
>> nc = newtonp(x, y)
nc =
   1.0e+011 *
    0.0008   -1.5135
>> v0 = polyval(nc, 1980)     % 1980 population using the straight line
v0 =
   4.4946e+009
```

b) Parabola: deg(polynomial) = 2

```
>> x = [1960 1970 1990];
>> y = [3039585530 3707475887 5281653820];
>> nc = newtonp(x, y)
nc =
   1.0e+012 *
    0.0000   -0.0015    1.4063
>> v1 = polyval(nc, 1980)     % 1980 population using the parabola
v1 =
   4.4548e+009
```

c) Cubic curve: deg(polynomial) = 3

```
>> x = [1960 1970 1990 2000];
>> y = [3039585530 3707475887 5281653820 6079603571];
>> nc = newtonp(x, y)
nc =
   1.0e+013 *
   -0.0000    0.0000   -0.0107    7.0777
>> v3 = polyval(nc, 1980)     % 1980 population using the cubic curve
v3 =
   4.4729e+009
```
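The straight-line estimate in part (a) is easy to sanity-check without any interpolation routine, since a degree-1 interpolant is just the chord through the two data points. A quick Python check (variable names are mine):

```python
# Straight-line (degree-1) estimate for 1980 from the 1970 and 1990 data points.
x, y = [1970, 1990], [3707475887, 5281653820]
slope = (y[1] - y[0]) / (x[1] - x[0])
p1980 = y[0] + slope * (1980 - x[0])
print(f"{p1980:.4e}")  # 4.4946e+09, matching the polyval value above
```

Because 1980 is the midpoint of 1970 and 1990, this is simply the average of the two populations.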
Ques4: A MATLAB routine that computes the divided-difference coefficients:

```matlab
% MATLAB function for computing divided-difference coefficients
function ddc = dividediff(x, y)
N = length(x) - 1;
f = zeros(N+1, N+1);
f(1:N+1, 1) = y;
for k = 2:N+1
    for m = 1:N+2-k                   % divided differences
        f(m,k) = (f(m+1,k-1) - f(m,k-1)) / (x(m+k-1) - x(m));
    end
end
ddc = f(1,:);
end
```

Calculating the divided-difference coefficients:

```
>> x = [1 2 4 5 7 9];
>> y = [2 3 8 6 14 6];
>> divide_diff_coeff = dividediff(x, y)
divide_diff_coeff =
    2.0000    1.0000    0.5000   -0.5000    0.2000   -0.0518
```

Now calculating the polynomial with the Newton routine from Ques1:

```
>> nc = newtonp(x, y)
nc =
   -0.0518    1.1839   -9.7875   35.6018  -53.4464   28.5000
```

The required polynomial is

-0.0518x^5 + 1.1839x^4 - 9.7875x^3 + 35.6018x^2 - 53.4464x + 28.5000

Evaluating v0 = p(1.5), v1 = p(3.7), v2 = p(6.6), v3 = p(9.5):

```
>> v0 = polyval(nc, 1.5)
v0 =
    1.0020
>> v1 = polyval(nc, 3.7)
v1 =
    8.3474
>> v2 = polyval(nc, 6.6)
v2 =
   10.6441
>> v3 = polyval(nc, 9.5)
v3 =
  -21.6454
```

Plotting the graph:

```
>> m = 0:10;
>> n = polyval(nc, m)
n =
   28.5000    2.0000    3.0000    7.6286    8.0000    6.0000    7.0714   14.0000   20.7000    6.0000  -72.5714
>> plot(m, n)
```
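The leading divided differences above can be cross-checked with a short Python sketch that builds each column of the table from the previous one (`divided_diffs` is my own name for this illustration, not the assignment's routine):

```python
def divided_diffs(x, y):
    """Leading divided-difference coefficients f[x0], f[x0,x1], ..., f[x0..xn]."""
    col = list(map(float, y))
    coeffs = [col[0]]
    for k in range(1, len(x)):
        # k-th column of the divided-difference table
        col = [(col[m+1] - col[m]) / (x[m+k] - x[m]) for m in range(len(col) - 1)]
        coeffs.append(col[0])
    return coeffs

ddc = divided_diffs([1, 2, 4, 5, 7, 9], [2, 3, 8, 6, 14, 6])
print([round(c, 4) for c in ddc])  # → [2.0, 1.0, 0.5, -0.5, 0.2, -0.0518]
```

The last entry, -0.0518, is the leading coefficient of the interpolating quintic, matching `nc(1)` above.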
Ques5:

Bisection method:

```matlab
% Bisection method for solving nonlinear equations
function x = bisct(f, a, b, z, MaxIter)
fa = feval(f, a);
fb = feval(f, b);
if fa*fb > 0
    error('We must have f(a)f(b) < 0!');
end
for k = 1:MaxIter
    xx(k) = (a + b)/2;
    fx = feval(f, xx(k));
    err = (b - a)/2;
    x = xx(k);
    if abs(err) < z
        break;
    elseif fx*fa > 0
        a = xx(k); fa = fx;
    else
        b = xx(k);
    end
end
fprintf('The interval in which the root lies is %d %d\n', a, b);
fprintf('Iterations required: %d\n', k);
fprintf('The approx. error is %d\n', err);
end
```

Regula falsi method:

```matlab
% Implementing the regula falsi algorithm
function x = rfalsi(f, a, b, z, MaxIter)
fa = feval(f, a);
fb = feval(f, b);
if fa*fb > 0
    error('We must have f(a)f(b) < 0!');
end
for k = 1:MaxIter
    xx(k) = (a*fb - b*fa)/(fb - fa);
    fx = feval(f, xx(k));
    err = max(abs(xx(k) - a), abs(b - xx(k)));
    if err < z
        break;
    elseif fx*fa > 0
        a = xx(k); fa = fx;
    else
        b = xx(k); fb = fx;
    end
end
x = xx(k);
fprintf('The interval in which the root lies is %d %d\n', a, b);
fprintf('Iterations required: %d\n', k);
fprintf('The approx. error is %d\n', err);
end
```
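The bisection loop above translates almost line for line into Python. This sketch (my own `bisect_root`, mirroring `bisct`) stops when the half-interval width drops below the tolerance, so on (0, 1) with z = 1e-6 it should take about 20 halvings, as in the MATLAB run below:

```python
import math

def bisect_root(f, a, b, tol=1e-6, max_iter=50):
    """Bisection sketch mirroring the MATLAB bisct routine above."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("We must have f(a)f(b) < 0!")
    for k in range(1, max_iter + 1):
        x = (a + b) / 2               # midpoint of the current bracket
        if (b - a) / 2 < tol:         # half-width below tolerance: done
            break
        if f(x) * fa > 0:
            a, fa = x, f(x)
        else:
            b = x
    return x, k

root, iters = bisect_root(lambda t: math.exp(-t) - math.sin(t), 0, 1)
print(round(root, 6), iters)  # root ≈ 0.588532, about 20 iterations
```

Note the linear convergence: each iteration buys exactly one bit of accuracy.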
Modified regula falsi method:

```matlab
% Implementing the modified regula falsi algorithm
function x = mrfalsi(f, a, b, z, MaxIter)
fa = feval(f, a);
fb = feval(f, b);
F = fa; G = fb;
if fa*fb > 0
    error('We must have f(a)f(b) < 0!');
end
xx(1) = a/2;
for k = 1:MaxIter
    xx(k+1) = (a*G - b*F)/(G - F);
    fx = feval(f, xx(k+1));
    err = max(abs(xx(k) - a), abs(b - xx(k)));
    if err < z
        break;
    end
    if fx*fa > 0
        a = xx(k+1); fa = fx;
    elseif feval(f, xx(k))*feval(f, xx(k+1)) > 0
        F = F/2;                      % halve the stale endpoint value
    end
    if fx*fa < 0
        b = xx(k+1); fb = fx;
    elseif feval(f, xx(k))*feval(f, xx(k+1)) > 0
        G = G/2;
    end
end
x = xx(k+1);
fprintf('The interval in which the root lies is %d %d\n', a, b);
fprintf('Iterations required: %d\n', k);
fprintf('The approx. error is %d\n', err);
end
```
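A widely used variant of this endpoint-halving idea is the Illinois method: whenever the same endpoint of the bracket is retained twice in a row, the stored function value at that endpoint is halved, which prevents the one-sided stagnation that plain regula falsi shows in the runs below. A minimal Python sketch (my own illustration, not the routine above), tried on the Ques5(c) function x^3 - x - 2:

```python
def illinois(f, a, b, tol=1e-6, max_iter=100):
    """Illinois method: regula falsi with halving of a stale endpoint value."""
    fa, fb = f(a), f(b)
    side = 0                          # which endpoint was kept last time
    for k in range(1, max_iter + 1):
        x = (a * fb - b * fa) / (fb - fa)   # false-position point
        fx = f(x)
        if abs(fx) < tol:
            break
        if fx * fa < 0:               # root lies in [a, x]
            b, fb = x, fx
            if side == -1:
                fa /= 2               # a kept twice in a row: halve its stored value
            side = -1
        else:                         # root lies in [x, b]
            a, fa = x, fx
            if side == 1:
                fb /= 2
            side = 1
    return x, k

root, iters = illinois(lambda t: t**3 - t - 2, -1, 2)
print(round(root, 6), iters)  # converges to ~1.521380 in far fewer than 50 iterations
```

Plain regula falsi on the same problem pins one endpoint at 2 and exhausts its 50 iterations (see the Ques5(c) output), while the Illinois variant keeps shrinking both sides of the bracket.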
Newton-Raphson method:

```matlab
% Implementing the Newton-Raphson method
function x = newtonrap(f, df, x0, z, MaxIter)
xx(1) = x0;
fx = feval(f, x0);
for k = 1:MaxIter
    if ~isnumeric(df)
        dfdx = feval(df, xx(k));      % evaluate the supplied derivative
    end
    dx = -fx/dfdx;
    xx(k+1) = xx(k) + dx;
    fx = feval(f, xx(k+1));
    if abs(dx) < z
        break;
    end
end
x = xx(k+1);
fprintf('The approx. root is %d\n\n', x);
fprintf('Iterations required: %d\n', k);
end
```

Now calculating the roots of the given functions.

a) exp(-x) - sin(x), assuming the interval (0, 1) and MaxIter = 50:

```
>> f = inline('exp(-x)-sin(x)');      % MATLAB command to define the function
>> df = inline('-exp(-x)-cos(x)');    % derivative of f
```

1) Bisection method:

```
>> bisct(f, 0, 1, 1e-6, 50)
The interval in which the root lies is 5.885315e-001 5.885334e-001
Iterations required: 20
The approx. error is 9.536743e-007
```

2) Regula falsi:

```
>> rfalsi(f, 0, 1, 1e-6, 50)
The interval in which the root lies is 0 5.885327e-001
Iterations required: 50
The approx. error is 5.885327e-001
```

3) Modified regula falsi:

```
>> mrfalsi(f, 0, 1.0, 1e-6, 50)
The interval in which the root lies is 5.885315e-001 5.885342e-001
Iterations required: 23
The approx. error is 3.976230e-006
```

4) Newton-Raphson method:

```
>> newtonrap(f, df, 0.2, 1e-6, 50)
The approx. root is 5.885327e-001
Iterations required: 4
```
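The Newton-Raphson update x_{k+1} = x_k - f(x_k)/f'(x_k) is short enough to verify directly in Python. This sketch (my own `newton_raphson`, mirroring `newtonrap`) reproduces the part (a) result: quadratic convergence reaches the root in a handful of steps, versus 20 for bisection.

```python
import math

def newton_raphson(f, df, x0, tol=1e-6, max_iter=50):
    """Newton-Raphson sketch mirroring the MATLAB newtonrap routine above."""
    x = x0
    for k in range(1, max_iter + 1):
        dx = -f(x) / df(x)            # Newton step
        x += dx
        if abs(dx) < tol:             # step size below tolerance: converged
            break
    return x, k

root, iters = newton_raphson(lambda t: math.exp(-t) - math.sin(t),
                             lambda t: -math.exp(-t) - math.cos(t), 0.2)
print(round(root, 7), iters)  # root ≈ 0.5885327 in a handful of iterations
```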
b) x - exp(-x^2), assuming the interval (0, 1) and MaxIter = 50:

```
>> f = inline('x-exp(-(x^2))');       % MATLAB command to define the function
>> df = inline('1+2*x*exp(-(x^2))');  % derivative of f
```

1) Bisection method:

```
>> bisct(f, 0, 1, 1e-6, 50)
The interval in which the root lies is 6.529179e-001 6.529198e-001
Iterations required: 20
The approx. error is 9.536743e-007
```

2) Regula falsi:

```
>> rfalsi(f, 0, 1.0, 1e-6, 50)
The interval in which the root lies is 6.529186e-001 6.534421e-001
Iterations required: 50
The approx. error is 5.234929e-004
```

3) Modified regula falsi:

```
>> mrfalsi(f, 0, 1.0, 1e-6, 50)
The interval in which the root lies is 6.529059e-001 6.529198e-001
Iterations required: 25
The approx. error is 2.490377e-005
```

4) Newton-Raphson method:

```
>> newtonrap(f, df, 0.5, 1e-6, 50)
The approx. root is 6.529186e-001
Iterations required: 4
```

c) x^3 - x - 2, assuming the interval (-1, 2) and MaxIter = 50:

```
>> f = inline('(x^3)-x-2');           % MATLAB command to define the function
>> df = inline('3*(x^2)-1');          % derivative of f
```

1) Bisection method:

```
>> bisct(f, -1, 2, 1e-6, 50)
The interval in which the root lies is 1.521379e+000 1.521382e+000
Iterations required: 22
The approx. error is 2.861023e-006
```

2) Regula falsi:

```
>> rfalsi(f, -1, 2, 1e-6, 50)
The interval in which the root lies is 1.521380e+000 2
Iterations required: 50
The approx. error is 4.786203e-001
```
3) Modified regula falsi:

```
>> mrfalsi(f, -1, 2, 1e-6, 50)
The interval in which the root lies is 1.521356e+000 1.521386e+000
Iterations required: 28
The approx. error is 6.021365e-005
```

4) Newton-Raphson method:

```
>> newtonrap(f, df, 1, 1e-6, 50)
The approx. root is 1.521380e+000
Iterations required: 6
```

Ques3: To find the smallest positive guess for which Newton's method diverges, we run the Newton-Raphson routine from a series of increasing guesses; the first guess at which the iterates start diverging is the one we want. Assuming a tolerance z = 1e-6:

```
>> f = inline('atan(x)')
f =
     Inline function:
     f(x) = atan(x)
>> df = inline('1/(1+x^2)')
df =
     Inline function:
     df(x) = 1/(1+x^2)
>> newtonrap(f, df, 1, 1e-6, 1000)
The approx. root is 0
Iterations required: 5
>> newtonrap(f, df, 1.1, 1e-6, 1000)
The approx. root is -1.803121e-019
Iterations required: 5
```
```
>> newtonrap(f, df, 1.2, 1e-6, 1000)
The approx. root is 0
Iterations required: 6
>> newtonrap(f, df, 1.3, 1e-6, 1000)
The approx. root is 0
Iterations required: 7
>> newtonrap(f, df, 1.38, 1e-6, 1000)
The approx. root is 0
Iterations required: 9
>> newtonrap(f, df, 1.39, 1e-6, 1000)
The approx. root is 0
Iterations required: 11
>> newtonrap(f, df, 1.4, 1e-6, 1000)
Warning: Divide by zero.
> In newtonrap at 8
Warning: Divide by zero.
> In newtonrap at 8
The approx. root is NaN
Iterations required: 1000
```

From the data above: the iteration count grows as the guess approaches 1.4, and at 1.4 the run hits a divide-by-zero warning and returns NaN, so from 1.4 onwards Newton's method diverges for atan(x). The smallest positive guess (to two digits) for which it diverges is therefore 1.4.
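The boundary between convergence and divergence can be seen from a single Newton step. For f(x) = atan(x) the step is x_{k+1} = x - atan(x)(1 + x^2), and the iterates diverge exactly when |x_{k+1}| > |x_k|, i.e. when the guess exceeds the critical value c satisfying atan(c) = 2c/(1 + c^2), which lies near c ≈ 1.3917 — consistent with the 0.01-step search above landing on 1.4. A quick Python check:

```python
import math

def newton_step(x):
    """One Newton step for f(x) = atan(x): x - atan(x)*(1 + x^2)."""
    return x - math.atan(x) * (1 + x * x)

# Iterates shrink in magnitude for x0 = 1.39 but grow for x0 = 1.40,
# bracketing the critical guess c with atan(c) = 2c/(1 + c^2), c ≈ 1.3917.
print(abs(newton_step(1.39)) < 1.39)   # True  -> converges (slowly, 11 iterations above)
print(abs(newton_step(1.40)) > 1.40)   # True  -> diverges
```

This also explains the growing iteration counts in the transcript: just below c each step barely reduces |x|, so convergence becomes very slow before it fails altogether.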