International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS)
www.iasir.net
ISSN (Print): 2279-0047
ISSN (Online): 2279-0055
On finding the nth root of m, leading to Newton-Raphson's improved method
Nitin Jain1, Kushal D Murthy1 and Hamsapriye2
1 Student, Department of Electronics & Communication Engineering
2 Professor, Department of Mathematics
1,2 R. V. College of Engineering, Mysore Road, Bangalore, Karnataka-560 059, INDIA
__________________________________________________________________________________________ 
Abstract: New iterative algorithms for finding the nth root of a positive number m, to any degree of accuracy, are discussed. The convergence of these methods is analyzed, and the factors affecting the rate of convergence are studied analytically as well as graphically. The parameters involved in the iterative schemes are studied, and expressions are derived for their optimal values. The rates of convergence of these new methods can be accelerated through these parameters, and in some cases the resulting schemes prove to be much faster than the Newton-Raphson method for finding the nth root of m. Several examples are given for clarity. A numerical comparative study is also made between the improved Newton-Raphson method and the third-order Halley's method.
Keywords: Iterative algorithm; Fixed point iteration; Fixed-b method; Adaptive-b method; Simplified Adaptive-b method; Halley's method; Newton-Raphson method; Newton-Raphson Improved.
__________________________________________________________________________________________ 
I. Introduction 
The nth root of a positive number m is a number γ satisfying γ^n = m. Any real number m has n such roots. In this paper, we are concerned with the numerical approximation of the real positive root. There are several numerical methods for this, such as the bisection method, the regula-falsi method, the Newton-Raphson method and many more; any standard textbook on numerical analysis explains these methods [1]. All these methods use the function f(γ) = γ^n - m. In [2], an iterative algorithm for finding the square root of m is discussed, which involves generating a sequence of approximations to it. The method is also directly related to the continued fraction representation of the square root. The convergence of this method is established by studying the eigenvalues and eigenvectors of a matrix directly related to the algorithm itself. The approximations are then obtained from the sequence of fractions in relation (1),
which can also be viewed as a sequence generated from relation (2), where a is replaced with γ. If we consider any fraction a/b as a two-dimensional vector (a, b), then any γ = a/b can be represented by such a vector, and vice-versa. The right-hand expression in relation (1) is then equivalent to the matrix product in (3).
Therefore, successive generation of the sequence of approximations to the square root of m involves multiplication by higher powers of the square matrix in (3). The convergence of the iterative algorithm depends directly on the nature of the eigenvalues and eigenvectors of this matrix.
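As a concrete illustration of relations (1)-(3), the following minimal Python sketch assumes the classical square-root fraction update a_{k+1}/b_{k+1} = (a_k + m b_k)/(a_k + b_k), i.e. the matrix [[1, m], [1, 1]] acting on (a_k, b_k), which matches the description above; the slow convergence for large m is what motivates the parameter b introduced in the next section.

```python
def sqrt_by_fractions(m, a=1.0, b=1.0, steps=500):
    """Approximate sqrt(m) via the fraction sequence a_{k+1}/b_{k+1} = (a_k + m*b_k)/(a_k + b_k).

    Each step applies the matrix [[1, m], [1, 1]] to (a_k, b_k); its eigenvalues
    1 + sqrt(m) and 1 - sqrt(m) govern the (slow, for large m) convergence.
    """
    for _ in range(steps):
        a, b = a + m * b, a + b
        a, b = a / b, 1.0          # rescale so the components stay bounded
    return a / b

print(sqrt_by_fractions(2014.0))   # approaches sqrt(2014), about 44.8776
```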
In this paper, we discuss four numerical methods in sequence: the Fixed-b method, the Adaptive-b method, and a simplified form of the Adaptive-b method, called the Simplified Adaptive-b method, leading to the Newton-Raphson Improved (NRI) method. Several numerical examples are worked out for a clear understanding of these methods. The Newton-Raphson improved method is also compared with the third-order Halley's method [3], [4].
In section 2 we discuss the Fixed-b method, and in section 3 we analyze this method in greater detail. Section 4 discusses the Adaptive-b method and its analysis. In section 5, an improved version of the Newton-Raphson method, called the NRI method, is explained; it is derived from the Simplified Adaptive-b (SA-b) method. Several examples are included for clarity. Although m can be any positive number, we have chosen m = 2012, 2013 and 2014, commemorating the years of research.
II. Fixed-b method 
Intuitively, one can generalize the relation (2) as 
for finding the cube root of a number m. The relation (4) converges to the cube root of m in some cases, but it is found that for higher values of m the above sequence oscillates and never converges. In order to eliminate these oscillations, we perturb the right-hand expression of (4) to obtain relation (5),
where b is a real parameter. For a given m, the convergence of the sequence in (5) depends on b. This idea can be further generalized to obtain an approximate value of the nth root of m: we consider the sequence in (6).
This iterative sequence for the nth root takes the form shown in (6), which is different from relation (2). By varying the parameter b in (6), we can achieve convergence and improve its rate, as explained in section 3.
The sequence in (6) can be rewritten as the iterative formula in (7),
where the function appearing there is defined in (8).
It is to be mentioned that γ0 is chosen initially.
A. Example 
To illustrate this new method, we choose values of m and n, and using the formula in (7) with a suitable b and γ0, we obtain the root to 3-decimal accuracy in 29 iterations.
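The iteration count quoted above depends on whichever map relation (7) specifies, so only the bookkeeping can be sketched generically. The helper below (a hypothetical `iterate_to_accuracy`, not from the paper) applies any update map until two successive iterates agree to the requested number of decimals and reports the iteration count, mirroring how the counts in this section and in Figures 1-2 are obtained.

```python
def iterate_to_accuracy(update, gamma0, decimals=3, max_iter=10_000):
    """Apply a fixed-point update map until successive iterates agree to
    `decimals` decimal places; return (approximation, iteration_count).
    `update` stands in for the Fixed-b map of relation (7), or any other scheme."""
    tol = 0.5 * 10.0 ** (-decimals)
    gamma = gamma0
    for k in range(1, max_iter + 1):
        nxt = update(gamma)
        if abs(nxt - gamma) < tol:
            return nxt, k
        gamma = nxt
    raise RuntimeError("no convergence: diverging or oscillating parameter choice")

# Example usage with the classical square-root map of section I (m = 2014):
print(iterate_to_accuracy(lambda g: (g + 2014.0) / (g + 1.0), 1.0))
```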
III. Analysis of Fixed-b method 
The convergence of the Fixed-b method depends on the choices of b and γ0. In this section, we analyze the rate of convergence by alternately varying b and γ0, in two examples, (i) and (ii). In both examples we require four decimal places of accuracy. In example (i) we study how b affects the rate of convergence for a fixed γ0. Choosing particular values of m and n in relation (7), we obtain:
We have plotted the number of iterations against b, which is varied from 1 to 100. The iteration diverges for small b and converges once b is large enough; a formula for the value of b beyond which the iteration starts to converge is derived later in this section.
Figure 1: No. of Iterations against b
Figure 2: No. of Iterations against γ0 
Figure 1 clearly shows that the number of iterations reaches a minimum at a particular value of b and slowly increases beyond this value. The region of divergence is shown by a horizontal line parallel to the b-axis.
(i) It can be observed that, as b moves away from this value, the rate of convergence decreases, or in other words the number of iterations increases.
(ii) In this example, we study how γ0 affects the rate of convergence for a fixed b, using relation (7).
We have plotted the number of iterations against γ0, which is varied up to 400. Figure 2 shows that the number of iterations varies much more when γ0 is chosen close to the actual root than when it is chosen far away from it.
B. Error Analysis 
Let the error at the kth stage of iteration be e_k, and consider the error ratio at the kth stage. From relation (7), the resulting inequality reduces to the expression in (9).
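In standard notation, and assuming the usual definitions (the paper's own symbols are an assumption here), the quantities this analysis relies on are:

```latex
e_k = \gamma_k - \sqrt[n]{m}, \qquad
\mathrm{ER}_k = \left|\frac{e_{k+1}}{e_k}\right|, \qquad
\mathrm{ER}_k < 1 \quad \text{(necessary for the errors to keep decreasing).}
```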
We define a new function , which eases the analysis. This function translates the origin to . The inequality (9) can be cast into the form: 
where the new variable is as defined above. It is to be noted that relation (9) can be obtained from (10) by an appropriate substitution. Therefore, the criterion in (11) becomes a necessary condition for convergence.
The exact value of the quantity involved is derived by solving the corresponding equation. The expression is obtained from the pattern observed for n = 1, 2, 3, 4 and 5, and the general form can be validated by substituting it back, which yields zero.
C. Range of b for the convergence of the Fixed-b method 
It is verified that the function is an increasing function of b whenever the stated condition holds. Now, solving for b from the equality case of (11), we obtain the threshold value of b, which can be derived to be
The expression is derived by noticing the pattern, and the general expression is then validated by back substitution. The inequality (11) is satisfied, and the iteration converges, whenever b exceeds this threshold; in all the earlier worked examples we have chosen b accordingly. It is to be noted that, for a fixed x and for any two admissible values of b, the iteration formula using b1 shows a faster rate of convergence under the stated condition.
IV. Adaptive-b method and its analysis 
In section 3, it was noted that the best choice of b involves the nth root of m itself. Since this choice of b involves the original problem of finding that root, the best available approximation to it is the current iterate. Thus, after each iteration, b can be updated from γ; note that b now depends on γ. Hence, we define b as a linear function of the current iterate, b = βγ_k, where β is a parameter. This gives a new variant of the formula (8). The method is now called the Adaptive-b method, and the corresponding iteration function is renamed accordingly. A detailed analysis of the effect of the parameter β on the rate of convergence is given later in this section. With this choice of b, the iteration formula in (8) takes the new form given in (13).
Table 1. Comparison between the Adaptive-b and Fixed-b methods

Iterations   Adaptive-b   Fixed-b
γ0           10           10
γ1           13.3688      13.3688
γ2           12.6701      12.5293
γ3           12.6288      12.6446
γ4           12.6285      12.6260
γ5           -            12.6289
γ6           -            12.6285
The sequence formed by (13) converges faster than that formed by (7). For example, Table 1 shows that, for the same m, n and γ0, the Adaptive-b method requires 4 iterations, whereas the Fixed-b method requires 6 iterations, for four decimal places of accuracy.
A. Comparison of Adaptive-b method with Newton Raphson method 
The Newton-Raphson iteration formula for finding the nth root of a number m is γ_{k+1} = NR(γ_k),
where NR(γ_k) = ((n - 1)γ_k + m/γ_k^{n-1})/n.
Comparing the rates of convergence of the Adaptive-b method and the Newton-Raphson method, we observe that the two convergence curves almost coincide. In particular, for certain values of β the two curves coincide, and for other values, at an appropriate γ0, the two curves are in close proximity. For example, the Adaptive-b curve tends to the Newton-Raphson curve as β approaches the appropriate value. Figure 3 explains this fact, where β is chosen to be 2.25 and 3.
Intuitively, for such a choice of β the terms after the first become insignificant and can be neglected, so the convergence of the Adaptive-b and NR methods is almost the same.
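For reference, a minimal sketch of the Newton-Raphson iteration stated above; this is the standard formula for f(γ) = γ^n - m, and the parameter values in the comment are an inference from the NR column of Table 2, which this iteration reproduces.

```python
def newton_raphson_root(m, n, gamma0, decimals=4, max_iter=1000):
    """Newton-Raphson iteration for f(gamma) = gamma**n - m:
    gamma_{k+1} = ((n - 1)*gamma_k + m / gamma_k**(n - 1)) / n."""
    tol = 0.5 * 10.0 ** (-decimals)
    gamma = gamma0
    for k in range(1, max_iter + 1):
        nxt = ((n - 1) * gamma + m / gamma ** (n - 1)) / n
        if abs(nxt - gamma) < tol:
            return nxt, k
        gamma = nxt
    raise RuntimeError("no convergence within max_iter")

# With m = 2013, n = 3 and gamma0 = 100, the iterates are 66.7338, 44.6398,
# 30.0966, ..., 12.6264 -- the NR column of Table 2.
print(newton_raphson_root(2013.0, 3, 100.0))
```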
Figure 3: Comparison between Adaptive-b method and Newton Raphson method 
B. Convergence analysis of the Adaptive-b method
The analysis of the rate of convergence of the Adaptive-b method is similar to that of the Fixed-b method, with the Fixed-b iteration function replaced by its adaptive counterpart. In Figure 4 we observe that, for the chosen β, the graph of the error ratio is almost equal to that of the Newton-Raphson method; in this example we have chosen particular values of m, n and γ0.
The graph for any m > 0 is similar to that shown in Figure 4. The necessary condition for convergence is again that the error ratio stays below one, and it is found that the relevant function has a point of discontinuity at one value of β. It is also verified that it is an increasing function of β beyond this point, as can be observed in Figure 5. We can derive the corresponding threshold value of β, similar to that of the Fixed-b method, as given below.
For β above this threshold, the necessary condition is satisfied and the iteration starts to converge. It is found that the threshold monotonically increases with m, and an upper bound can be given for it. Thus, for β above this bound, the iterative scheme of the Adaptive-b method always converges.
V. Newton Raphson Improved method 
A slight variation of the Adaptive-b method results in the Simplified Adaptive-b (SA-b) method, wherein we drop several terms from the numerator and from the denominator of relation (13). This gives rise to a new iteration formula, (16), after replacing the parameter β by α,
where the corresponding function is defined in (17).
Clearly, for a particular value of α the two functions coincide. The threshold value of α can be derived along the same lines as before, and for α above this threshold the method converges.
Figure 4: Error ratios of the Adaptive-b and NR methods
Now we compare the SA-b and Adaptive-b methods; this is illustrated in Figure 6, for particular choices of m, n and γ0. We observed that both the Adaptive-b and the SA-b methods can be faster than the Newton-Raphson method if we update the parameters β and α at every iteration. Tables 2 and 3 illustrate this fact for two different starting values γ0. In Tables 2 and 3, α is chosen at each stage according to how the current iterate compares with the root: smaller in one regime, larger in the other, and an intermediate value at the root itself. A method of choosing an optimal value of α at every iteration, which characterizes this behaviour, is given by formula (18) below.
Notice that (18) gives values of α such that the convergence condition holds at every iteration stage; thus, choosing α in this way ensures convergence and, unlike the earlier methods, no other parameter needs to be chosen. Now, substituting (18) in (16) gives the iteration formula for the Newton-Raphson Improved method, or NRI method, as given below:
where the function NRI(γk ) is defined as, 
For γ_k close to the nth root of m, the NRI method tends to the Newton-Raphson method, and at the root itself the two agree exactly.
Table 2. Comparison of the SA-b, the Adaptive-b (updating α and β at every iteration) and the Newton-Raphson methods, when γ0 = 100 is chosen
Iterations   α     SA-b      β     Adaptive-b   NR
γ0           1     100       1     100          100
γ1           1     50.1007   1     50.1031      66.7338
γ2           1     25.413    1.5   25.4574      44.6398
γ3           1.5   14.2794   1.5   16.5224      30.0966
γ4           2     12.5166   2     12.8684      20.8052
γ5           2     12.6274   2     12.6314      15.4203
γ6           2     12.6264   2     12.6265      13.1021
γ7           2     -         2     12.6264      12.6435
γ8           2     -         2     -            12.6265
γ9           -     -         -     -            12.6264
In terms of computational complexity when implemented on a computer, both the NRI and the NR methods require the same number of multiplications and one division per iteration. Although the NRI method needs two more addition operations than the NR method, there is a significant reduction in the number of iterations of the NRI method, for a fixed number of decimal places of accuracy.
Table 3. Comparison of the SA-b, the Adaptive-b (updating α and β at every iteration) and the Newton-Raphson methods, when γ0 = 1 is chosen
Iterations   α     SA-b      β     Adaptive-b   NR
γ0           200   1         200   1            1
γ1           20    11.0100   20    10.9604      671.6667
γ2           15    11.2764   15    11.2363      447.7793
γ3           10    11.5611   10    11.5304      298.5229
γ4           5     11.8792   5     11.8584      199.0228
γ5           4     12.2768   4     12.2674      132.6988
γ6           3     12.4926   3     12.4889      88.5040
γ7           2     12.5941   2     12.5930      59.0883
γ8           2     12.6265   2     12.6265      39.5844
γ9           2     12.6264   2     12.6264      26.8178
γ10          2     -         2     -            18.8115
γ11          2     -         2     -            14.4372
γ12          2     -         2     -            12.8441
γ13          2     -         2     -            12.6301
γ14          2     -         2     -            12.6265
γ15          -     -         -     -            12.6264
Thus, the NRI method is superior to the NR method for the computation of the nth root of m. For example, for one of the choices of m, n and γ0 considered, the NRI method takes 8 iterations, whereas the Newton-Raphson method takes 61 iterations. Table 4 illustrates the number of iterations i required by the two methods to achieve four decimal places of accuracy, for different m, n and γ0.
Finally, let us compare the NRI method with Halley's method, which is a third-order method. As shown in Table 4, the NRI method can be faster than Halley's method. In some cases the NRI method is slightly slower than Halley's method, but it needs three fewer multiplication operations per iteration. The formula for computing the nth root of m by Halley's method can be derived in the following form.
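A minimal sketch of Halley's iteration for f(γ) = γ^n - m, which is presumably the form referred to above. As a consistency check (an observation, not a claim from the paper): with m = 2014.91, n = 5 and γ0 = 1, its first iterates are 1.4994, 2.2421, ..., matching the Halley column of Table 4.

```python
def halley_root(m, n, gamma0, decimals=4, max_iter=1000):
    """Third-order Halley iteration for f(gamma) = gamma**n - m:
    gamma_{k+1} = gamma_k * ((n - 1)*gamma_k**n + (n + 1)*m)
                          / ((n + 1)*gamma_k**n + (n - 1)*m)."""
    tol = 0.5 * 10.0 ** (-decimals)
    gamma = gamma0
    for k in range(1, max_iter + 1):
        p = gamma ** n
        nxt = gamma * ((n - 1) * p + (n + 1) * m) / ((n + 1) * p + (n - 1) * m)
        if abs(nxt - gamma) < tol:
            return nxt, k
        gamma = nxt
    raise RuntimeError("no convergence within max_iter")

print(halley_root(2014.91, 5, 1.0))   # converges to about 4.5798, the fifth root of 2014.91
```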
Figure 5: ERβ (β, 0) against β
Table 4 (first part). Comparison of the NRI, the NR and Halley's methods.

m, n = 2014.91, 5;  γ0 = 1
  i:      1, 2, 3, 4, 5, 6, ..., 24, 25
  NRI:    1.9975, 3.8468, 4.6845, 4.5822, 4.5798
  NR:     403.7820, 323.0256, 258.4205, 206.7364, 165.3891, 132.3113, ..., 4.5799, 4.5798
  Halley: 1.4994, 2.2421, 3.2875, 4.3222, 4.5781, 4.5798

m, n = 2014.91, 5;  γ0 = 100
  i:      1, 2, 3, ..., 10, 11, 12, 13, 14, ..., 17
  NRI:    75.0000, 56.2500, 42.1876, 5.8783, 4.9008, 4.6021, 4.5800, 4.5798
  NR:     80.0000, 64.0000, 51.2000, ..., 10.7559, 8.6348, 6.9803, 5.7540, 4.9708, ..., 4.5798
  Halley: 44.4445, 29.63, 19.7548, ..., 4.5798
Figure 6: Comparison of the SA-b method with the Adaptive-b method
VI. Conclusions 
New iterative methods, namely the Fixed-b method, the Adaptive-b method, the Simplified Adaptive-b method and the Newton-Raphson Improved method, have been studied and analyzed for finding the nth root of a number m. The convergence of these methods has been explained, as have the parameters that affect the rate of convergence, and we have discussed how to choose optimum values of these parameters. These iterative methods have been compared with the well-known Newton-Raphson method. It is evident from the examples that the NRI method is much faster than the Newton-Raphson method for finding the nth root of m, and we conclude by stating that the NRI method is a better alternative for this computation. Although Halley's method is a third-order method, the numerical examples show that, at times, the Newton-Raphson improved method can still be better.
Table 4 (continued). Comparison of the NRI, the NR and Halley's methods, for various n, m and γ0.
m, n = 0.2012, 4;  γ0 = 1
  i:      1, 2, 3, 4
  NRI:    0.7505, 0.6750, 0.6698, 0.6697
  NR:     0.8003, 0.6984, 0.6715, 0.6697
  Halley: 0.7149, 0.67, 0.6697

m, n = 0.2012, 4;  γ0 = 0.2012
  i:      1, 2, 3, 4, ..., 10, 11, 12
  NRI:    0.3960, 0.6503, 0.6700, 0.6697
  NR:     6.3266, 4.7451, 3.5593, 2.6706, ..., 0.6793, 0.6699, 0.6697
  Halley: 0.3325, 0.5215, 0.6578, 0.6697
References 
[1] Kendall E. Atkinson, "An Introduction to Numerical Analysis", 2nd ed., John Wiley & Sons, 1988.
[2] Theodore Eisenberg, "On an unknown algorithm for computing square roots", Int. J. Math. Educ. Sci. Technol., 34(1), 2003, pp. 153-158.
[3] Haibin Zhang, Lizhen Zhang and Sen Zhang, "Original Halley Method and its Improvement with Automatic Differentiation", in Proc. Sixth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), IEEE Computer Society, Vol. 4, 2009, pp. 351-355.
[4] T. R. Scavo and J. B. Thoo, "On the Geometry of Halley's Method", Amer. Math. Monthly, 102, 1995, pp. 417-426.

More Related Content

What's hot

Some Engg. Applications of Matrices and Partial Derivatives
Some Engg. Applications of Matrices and Partial DerivativesSome Engg. Applications of Matrices and Partial Derivatives
Some Engg. Applications of Matrices and Partial DerivativesSanjaySingh011996
 
Fault diagnosis using genetic algorithms and principal curves
Fault diagnosis using genetic algorithms and principal curvesFault diagnosis using genetic algorithms and principal curves
Fault diagnosis using genetic algorithms and principal curveseSAT Journals
 
The Analysis of Performance Measures of Generalized Trapezoidal Fuzzy Queuing...
The Analysis of Performance Measures of Generalized Trapezoidal Fuzzy Queuing...The Analysis of Performance Measures of Generalized Trapezoidal Fuzzy Queuing...
The Analysis of Performance Measures of Generalized Trapezoidal Fuzzy Queuing...IJERA Editor
 
Solution of second kind volterra integro equations using linear
Solution of second kind volterra integro equations using linearSolution of second kind volterra integro equations using linear
Solution of second kind volterra integro equations using linearAlexander Decker
 
A Subgraph Pattern Search over Graph Databases
A Subgraph Pattern Search over Graph DatabasesA Subgraph Pattern Search over Graph Databases
A Subgraph Pattern Search over Graph DatabasesIJMER
 
Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J...
Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J...Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J...
Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J...Mengxi Jiang
 
Stress-Strength Reliability of type II compound Laplace distribution
Stress-Strength Reliability of type II compound Laplace distributionStress-Strength Reliability of type II compound Laplace distribution
Stress-Strength Reliability of type II compound Laplace distributionIRJET Journal
 
Colour-Texture Image Segmentation using Hypercomplex Gabor Analysis
Colour-Texture Image Segmentation using Hypercomplex Gabor AnalysisColour-Texture Image Segmentation using Hypercomplex Gabor Analysis
Colour-Texture Image Segmentation using Hypercomplex Gabor Analysissipij
 
Chapter3 hundred page machine learning
Chapter3 hundred page machine learningChapter3 hundred page machine learning
Chapter3 hundred page machine learningmustafa sarac
 
Classification of handwritten characters by their symmetry features
Classification of handwritten characters by their symmetry featuresClassification of handwritten characters by their symmetry features
Classification of handwritten characters by their symmetry featuresAYUSH RAJ
 
Solving Linear Fractional Programming Problems Using a New Homotopy Perturbat...
Solving Linear Fractional Programming Problems Using a New Homotopy Perturbat...Solving Linear Fractional Programming Problems Using a New Homotopy Perturbat...
Solving Linear Fractional Programming Problems Using a New Homotopy Perturbat...orajjournal
 
Mild balanced Intuitionistic Fuzzy Graphs
Mild balanced Intuitionistic Fuzzy Graphs Mild balanced Intuitionistic Fuzzy Graphs
Mild balanced Intuitionistic Fuzzy Graphs IJERA Editor
 
Nonlinear Exponential Regularization : An Improved Version of Regularization ...
Nonlinear Exponential Regularization : An Improved Version of Regularization ...Nonlinear Exponential Regularization : An Improved Version of Regularization ...
Nonlinear Exponential Regularization : An Improved Version of Regularization ...Seoung-Ho Choi
 
CHECKING BEHAVIOURAL COMPATIBILITY IN SERVICE COMPOSITION WITH GRAPH TRANSFOR...
CHECKING BEHAVIOURAL COMPATIBILITY IN SERVICE COMPOSITION WITH GRAPH TRANSFOR...CHECKING BEHAVIOURAL COMPATIBILITY IN SERVICE COMPOSITION WITH GRAPH TRANSFOR...
CHECKING BEHAVIOURAL COMPATIBILITY IN SERVICE COMPOSITION WITH GRAPH TRANSFOR...csandit
 

What's hot (20)

201977 1-1-3-pb
201977 1-1-3-pb201977 1-1-3-pb
201977 1-1-3-pb
 
Ijetr011961
Ijetr011961Ijetr011961
Ijetr011961
 
Some Engg. Applications of Matrices and Partial Derivatives
Some Engg. Applications of Matrices and Partial DerivativesSome Engg. Applications of Matrices and Partial Derivatives
Some Engg. Applications of Matrices and Partial Derivatives
 
Fault diagnosis using genetic algorithms and principal curves
Fault diagnosis using genetic algorithms and principal curvesFault diagnosis using genetic algorithms and principal curves
Fault diagnosis using genetic algorithms and principal curves
 
The Analysis of Performance Measures of Generalized Trapezoidal Fuzzy Queuing...
The Analysis of Performance Measures of Generalized Trapezoidal Fuzzy Queuing...The Analysis of Performance Measures of Generalized Trapezoidal Fuzzy Queuing...
The Analysis of Performance Measures of Generalized Trapezoidal Fuzzy Queuing...
 
FK_SPARS15
FK_SPARS15FK_SPARS15
FK_SPARS15
 
Solution of second kind volterra integro equations using linear
Solution of second kind volterra integro equations using linearSolution of second kind volterra integro equations using linear
Solution of second kind volterra integro equations using linear
 
A Subgraph Pattern Search over Graph Databases
A Subgraph Pattern Search over Graph DatabasesA Subgraph Pattern Search over Graph Databases
A Subgraph Pattern Search over Graph Databases
 
Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J...
Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J...Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J...
Application of Graphic LASSO in Portfolio Optimization_Yixuan Chen & Mengxi J...
 
10.1.1.34.7361
10.1.1.34.736110.1.1.34.7361
10.1.1.34.7361
 
Stress-Strength Reliability of type II compound Laplace distribution
Stress-Strength Reliability of type II compound Laplace distributionStress-Strength Reliability of type II compound Laplace distribution
Stress-Strength Reliability of type II compound Laplace distribution
 
Colour-Texture Image Segmentation using Hypercomplex Gabor Analysis
Colour-Texture Image Segmentation using Hypercomplex Gabor AnalysisColour-Texture Image Segmentation using Hypercomplex Gabor Analysis
Colour-Texture Image Segmentation using Hypercomplex Gabor Analysis
 
Chapter3 hundred page machine learning
Chapter3 hundred page machine learningChapter3 hundred page machine learning
Chapter3 hundred page machine learning
 
Classification of handwritten characters by their symmetry features
Classification of handwritten characters by their symmetry featuresClassification of handwritten characters by their symmetry features
Classification of handwritten characters by their symmetry features
 
Solving Linear Fractional Programming Problems Using a New Homotopy Perturbat...
Solving Linear Fractional Programming Problems Using a New Homotopy Perturbat...Solving Linear Fractional Programming Problems Using a New Homotopy Perturbat...
Solving Linear Fractional Programming Problems Using a New Homotopy Perturbat...
 
07 Tensor Visualization
07 Tensor Visualization07 Tensor Visualization
07 Tensor Visualization
 
Mild balanced Intuitionistic Fuzzy Graphs
Mild balanced Intuitionistic Fuzzy Graphs Mild balanced Intuitionistic Fuzzy Graphs
Mild balanced Intuitionistic Fuzzy Graphs
 
FinalReportFoxMelle
FinalReportFoxMelleFinalReportFoxMelle
FinalReportFoxMelle
 
Nonlinear Exponential Regularization : An Improved Version of Regularization ...
Nonlinear Exponential Regularization : An Improved Version of Regularization ...Nonlinear Exponential Regularization : An Improved Version of Regularization ...
Nonlinear Exponential Regularization : An Improved Version of Regularization ...
 
CHECKING BEHAVIOURAL COMPATIBILITY IN SERVICE COMPOSITION WITH GRAPH TRANSFOR...
CHECKING BEHAVIOURAL COMPATIBILITY IN SERVICE COMPOSITION WITH GRAPH TRANSFOR...CHECKING BEHAVIOURAL COMPATIBILITY IN SERVICE COMPOSITION WITH GRAPH TRANSFOR...
CHECKING BEHAVIOURAL COMPATIBILITY IN SERVICE COMPOSITION WITH GRAPH TRANSFOR...
 

Viewers also liked (19)

Viengsouvanh 2104
Viengsouvanh 2104Viengsouvanh 2104
Viengsouvanh 2104
 
Ijetcas14 336
Ijetcas14 336Ijetcas14 336
Ijetcas14 336
 
Ijetcas14 493
Ijetcas14 493Ijetcas14 493
Ijetcas14 493
 
Aijrfans14 289
Aijrfans14 289Aijrfans14 289
Aijrfans14 289
 
Red Apple Opportunities
Red Apple Opportunities Red Apple Opportunities
Red Apple Opportunities
 
Ijetcas14 317
Ijetcas14 317Ijetcas14 317
Ijetcas14 317
 
Ijetcas14 356
Ijetcas14 356Ijetcas14 356
Ijetcas14 356
 
Ijetcas14 306
Ijetcas14 306Ijetcas14 306
Ijetcas14 306
 
Ijetcas14 344
Ijetcas14 344Ijetcas14 344
Ijetcas14 344
 
Ijetcas14 325
Ijetcas14 325Ijetcas14 325
Ijetcas14 325
 
Aijrfans14 214
Aijrfans14 214Aijrfans14 214
Aijrfans14 214
 
Ijetcas14 314
Ijetcas14 314Ijetcas14 314
Ijetcas14 314
 
Ijetcas14 350
Ijetcas14 350Ijetcas14 350
Ijetcas14 350
 
Aijrfans14 215
Aijrfans14 215Aijrfans14 215
Aijrfans14 215
 
Ijetcas14 507
Ijetcas14 507Ijetcas14 507
Ijetcas14 507
 
Ijetcas14 308
Ijetcas14 308Ijetcas14 308
Ijetcas14 308
 
Ijetcas14 309
Ijetcas14 309Ijetcas14 309
Ijetcas14 309
 
Ijetcas14 584
Ijetcas14 584Ijetcas14 584
Ijetcas14 584
 
Ijetcas14 351
Ijetcas14 351Ijetcas14 351
Ijetcas14 351
 

Similar to Ijetcas14 546

A Computationally Efficient Algorithm to Solve Generalized Method of Moments ...
A Computationally Efficient Algorithm to Solve Generalized Method of Moments ...A Computationally Efficient Algorithm to Solve Generalized Method of Moments ...
A Computationally Efficient Algorithm to Solve Generalized Method of Moments ...Waqas Tariq
 
An Improvement to the Brent’s Method
An Improvement to the Brent’s MethodAn Improvement to the Brent’s Method
An Improvement to the Brent’s MethodWaqas Tariq
 
INFLUENCE OF QUANTITY OF PRINCIPAL COMPONENT IN DISCRIMINATIVE FILTERING
INFLUENCE OF QUANTITY OF PRINCIPAL COMPONENT IN DISCRIMINATIVE FILTERINGINFLUENCE OF QUANTITY OF PRINCIPAL COMPONENT IN DISCRIMINATIVE FILTERING
INFLUENCE OF QUANTITY OF PRINCIPAL COMPONENT IN DISCRIMINATIVE FILTERINGcsandit
 
Numerical Study of Some Iterative Methods for Solving Nonlinear Equations
Numerical Study of Some Iterative Methods for Solving Nonlinear EquationsNumerical Study of Some Iterative Methods for Solving Nonlinear Equations
Numerical Study of Some Iterative Methods for Solving Nonlinear Equationsinventionjournals
 
Applied Mathematics and Sciences: An International Journal (MathSJ)
Applied Mathematics and Sciences: An International Journal (MathSJ)Applied Mathematics and Sciences: An International Journal (MathSJ)
Applied Mathematics and Sciences: An International Journal (MathSJ)mathsjournal
 
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING cscpconf
 
Preconditioning in Large-scale VDA
Preconditioning in Large-scale VDAPreconditioning in Large-scale VDA
Preconditioning in Large-scale VDAJoseph Parks
 
83662164 case-study-1
83662164 case-study-183662164 case-study-1
83662164 case-study-1homeworkping3
 
A New SR1 Formula for Solving Nonlinear Optimization.pptx
A New SR1 Formula for Solving Nonlinear Optimization.pptxA New SR1 Formula for Solving Nonlinear Optimization.pptx
A New SR1 Formula for Solving Nonlinear Optimization.pptxMasoudIbrahim3
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)ijceronline
 
A NEW STUDY OF TRAPEZOIDAL, SIMPSON’S1/3 AND SIMPSON’S 3/8 RULES OF NUMERICAL...
A NEW STUDY OF TRAPEZOIDAL, SIMPSON’S1/3 AND SIMPSON’S 3/8 RULES OF NUMERICAL...A NEW STUDY OF TRAPEZOIDAL, SIMPSON’S1/3 AND SIMPSON’S 3/8 RULES OF NUMERICAL...
A NEW STUDY OF TRAPEZOIDAL, SIMPSON’S1/3 AND SIMPSON’S 3/8 RULES OF NUMERICAL...mathsjournal
 
Parameter Optimisation for Automated Feature Point Detection
Parameter Optimisation for Automated Feature Point DetectionParameter Optimisation for Automated Feature Point Detection
Parameter Optimisation for Automated Feature Point DetectionDario Panada
 
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONA NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONmathsjournal
 
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONA NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONmathsjournal
 
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONA NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONmathsjournal
 
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONA NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONmathsjournal
 

Similar to Ijetcas14 546 (20)

A Computationally Efficient Algorithm to Solve Generalized Method of Moments ...
A Computationally Efficient Algorithm to Solve Generalized Method of Moments ...A Computationally Efficient Algorithm to Solve Generalized Method of Moments ...
A Computationally Efficient Algorithm to Solve Generalized Method of Moments ...
 
H027052054
H027052054H027052054
H027052054
 
An Improvement to the Brent’s Method
An Improvement to the Brent’s MethodAn Improvement to the Brent’s Method
An Improvement to the Brent’s Method
 
INFLUENCE OF QUANTITY OF PRINCIPAL COMPONENT IN DISCRIMINATIVE FILTERING
INFLUENCE OF QUANTITY OF PRINCIPAL COMPONENT IN DISCRIMINATIVE FILTERINGINFLUENCE OF QUANTITY OF PRINCIPAL COMPONENT IN DISCRIMINATIVE FILTERING
INFLUENCE OF QUANTITY OF PRINCIPAL COMPONENT IN DISCRIMINATIVE FILTERING
 
Numerical Study of Some Iterative Methods for Solving Nonlinear Equations
Numerical Study of Some Iterative Methods for Solving Nonlinear EquationsNumerical Study of Some Iterative Methods for Solving Nonlinear Equations
Numerical Study of Some Iterative Methods for Solving Nonlinear Equations
 
Cu24631635
Cu24631635Cu24631635
Cu24631635
 
Applied Mathematics and Sciences: An International Journal (MathSJ)
Applied Mathematics and Sciences: An International Journal (MathSJ)Applied Mathematics and Sciences: An International Journal (MathSJ)
Applied Mathematics and Sciences: An International Journal (MathSJ)
 
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING
 
Preconditioning in Large-scale VDA
Preconditioning in Large-scale VDAPreconditioning in Large-scale VDA
Preconditioning in Large-scale VDA
 
83662164 case-study-1
83662164 case-study-183662164 case-study-1
83662164 case-study-1
 
A New SR1 Formula for Solving Nonlinear Optimization.pptx
A New SR1 Formula for Solving Nonlinear Optimization.pptxA New SR1 Formula for Solving Nonlinear Optimization.pptx
A New SR1 Formula for Solving Nonlinear Optimization.pptx
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)
 
A NEW STUDY OF TRAPEZOIDAL, SIMPSON’S1/3 AND SIMPSON’S 3/8 RULES OF NUMERICAL...
A NEW STUDY OF TRAPEZOIDAL, SIMPSON’S1/3 AND SIMPSON’S 3/8 RULES OF NUMERICAL...A NEW STUDY OF TRAPEZOIDAL, SIMPSON’S1/3 AND SIMPSON’S 3/8 RULES OF NUMERICAL...
A NEW STUDY OF TRAPEZOIDAL, SIMPSON’S1/3 AND SIMPSON’S 3/8 RULES OF NUMERICAL...
 
Ijetr021210
Ijetr021210Ijetr021210
Ijetr021210
 
Ijetr021210
Ijetr021210Ijetr021210
Ijetr021210
 
Parameter Optimisation for Automated Feature Point Detection
Parameter Optimisation for Automated Feature Point DetectionParameter Optimisation for Automated Feature Point Detection
Parameter Optimisation for Automated Feature Point Detection
 
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONA NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
 
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONA NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
 
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONA NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
 
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATIONA NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
A NEW METHOD OF CENTRAL DIFFERENCE INTERPOLATION
 

More from Iasir Journals (20)

ijetcas14 650
ijetcas14 650ijetcas14 650
ijetcas14 650
 
Ijetcas14 648
Ijetcas14 648Ijetcas14 648
Ijetcas14 648
 
Ijetcas14 647
Ijetcas14 647Ijetcas14 647
Ijetcas14 647
 
Ijetcas14 643
Ijetcas14 643Ijetcas14 643
Ijetcas14 643
 
Ijetcas14 641
Ijetcas14 641Ijetcas14 641
Ijetcas14 641
 
Ijetcas14 639
Ijetcas14 639Ijetcas14 639
Ijetcas14 639
 
Ijetcas14 632
Ijetcas14 632Ijetcas14 632
Ijetcas14 632
 
Ijetcas14 624
Ijetcas14 624Ijetcas14 624
Ijetcas14 624
 
Ijetcas14 619
Ijetcas14 619Ijetcas14 619
Ijetcas14 619
 
Ijetcas14 615
Ijetcas14 615Ijetcas14 615
Ijetcas14 615
 
Ijetcas14 608
Ijetcas14 608Ijetcas14 608
Ijetcas14 608
 
Ijetcas14 605
Ijetcas14 605Ijetcas14 605
Ijetcas14 605
 
Ijetcas14 604
Ijetcas14 604Ijetcas14 604
Ijetcas14 604
 
Ijetcas14 598
Ijetcas14 598Ijetcas14 598
Ijetcas14 598
 
Ijetcas14 594
Ijetcas14 594Ijetcas14 594
Ijetcas14 594
 
Ijetcas14 593
Ijetcas14 593Ijetcas14 593
Ijetcas14 593
 
Ijetcas14 591
Ijetcas14 591Ijetcas14 591
Ijetcas14 591
 
Ijetcas14 589
Ijetcas14 589Ijetcas14 589
Ijetcas14 589
 
Ijetcas14 585
Ijetcas14 585Ijetcas14 585
Ijetcas14 585
 
Ijetcas14 583
Ijetcas14 583Ijetcas14 583
Ijetcas14 583
 

Recently uploaded

Russian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
Russian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur EscortsRussian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
Russian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlysanyuktamishra911
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxupamatechverse
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSSIVASHANKAR N
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...Call Girls in Nagpur High Profile
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)simmis5
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Christo Ananth
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations120cr0395
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfKamal Acharya
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesPrabhanshu Chaturvedi
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performancesivaprakash250
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Dr.Costas Sachpazis
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...ranjana rawat
 
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTINGMANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTINGSIVASHANKAR N
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...ranjana rawat
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 

Recently uploaded (20)

Russian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
Russian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur EscortsRussian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
Russian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptx
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
 
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and Properties
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
 
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTINGMANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 

Ijetcas14 546

  • 1. International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) www.iasir.net IJETCAS 14-546; © 2014, IJETCAS All Rights Reserved Page 128 ISSN (Print): 2279-0047 ISSN (Online): 2279-0055 On finding leading to Newton Raphson’s improved method Nitin Jain1, Kushal D Murthy1 and Hamsapriye2 1Student of Department of Electronics & Communication Engineering, 2 Professor, Department of Mathematics, 1,2 R. V. College of Engineering, Mysore Road, Bangalore, Karnataka-560 059, INDIA __________________________________________________________________________________________ Abstract: New iterative algorithms for finding the nth root of a positive number m, to any degree of accuracy, are discussed. The convergence of these methods is also analyzed and the factors affecting the rate of convergence are studied analytically, as well as graphically. The parameters involved in the iterative schemes are studied. Expressions are derived for the optimal values of these parameters. The rates of convergence of these new methods can be accelerated through these parameters, which prove to be much faster than the Newton Raphson method for finding in some cases. Several examples are given for clarity purposes. Also, numerical comparative study is made between the improved Newton Raphson’s method and the third order Halley’s method. Keywords: Iterative algorithm; Fixed point iteration; Fixed-b method; Adaptive-b method; Simplified Adaptive- b method; Halley’s method; Newton Raphson Method; Newon-Raphson Improved. __________________________________________________________________________________________ I. Introduction The root of a positive m is a number satisfying . Any real number m has n such roots. In this paper, we are concerned with the numerical approximation of . There are numerical methods, such as, the bisection method, the regula-falsi method, the Newton-Raphson method and many more. Any standard textbook on numerical analysis explains these methods [1]. All these methods use the function . In [2], an iterative algorithm for finding the is discussed, which involves generating a sequence of approximations to . The method is also directly related to the continued fraction representation of . The convergence of this method is established by studying the eigenvalues and eigenvectors of a matrix, directly related to the algorithm itself. The approximations are then obtained from the following sequence of fractions: which can also be viewed as a sequence generated from where a is replaced with γ. If we consider any fraction as a two dimensional vector, then a can be represented by , and vice-versa. The right hand expression in relation (1) is then equivalent to the matrix product Therefore, the successive generation of the sequence of approximations to , involve the multiplication of higher powers of the square matrix in (3). The convergence of the iterative algorithm directly depends on the nature of the eigenvalues and eigenvectors of the matrix. In this paper, we have discussed four numerical methods in sequence: the Fixed-b method, the Adaptive-b method, a simplified form of the Adaptive-b method, called the Simplified Adaptive-b method leading to the Newon-Raphson Improved (NRI) method. Several numerical examples are worked out for a clear understanding of these methods. 
The Newton-Raphson’s improved method is also compared with the third order Halley’s method [3], [4]. In section 2, we have discussed the Fixed-b method and in section 3 we have analyzed this method in a greater detail. Section 4 discusses the Adaptive-b method and its analysis. In section 5, an improved version of Newton-Raphson method, called the NRI method is explained, which is derived from the Simplified Adaptive-b method (SA-b). Several examples are included for clarity purposes. Although m is any positive number, we have chosen m = 2012, 2013 and 2014, commemorating the years of research.
  • 2. 12 9 Nitin Jain et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(2), June-August, 2014, pp. 128- 136 IJETCAS 14-546; © 2014, IJETCAS All Rights Reserved Page 129 II. Fixed-b method Intuitively, one can generalize the relation (2) as for finding the cube-root of a number m. The relation (4) converges to , for . But, it is found that for higher values of m the above sequence oscillates and never converges. In order to eliminate these oscillations, we perturbed the right hand expression of (4) to obtain where is a real parameter. For a given m, the convergence of the sequence in (5) depends on . This idea can be further generalized to obtain an approximate value for the sequence . We consider This iterative sequence for will take the for which is different from relation (2). By varying the parameter in (6), we can achieve convergence and improve the rate of convergence as explained in section 3. The sequence in (6) can be rewritten as an iterative formula as given below where we have defined as It is to be mentioned that, and is chosen initially. A. Example To illustrate this new method we choose and . Using the formula in (7), with , we get up to 3 decimal accuracy, in 29 iterations. III. Analysis of Fixed-b method The convergence of the Fixed-b method depends on the choice of and . In this section, we analyze the rate of convergence, by alternately varying b and γ0, for the examples (i) and (ii) . In both examples, we have fixed four decimal places of accuracy. In this example, we have studied how b affects the rate of convergence by fixing . Choosing and in relation (7), we obtain: We have plotted the number of iterations against , which is varied from 1 to 100. The iteration diverges whenever and converges for . A formula for b is derived later in this section, for which the iteration starts to converge, as shown later in this section. Figure 1: No. of Iterations against b
  • 3. 13 0 Nitin Jain et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(2), June-August, 2014, pp. 128- 136 IJETCAS 14-546; © 2014, IJETCAS All Rights Reserved Page 130 Figure 2: No. of Iterations against γ0 Figure 1 clearly shows that the number of iterations reaches a minimum at and it slowly increases after this value. At this b the number of iterations is observed to be . The region of divergence is shown by a horizontal line parallel to the b−axis. (i) It can be observed that and that whenever the rate of convergence decreases or in other words t increases. (ii) In this example, we have studied how affects the rate of convergence by fixing . From relation (7) we obtain for and : We have plotted the number of iterations against , which is varied up to 400. Figure 2 shows that, the number of iterations vary much more when is chosen closer to the actual root, than, when chosen far away from the actual root. B. Error Analysis Let the error in the stage of iteration be . Thus, the error ratio in the stage so from relation (7), . Thus, the above inequality reduces to We define a new function , which eases the analysis. This function translates the origin to . The inequality (9) can be cast into the form: where . It is to be noted that, relation (9) can be obtained from (10), by setting . Also, Therefore, the criteria becomes a necessary condition for convergence. The exact value of is derived to be solving . The expression for is derived based on the pattern observed for the values of n = 1, 2, 3, 4 and 5. The general form of can be validated by directly substituting in the expression of , yielding zero. C. Range of b for the convergence of the Fixed-b method It is verified that is an increasing function of , when ever . Also, and that . Now, solving for from , we obtain the threshold value , which can be derived to be
  • 4. 13 1 Nitin Jain et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(2), June-August, 2014, pp. 128- 136 IJETCAS 14-546; © 2014, IJETCAS All Rights Reserved Page 131 The expression is derived by noticing the pattern and then the general expression is validated by back substituting in . The inequality (11) is satisfied and the iteration converges, whenever . In all the earlier worked examples, we have chosen . It is to be noted that for a fixed x and for any , the iteration formula using b1 shows a faster rate of convergence, whenever . The expression is derived by noticing the pattern and then the general expression is validated by back substituting in . The inequality (11) is satisfied and the iteration converges, whenever . In all the earlier worked examples, we have chosen . It is to be noted that for a fixed x and for any , the iteration formula using 1 shows a faster rate of convergence, whenever 1 . IV. Adaptive-b method and its analysis In section 3, we have mentioned that the best choice of is . Since, this choice of involves the original problem of finding , the best approximate to is the iterate itself. Thus, after each iteration, b can be updated to . Note that b now depends on γ. Hence, we set , where is a parameter. Defining as a linear function of gives a new variant of the formula (8). The method is now called as the Adaptive- method and the function is named as . A detailed analysis of the effect of the parameter on the rate of convergence has been discussed later in this section. Setting , the iteration formula in (8) takes a new form, given by Table 1. Comparison between Adaptive-b and Fixed-b methods Iterations Adaptive-b Fixed- γ0 γ1 γ2 γ3 γ4 γ5 γ6 10 13.3688 12.6701 12.6288 12.6285 10 13.3688 12.5293 12.6446 12.6260 12.6289 12.6285 The sequence formed by (13) converges faster than that formed by (7). For example, let and . Set , so that . Table 1 shows that the Adaptive-b method requires 4 iterations, whereas Fixed-b method requires 6 iterations, for four decimal places of accuracy. A. Comparison of Adaptive-b method with Newton Raphson method The Newton Raphson iteration formula for finding the root of a number m is , where Comparing the rates of convergence of Adaptive-b method and Newton-Raphson method, we observe that the two curves and almost coincide. In particular, for , the two curves and coincide and for , and at an appropriate , the two curves are in close proximity. For example, choosing and , we observe that the curve tends to curve, as . Figure 3 explains this fact, where β is chosen to be 2.25 and 3. Intuitively, if , then . The terms after the first are insignificant if and therefore are negligible. Thus, the convergence of Adaptive-b and N R methods are almost the same, if we choose .
Figure 3: Comparison between the Adaptive-b method and the Newton-Raphson method

B. Convergence analysis of Adaptive-b method

The analysis of the rate of convergence of the Adaptive-b method is similar to that of the Fixed-b method, with the Fixed-b error-ratio function replaced by its Adaptive-b counterpart. In Figure 4 we observe that the error-ratio graph of the Adaptive-b method, for the chosen β, is almost equal to that of the Newton-Raphson method; the example uses particular values of m, n and γ0, and the graph for any m > 0 is similar to that shown in Figure 4. The necessary condition for convergence is as before, and the error-ratio function is found to have a point of discontinuity. It is also verified that the function increases with β beyond this point, which can be observed in Figure 5. We can derive a threshold value β1, along the same lines as b1. For β beyond β1, the necessary condition is satisfied and the iteration starts to converge. It is found that β1 increases monotonically with m and admits an upper bound; thus, for β beyond this bound, the Adaptive-b iterative scheme always converges.

V. Newton Raphson Improved method

A slight variation of the Adaptive-b method results in the Simplified Adaptive-b (SA-b) method, wherein several terms are dropped from the numerator and from the denominator of relation (13). This gives rise to a new iteration formula, (16), after replacing β by the parameter α, with the corresponding iteration function defined accordingly. Clearly, for a particular choice of the parameters the two iteration functions coincide. The threshold value α1 can be derived along the same lines as β1, and beyond this threshold the SA-b method converges.
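The adaptive idea shared by the Adaptive-b and SA-b methods, namely re-tying the parameter to the current iterate before every step, can be shown schematically. Since the one-step Fixed-b map of relations (7) and (8) is not reproduced in this extract, the sketch below takes that map as a plug-in argument; the names adaptive_b and fixed_b_step are ours, and only the update b = βγk is being illustrated.

    # Schematic of the adaptive update: rather than keeping b fixed, the
    # parameter is reset to beta * gamma_k before every step.  The underlying
    # one-step Fixed-b map is not reproduced here and must be supplied as
    # fixed_b_step(gamma, b).
    def adaptive_b(fixed_b_step, gamma0, beta, tol=1e-4, max_iter=100):
        gamma = gamma0
        for _ in range(max_iter):
            b = beta * gamma               # b re-tied to the current iterate
            nxt = fixed_b_step(gamma, b)   # one step of the underlying map
            if abs(nxt - gamma) < tol:
                return nxt
            gamma = nxt
        return gamma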
Figure 4: Error ratios of the Adaptive-b and NR methods

Now we compare the SA-b and Adaptive-b methods; the comparison is illustrated in Figure 6 for chosen values of m, n and γ0. We observe that both the Adaptive-b and the SA-b methods can be faster than the Newton-Raphson method if the parameters β and α are updated at every iteration. Tables 2 and 3 illustrate this fact for γ0 = 100 and γ0 = 1, respectively. In these tables, α is chosen at each iteration according to the position of the current iterate relative to the root. The rule for choosing an optimal value of α at every iteration is given by formula (18), which characterizes this behaviour. Notice that (18) yields values of α satisfying the convergence condition at every iteration stage; thus this choice ensures convergence and, unlike the earlier methods, no other parameter needs to be selected. Substituting (18) into (16) gives the iteration formula of the Newton-Raphson Improved (NRI) method, written as γk+1 = NRI(γk). For γk close to the root, the NRI update tends to the Newton-Raphson update, and the two coincide at the root itself.

Table 2. Comparison of the SA-b, the Adaptive-b (with α and β updated at every iteration) and the Newton-Raphson methods, for γ0 = 100
α (per iteration): 1, 1, 1, 1.5, 2, 2, 2, 2, 2
SA-b: 100, 50.1007, 25.413, 14.2794, 12.5166, 12.6274, 12.6264
β (per iteration): 1, 1, 1.5, 1.5, 2, 2, 2, 2, 2
Adaptive-b: 100, 50.1031, 25.4574, 16.5224, 12.8684, 12.6314, 12.6265, 12.6264
NR: 100, 66.7338, 44.6398, 30.0966, 20.8052, 15.4203, 13.1021, 12.6435, 12.6265, 12.6264

In terms of computational complexity on a computer, both the NRI and the NR methods require the same number of multiplications and one division per iteration. Although the NRI method needs two more addition operations than the NR method, it achieves a significant reduction in the number of iterations for a fixed number of decimal places of accuracy.
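The columns of Tables 2 and 3 are simply successive iterates, listed until two consecutive values agree to four decimal places. For the methods whose update rules are standard this is easy to regenerate; the small Python helper below (the name tabulate_iterates is ours, and m = 2013, n = 3 are values inferred from the tabulated NR iterates rather than stated in the text) closely reproduces the NR column of Table 2.

    # Hypothetical helper for producing columns like those in Tables 2 and 3:
    # list the iterates of an update rule until two successive values agree
    # to four decimal places.
    def tabulate_iterates(update, gamma0, tol=1e-4, max_iter=50):
        gammas = [gamma0]
        while len(gammas) < max_iter:
            gammas.append(update(gammas[-1]))
            if abs(gammas[-1] - gammas[-2]) < tol:
                break
        return [round(g, 4) for g in gammas]

    m, n = 2013.0, 3   # inferred from the NR column of Table 2
    nr = lambda g: ((n - 1) * g + m / g ** (n - 1)) / n
    print(tabulate_iterates(nr, 100.0))   # closely matches the NR column of Table 2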
Table 3. Comparison of the SA-b, the Adaptive-b (with α and β updated at every iteration) and the Newton-Raphson methods, for γ0 = 1
α (per iteration): 200, 20, 15, 10, 5, 4, 3, 2, 2, 2, 2, 2, 2, 2, 2
SA-b: 1, 11.0100, 11.2764, 11.5611, 11.8792, 12.2768, 12.4926, 12.5941, 12.6265, 12.6264
β (per iteration): 200, 20, 15, 10, 5, 4, 3, 2, 2, 2, 2, 2, 2, 2, 2
Adaptive-b: 1, 10.9604, 11.2363, 11.5304, 11.8584, 12.2674, 12.4889, 12.5930, 12.6265, 12.6264
NR: 1, 671.6667, 447.7793, 298.5229, 199.0228, 132.6988, 88.5040, 59.0883, 39.5844, 26.8178, 18.8115, 14.4372, 12.8441, 12.6301, 12.6265, 12.6264

Figure 5: ERβ(β, 0) against β

Thus, the NRI method is superior to the NR method for the computation of the nth root of m. For example, for one choice of m, n and γ0, the NRI method takes 8 iterations, whereas the Newton-Raphson method takes 61 iterations. Table 4 illustrates the number of iterations i required to achieve four decimal places of accuracy, for different m, n and γ0.

Finally, let us compare the NRI method with Halley's method, which is a third-order method. As shown in Table 4, the NRI method can be faster than Halley's method. In some cases the NRI method is slightly slower than Halley's method, but it needs three fewer multiplication operations per iteration. The formula for computing the nth root of m by Halley's method can be derived in the form

γk+1 = γk ((n − 1)γk^n + (n + 1)m) / ((n + 1)γk^n + (n − 1)m).
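Halley's iteration in the closed form quoted above is straightforward to implement. The Python sketch below (the function name halley_nth_root is ours) is consistent with the Halley column of Table 4: for m = 2014.91, n = 5 and γ0 = 1 the first iterate is about 1.4994, and the sequence settles at about 4.5798.

    # Minimal sketch of Halley's third-order iteration for the nth root of m,
    # obtained by applying Halley's method to f(x) = x**n - m:
    #   x_{k+1} = x_k * ((n - 1) * x_k**n + (n + 1) * m)
    #                 / ((n + 1) * x_k**n + (n - 1) * m)
    def halley_nth_root(m, n, gamma0, tol=1e-4, max_iter=100):
        gamma = gamma0
        for _ in range(max_iter):
            p = gamma ** n
            nxt = gamma * ((n - 1) * p + (n + 1) * m) / ((n + 1) * p + (n - 1) * m)
            if abs(nxt - gamma) < tol:
                return nxt
            gamma = nxt
        return gamma

    # For m = 2014.91, n = 5, gamma_0 = 1: first iterate ~1.4994, limit ~4.5798,
    # in agreement with the Halley column of Table 4.
    print(halley_nth_root(2014.91, 5, 1.0))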
Table 4. Comparison of the NRI, the NR and Halley's methods, for various m, n and γ0

m = 2014.91, n = 5, γ0 = 1:
i: 1, 2, 3, 4, 5, 6, ..., 24, 25
NRI: 1.9975, 3.8468, 4.6845, 4.5822, 4.5798
NR: 403.7820, 323.0256, 258.4205, 206.7364, 165.3891, 132.3113, ..., 4.5799, 4.5798
Halley: 1.4994, 2.2421, 3.2875, 4.3222, 4.5781, 4.5798

m = 2014.91, n = 5, γ0 = 100:
i: 1, 2, 3, ..., 10, 11, 12, 13, 14, ..., 17
NRI: 75.0000, 56.2500, 42.1876, ..., 5.8783, 4.9008, 4.6021, 4.5800, 4.5798
NR: 80.0000, 64.0000, 51.2000, ..., 10.7559, 8.6348, 6.9803, 5.7540, 4.9708, ..., 4.5798
Halley: 44.4445, 29.63, 19.7548, ..., 4.5798

m = 0.2012, n = 4, γ0 = 1:
i: 1, 2, 3, 4
NRI: 0.7505, 0.6750, 0.6698, 0.6697
NR: 0.8003, 0.6984, 0.6715, 0.6697
Halley: 0.7149, 0.67, 0.6697

m = 0.2012, n = 4, γ0 = 0.2012:
i: 1, 2, 3, 4, ..., 10, 11, 12
NRI: 0.3960, 0.6503, 0.6700, 0.6697
NR: 6.3266, 4.7451, 3.5593, 2.6706, ..., 0.6793, 0.6699, 0.6697
Halley: 0.3325, 0.5215, 0.6578, 0.6697

Figure 6: Comparison of the SA-b method with the Adaptive-b method

VI. Conclusions

New iterative methods, namely the Fixed-b method, the Adaptive-b method, the Simplified Adaptive-b method and the Newton-Raphson Improved method, have been studied and analyzed for finding the nth root of a number m. The convergence of these methods has been established, the parameters affecting the rate of convergence have been identified, and the choice of optimal values for these parameters has been discussed. The methods have been compared with the well-known Newton-Raphson method. It is evident from the examples that the NRI method is much faster than the Newton-Raphson method for finding the nth root, and we conclude that the NRI method is a better alternative. Although Halley's method is a third-order method, the numerical examples show that, at times, the Newton-Raphson Improved method can still be better.
References

[1] Kendall E. Atkinson, "An Introduction to Numerical Analysis", Second Edition, John Wiley & Sons, 1988.
[2] Theodore Eisenberg, "On an unknown algorithm for computing square roots", International Journal of Mathematical Education in Science and Technology, 34(1), 2003, pp. 153-158.
[3] Haibin Zhang, Lizhen Zhang and Sen Zhang, "Original Halley Method and its Improvement with Automatic Differentiation", Sixth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), IEEE Computer Society, Vol. 4, 2009, pp. 351-355.
[4] T. R. Scavo and J. B. Thoo, "On the Geometry of Halley's Method", American Mathematical Monthly, 102, 1995, pp. 417-426.