- 1. ITERATIVE METHODS
- 2. JACOBI METHOD
- 3. JACOBI METHOD
  Suppose we are trying to solve a system of linear equations Mx = b. If we assume that the diagonal entries are non-zero (true if the matrix M is positive definite), then we may rewrite this equation as:
  Dx + Moff x = b
  where D is the diagonal matrix containing the diagonal entries of M, and Moff contains the off-diagonal entries of M. Because all the entries of the diagonal matrix are non-zero, its inverse is simply the diagonal matrix whose diagonal entries are the reciprocals of the corresponding entries of D.
- 4. Thus, we may bring the off-diagonal entries to the right-hand side and multiply by D^-1:
  x = D^-1 (b - Moff x)
  You will recall from the class on iteration that we now have an equation of the form x = f(x), except that in this case the argument is a vector. Thus, one method of solving such a problem is to start with an initial vector x0 and iterate.
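The fixed-point iteration above can be sketched in Python. This is a minimal illustration, not code from the slides; the example system at the bottom is made up (the slides' own systems appear only as images).

```python
import numpy as np

def jacobi(M, b, x0, tol=1e-8, max_iter=500):
    """Jacobi iteration: x_{k+1} = D^-1 (b - Moff x_k)."""
    D = np.diag(M)                   # diagonal entries of M (must be non-zero)
    Moff = M - np.diagflat(D)        # off-diagonal part of M
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (b - Moff @ x) / D   # multiplying by D^-1 = elementwise division
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical diagonally dominant system, not the one from the slides:
M = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])
print(jacobi(M, b, [0.0, 0.0]))   # approaches the exact solution (0.1, 0.6)
```

Because the diagonal of D is taken elementwise, the multiplication by D^-1 is just a vector division, which is what makes each Jacobi sweep so cheap.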
- 5. EXAMPLE
  Use the Jacobi method to approximate the solution of the following system of linear equations:
  With initial values:
- 6. Continue the iterations until two successive approximations are identical when rounded to three significant digits.
  To begin, write the system in the form:
- 7. Take this as a convenient initial approximation. The first approximation is then:
- 8. Continuing this procedure, you obtain the sequence of approximations shown in the table.
- 9. Because the last two columns in the table are identical, you can conclude that, to three significant digits, the solution is:
  For the system of linear equations given in the example, the Jacobi method is said to converge. That is, repeated iterations succeed in producing an approximation that is correct to three significant digits. As is generally true for iterative methods, greater accuracy would require more iterations.
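The example's stopping rule (two successive approximations identical when rounded to three significant digits) can be expressed directly in code. A sketch, again using a made-up system since the slides' system is only an image:

```python
import numpy as np

def round3(v):
    # Round each entry to three significant digits.
    return np.array([float(f"{t:.3g}") for t in v])

def jacobi_to_3_digits(M, b, x0, max_iter=200):
    """Jacobi iteration, stopped when two successive approximations
    are identical after rounding to three significant digits."""
    D = np.diag(M)
    Moff = M - np.diagflat(D)
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (b - Moff @ x) / D
        if np.array_equal(round3(x_new), round3(x)):
            return x_new
        x = x_new
    return x

M = np.array([[4.0, 1.0], [2.0, 3.0]])   # hypothetical example system
b = np.array([1.0, 2.0])
print(jacobi_to_3_digits(M, b, [0.0, 0.0]))
```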
- 10. GAUSS-SEIDEL METHOD
- 11. GAUSS-SEIDEL METHOD
  The Gauss-Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite.
- 12. First, solve each equation for one of the unknowns, in order.
  Then assume an initial value for [X(0)].
  Remember to always use the most recent value of each xi: values computed earlier in the current iteration are used immediately in the remaining calculations.
- 13. Calculation of the approximate relative absolute error:
  The answer is accepted when the maximum approximate relative absolute error is less than the specified tolerance for all unknowns.
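The update order and the percentage-based stopping criterion can be sketched together. This is an illustrative implementation, assuming a made-up test system; the tolerance `tol_pct` is expressed in percent, matching the error values quoted on the following slides.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol_pct=1e-6, max_iter=200):
    """Gauss-Seidel iteration. Each unknown is updated in order, always
    using the most recent values; the loop stops when the maximum
    approximate relative absolute error (in percent) of every unknown
    falls below tol_pct."""
    A = np.asarray(A, dtype=float)
    x = np.array(x0, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Sum uses already-updated x[:i] and not-yet-updated x[i+1:].
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        # Approximate relative absolute error, as a percentage.
        err = np.max(np.abs((x - x_old) / x)) * 100.0
        if err < tol_pct:
            return x
    return x

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # hypothetical system
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b, [0.0, 0.0]))
```

Note the contrast with Jacobi: `x[:i]` already holds this iteration's values, so each component update immediately benefits from the previous ones.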
- 14. EXAMPLE
  Solve the following system of equations:
  Matrix of coefficients:
  With initial values:
- 15. Let's check whether the matrix is diagonally dominant.
  All the inequalities are satisfied, so the solution should converge using Gauss-Seidel.
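The dominance check on this slide (each diagonal entry larger in magnitude than the sum of the other entries in its row) is easy to automate. A small sketch, with hypothetical matrices for illustration:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum of |a_ij| for j != i, in every row."""
    A = np.abs(np.asarray(A, dtype=float))
    off_diag_sums = A.sum(axis=1) - np.diag(A)
    return bool(np.all(np.diag(A) > off_diag_sums))

print(is_strictly_diagonally_dominant([[4.0, 1.0], [2.0, 3.0]]))  # True
print(is_strictly_diagonally_dominant([[1.0, 2.0], [2.0, 1.0]]))  # False
```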
- 16. Rewriting each equation:
  With initial values:
- 17. Iteration 1
  The approximate relative absolute error:
  The approximate maximum relative absolute error after the first iteration is 100%.
- 18. Substituting the above values into the equations:
- 19. Iteration 2
  The approximate relative absolute error:
  The approximate maximum relative absolute error after the second iteration is 24.7%.
- 20. Substituting the above values into the equations:
- 21. Iteration 3
  The approximate relative absolute error:
  The approximate maximum relative absolute error after the third iteration is 8.9%.
- 22. Substituting the above values into the equations:
- 23. Iteration 4
  The approximate relative absolute error:
  The approximate maximum relative absolute error after the fourth iteration is 0.06%.
- 24. The resulting solution is:
  The exact solution is:
- 25. GAUSS-SEIDEL RELAXATION METHOD
- 26. GAUSS-SEIDEL RELAXATION
  The Gauss-Seidel method is a technique for solving the equations of the linear system Ax = b one at a time, in sequence, using previously computed results as soon as they are available.
- 27. There are two important characteristics of the Gauss-Seidel method that should be noted. Firstly, the computations appear to be serial: since each component of the new iterate depends upon all previously computed components, the updates cannot be done simultaneously as in the Jacobi method. Secondly, the new iterate depends upon the order in which the equations are examined. If this ordering is changed, the components of the new iterates (and not just their order) will also change.
- 28. In terms of matrices, the Gauss-Seidel method can be expressed as
  (D - L) x(k+1) = U x(k) + b,
  where the matrices D, -L, and -U represent the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively.
  The Gauss-Seidel method is applicable to strictly diagonally dominant or symmetric positive definite matrices.
- 29. IMPROVING THE CONVERGENCE USING RELAXATION (SOR)
  Relaxation is a slight modification of Gauss-Seidel that can improve convergence. After each new value of x is estimated, that value is replaced by a weighted average of the results of the previous and current iterations:
  xi(new) = w * xi(new) + (1 - w) * xi(old),
  where w is a weighting factor that takes a value between 0 and 2.
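The weighted-average modification changes only one line of the Gauss-Seidel sweep. A sketch, with a hypothetical example system and an arbitrary choice w = 1.1 (the slides' own w value is shown only as an image):

```python
import numpy as np

def sor(A, b, x0, w=1.1, tol=1e-10, max_iter=500):
    """Gauss-Seidel with relaxation (SOR): each Gauss-Seidel value is
    blended with the previous iterate using the weighting factor w,
    where 0 < w < 2."""
    A = np.asarray(A, dtype=float)
    x = np.array(x0, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            gs = (b[i] - s) / A[i, i]          # plain Gauss-Seidel value
            x[i] = w * gs + (1.0 - w) * x[i]   # weighted average with old value
        if np.max(np.abs(x - x_old)) < tol:
            return x
    return x

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # hypothetical system
b = np.array([1.0, 2.0])
print(sor(A, b, [0.0, 0.0], w=1.1))
```

Setting w = 1 recovers plain Gauss-Seidel, which makes the "slight modification" in the slide text concrete.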
- 30. Example.
  Figure 5. The effect of freezing the boundary on several levels of a surface.
- 31. EXAMPLE
  Solve the following system of equations using SOR relaxation:
  With initial values:
  With:
- 32. Rewriting each equation:
  With initial values X1:
- 33. Iteration 1
- 34. Iteration 2
- 35. Carrying out the remaining iterations, we obtain:
- 36. CONVERGENCE OF ITERATIVE METHODS
  If A is symmetric and positive definite, the Gauss-Seidel method converges.
  If A is symmetric and the matrix of the form shown is also positive definite, the Jacobi method converges.
- 37. If A is symmetric and positive definite, the relaxation method converges if and only if 0 < w < 2.
  If w < 1 the method is called under-relaxation; if w > 1, over-relaxation.
  If A is symmetric, positive definite, and tridiagonal, the optimal value of w for the convergence of the relaxation method is
  w = 2 / (1 + sqrt(1 - pJ^2)),
  where pJ is the spectral radius of the Jacobi iteration matrix.
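The optimal-w formula can be evaluated numerically from pJ. A sketch, assuming the formula above and using a hypothetical symmetric positive definite tridiagonal matrix as the example:

```python
import numpy as np

def optimal_w(A):
    """Optimal relaxation factor for a symmetric, positive definite,
    tridiagonal matrix A: w = 2 / (1 + sqrt(1 - pJ**2)), where pJ is
    the spectral radius of the Jacobi iteration matrix."""
    D = np.diag(np.diag(A))
    J = np.linalg.inv(D) @ (D - A)         # Jacobi iteration matrix
    pJ = max(abs(np.linalg.eigvals(J)))    # spectral radius
    return 2.0 / (1.0 + np.sqrt(1.0 - pJ ** 2))

# Hypothetical SPD tridiagonal matrix; here pJ = cos(pi/4), so
# the optimal w is about 1.17.
A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
print(optimal_w(A))
```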
- 38. Bibliography
  http://www.ana.iusiani.ulpgc.es/metodos_numericos/document/apuntes/Parte_4.pdf
