Direct Methods for the Solution of Systems of Linear Equations

Elkin Santafé, an engineer from the Industrial University of Santander, gives us a short summary of direct methods for the solution of systems of equations.



  1. Direct methods for the solution of systems of linear equations
  2. Definition of the problem

     A x = b

     where A is the coefficient matrix, x the vector of unknowns, and b the vector of independent terms.
  3. If b = 0, the system is homogeneous.
  4. If b ≠ 0, the system is non-homogeneous. The augmented form of the matrix is [A | b].
  5. Special systems
  6. Special systems (continued)
  7. Existence and uniqueness
  8. Existence and uniqueness: ill-conditioned systems and singular systems
  9. Types of direct methods
     • Gauss
     • Gauss with pivoting
     • Gauss-Jordan
     • Thomas
  10. Gaussian elimination

     In linear algebra, Gaussian elimination is an algorithm for solving systems of linear equations, finding the rank of a matrix, and calculating the inverse of an invertible square matrix. It is named after the German mathematician and scientist Carl Friedrich Gauss.

     Elementary row operations are used to reduce a matrix to row echelon form. Gauss-Jordan elimination, an extension of this algorithm, reduces the matrix further to reduced row echelon form. Gaussian elimination alone is sufficient for many applications.
  11. Algorithm overview

     The process of Gaussian elimination has two parts. The first part (forward elimination) reduces a given system to triangular or echelon form, or produces a degenerate equation, indicating that the system has no solution. This is accomplished through elementary row operations. The second part uses back substitution to find the solution of the resulting triangular system.

     Stated equivalently for matrices, the first part reduces a matrix to row echelon form using elementary row operations, while the second reduces it to reduced row echelon form (row canonical form).

     Another point of view, which turns out to be very useful for analyzing the algorithm, is that Gaussian elimination computes a matrix decomposition. The three elementary row operations used in Gaussian elimination (multiplying rows, switching rows, and adding multiples of rows to other rows) amount to multiplying the original matrix by invertible matrices from the left. The first part of the algorithm computes an LU decomposition, while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row echelon matrix.
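The two-phase procedure described above (forward elimination, then back substitution) can be sketched in Python. This is a minimal illustration, not the slide author's code; the function name and the example system are my own, and no pivoting is done here (that is covered later in the deck), so all pivots are assumed nonzero.

```python
def gauss_solve(A, b):
    """Solve Ax = b by naive Gaussian elimination (no pivoting).

    A is a list of n row-lists, b a list of n numbers.
    Assumes every pivot A[k][k] encountered is nonzero.
    """
    n = len(A)
    # Work on copies so the caller's data is left untouched.
    A = [row[:] for row in A]
    b = b[:]

    # Part 1 -- forward elimination: reduce A to upper-triangular form.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier for row i
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]

    # Part 2 -- back substitution on the triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

For example, `gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])` solves 2x + y = 3, x + 3y = 5, giving x = 0.8, y = 1.4.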
  12. Other applications

     Finding the inverse of a matrix

     Suppose A is a matrix and you need to calculate its inverse. The identity matrix is augmented to the right of A, forming the block matrix B = [A, I]. Through application of elementary row operations and the Gaussian elimination algorithm, the left block of B can be reduced to the identity matrix I, which leaves A⁻¹ in the right block of B.

     If the algorithm is unable to reduce A to triangular form, then A is not invertible.
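The augment-and-reduce idea above can be sketched as follows. This is a minimal illustration under my own naming; the singularity threshold of 1e-12 is an arbitrary choice for the sketch, not something prescribed by the slides.

```python
def gauss_jordan_inverse(A):
    """Invert A by augmenting it with the identity and running
    Gauss-Jordan elimination until the left block becomes I;
    the right block then holds A^{-1}."""
    n = len(A)
    # Build the block matrix B = [A | I].
    B = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]

    for k in range(n):
        # Partial pivoting: bring up the largest entry in column k.
        p = max(range(k, n), key=lambda i: abs(B[i][k]))
        if abs(B[p][k]) < 1e-12:            # no usable pivot left
            raise ValueError("matrix is singular")
        B[k], B[p] = B[p], B[k]
        # Scale the pivot row so the pivot becomes 1.
        piv = B[k][k]
        B[k] = [v / piv for v in B[k]]
        # Eliminate column k from every other row.
        for i in range(n):
            if i != k:
                m = B[i][k]
                B[i] = [v - m * w for v, w in zip(B[i], B[k])]

    # Return the right block, which now holds A^{-1}.
    return [row[n:] for row in B]
```

For instance, the inverse of [[4, 7], [2, 6]] is [[0.6, -0.7], [-0.2, 0.4]].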
  13. General algorithm to compute ranks and bases

     The Gaussian elimination algorithm can be applied to any matrix A. If we get "stuck" in a given column, we move on to the next column. In this way, for example, some matrices can be transformed to a matrix in reduced row echelon form like the one shown on the slide (the *'s are arbitrary entries). This echelon matrix T contains a wealth of information about A: the rank of A is 5, since there are 5 non-zero rows in T; the vector space spanned by the columns of A has a basis consisting of the first, third, fourth, seventh and ninth columns of A (the columns of the ones in T); and the *'s tell you how the other columns of A can be written as linear combinations of the basis columns.
  14. Gauss with pivoting

     Pivoting avoids the problem of division by zero, or by a number close to zero. There are two techniques:

     • Keeping the pivot position (swapping rows as needed).
     • Ordering the system beforehand.
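A sketch of the first technique, Gaussian elimination with partial pivoting: before eliminating each column, the row with the largest entry in that column is swapped into the pivot position, so a zero (or tiny) pivot is never divided by. The function name and example are my own illustration.

```python
def gauss_pivot_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Pick the row with the largest |entry| in column k as pivot row.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:                          # swap rows k and p
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Note that this handles a system such as 0x + 2y = 4, 3x + y = 5, whose leading pivot is zero and which would break the naive method.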
  15. Gauss-Jordan elimination through pivoting

     A system of linear equations can be placed into matrix form. Each equation becomes a row and each variable becomes a column. An additional column is added for the right-hand side. A system of linear equations and the resulting matrix are shown.

     The system of linear equations ...

     3x + 2y - 4z = 3
     2x + 3y + 3z = 15
     5x - 3y + z = 14

     becomes the augmented matrix ...

     [ 3   2  -4 |  3 ]
     [ 2   3   3 | 15 ]
     [ 5  -3   1 | 14 ]
  16. What is pivoting?

     The objective of pivoting is to make an element above or below a leading one into a zero. The "pivot" or "pivot element" is an element on the left-hand side of a matrix that you want the elements above and below to be zero.

     Normally, this element is a one. If you can find a book that mentions pivoting, it will usually tell you that you must pivot on a one. If you restrict yourself to the three elementary row operations, then this is a true statement. However, if you are willing to combine the second and third elementary row operations, you come up with another row operation (not elementary, but still valid): you can multiply a row by a non-zero constant and add it to a non-zero multiple of another row, replacing that row.

     So what? If you are required to pivot on a one, then you must sometimes use the second elementary row operation and divide a row through by the leading element to make it into a one. Division leads to fractions. While fractions are your friends, you're less likely to make a mistake if you don't use them.

     What's the catch? If you don't pivot on a one, you are likely to encounter larger numbers. Most people are willing to work with the larger numbers to avoid the fractions.
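As a worked sketch, Gauss-Jordan elimination can be applied to the augmented matrix of the system from slide 15 (3x + 2y − 4z = 3, 2x + 3y + 3z = 15, 5x − 3y + z = 14). The function name is my own; unlike the fraction-avoiding hand technique discussed above, this version simply pivots on ones using floating-point division, which is the usual choice in code.

```python
def gauss_jordan_solve(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon
    form with partial pivoting, then read off the solution."""
    n = len(aug)
    M = [row[:] for row in aug]
    for k in range(n):
        # Swap up the row with the largest entry in column k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # Make the pivot a one.
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        # Zero out column k in every other row (above and below).
        for i in range(n):
            if i != k:
                m = M[i][k]
                M[i] = [v - m * w for v, w in zip(M[i], M[k])]
    return [M[i][n] for i in range(n)]

# Slide 15's system: 3x + 2y - 4z = 3, 2x + 3y + 3z = 15, 5x - 3y + z = 14
aug = [[3.0,  2.0, -4.0,  3.0],
       [2.0,  3.0,  3.0, 15.0],
       [5.0, -3.0,  1.0, 14.0]]
# gauss_jordan_solve(aug) gives x = 3, y = 1, z = 2 (up to rounding).
```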
  17. Thomas' method

     Advantages:

     • Machine memory is reduced by not having to store the 0's: only the vectors a, b, c are stored, using 3n locations instead of n × n (advantageous for n ≥ 50).
     • It doesn't require pivoting.
     • It reduces the number of operations.
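A minimal sketch of the Thomas algorithm for a tridiagonal system, storing only the three bands a, b, c mentioned above. The function name and the indexing convention (a[0] and c[n−1] unused) are my own choices for this illustration.

```python
def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.

    a: sub-diagonal   (length n, a[0] unused)
    b: main diagonal  (length n)
    c: super-diagonal (length n, c[n-1] unused)
    d: right-hand side (length n)
    Only the three bands are stored -- 3n locations instead of n*n.
    """
    n = len(b)
    bp, cp, dp = b[:], c[:], d[:]
    # Forward sweep: eliminate the sub-diagonal.
    for i in range(1, n):
        m = a[i] / bp[i - 1]
        bp[i] -= m * cp[i - 1]
        dp[i] -= m * dp[i - 1]
    # Back substitution.
    x = [0.0] * n
    x[n - 1] = dp[n - 1] / bp[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (dp[i] - cp[i] * x[i + 1]) / bp[i]
    return x
```

For example, the tridiagonal system with diagonals a = [0, 1, 1], b = [2, 2, 2], c = [1, 1, 0] and d = [4, 8, 8] has solution x = (1, 2, 3).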
  18. Jacobi method

     Given a square system of n linear equations

     A x = b

     where A is the coefficient matrix, x the vector of unknowns and b the vector of independent terms.
  19. Then A can be decomposed into a diagonal component D and the remainder R:

     A = D + R

     The system of linear equations may be rewritten as

     D x = b - R x

     and finally:

     x = D⁻¹ (b - R x)

     The Jacobi method is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side.
  20. Analytically, this may be written as

     x^(k+1) = D⁻¹ (b - R x^(k))

     The element-based formula is thus:

     x_i^(k+1) = (b_i - Σ_{j≠i} a_ij x_j^(k)) / a_ii,   i = 1, ..., n

     Note that the computation of x_i^(k+1) requires each element in x^(k) except itself. Unlike in the Gauss-Seidel method, we can't overwrite x_i^(k) with x_i^(k+1), as that value will be needed by the rest of the computation. This is the most meaningful difference between the Jacobi and Gauss-Seidel methods, and is the reason why the former can be implemented as a parallel algorithm, unlike the latter. The minimum amount of storage is two vectors of size n.
  21. Algorithm

     choose an initial guess x(0) to the solution
     while convergence not reached do
         for i := 1 step until n do
             σ := 0
             for j := 1 step until n do
                 if j ≠ i then
                     σ := σ + a[i][j] · x[j]
                 end if
             end (j-loop)
             x_new[i] := (b[i] - σ) / a[i][i]
         end (i-loop)
         x := x_new
         check if convergence is reached
     end (while convergence condition not reached loop)
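The pseudocode above translates directly to Python. This is a sketch under my own naming; the tolerance, iteration cap and max-norm convergence test are illustrative choices, not part of the slides. Note that `x_new` is built entirely from the old vector `x`, which is exactly why the n updates are independent and parallelizable, and why two vectors of size n are needed.

```python
def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration for Ax = b (converges e.g. when A is
    strictly diagonally dominant)."""
    n = len(A)
    x = x0[:] if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_new = [0.0] * n
        for i in range(n):
            # sigma = sum of off-diagonal terms, using the OLD x only.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        # Convergence check in the max norm.
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x
```

For the diagonally dominant system 4x + y = 6, x + 3y = 7, the iteration converges to x = 1, y = 2.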
  22. Gauss-Seidel method

     The Gauss-Seidel method (called Seidel's method by Jeffreys and Jeffreys 1988, p. 305) is a technique for solving the equations of a linear system one at a time, in sequence, using previously computed results as soon as they are available.
  23. Two important characteristics of the Gauss-Seidel method should be noted. Firstly, the computations appear to be serial: since each component of the new iterate depends upon all previously computed components, the updates cannot be done simultaneously as in the Jacobi method. Secondly, the new iterate depends upon the order in which the equations are examined; if this ordering is changed, the components of the new iterates (and not just their order) will also change.
  24. In terms of matrices, the definition of the Gauss-Seidel method can be expressed as

     x^(k+1) = (D - L)⁻¹ (U x^(k) + b)

     where the matrices D, -L, and -U represent the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively.

     The Gauss-Seidel method is applicable to strictly diagonally dominant, or symmetric positive definite, matrices A.
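In element form this differs from Jacobi in exactly one place: each updated component is written back into x immediately and used by the remaining updates in the same sweep, so only one vector of size n is stored. A minimal sketch, with my own names and illustrative tolerance/iteration-cap choices:

```python
def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration for Ax = b (applicable e.g. to
    strictly diagonally dominant matrices A)."""
    n = len(A)
    x = x0[:] if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            # Uses updated x[0..i-1] and old x[i+1..n-1]: the sweep
            # consumes each new component as soon as it is available.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new                      # overwrite in place
        if diff < tol:
            break
    return x
```

On the same test system as before (4x + y = 6, x + 3y = 7) it converges to x = 1, y = 2, typically in fewer sweeps than Jacobi.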
  25. Bibliography

     • Métodos Numéricos en Ingeniería de Petróleos [Numerical Methods in Petroleum Engineering]. Elkin Rodolfo Santafé Rangel, Ingeniero de Petróleos, Bucaramanga, Colombia, © 2008.
     • http://en.wikipedia.org/wiki/Gaussian_elimination
     • http://people.richland.edu/james/lecture/m116/matrices/pivot.html
     • http://en.wikipedia.org/wiki/Jacobi_method
  26. Images from

     • http://3.bp.blogspot.com/_OW7IO06BPD0/Sww0x0vlzOI/AAAAAAAABU0/9wfL8l2sZSk/s1600/002-abstract-skydome-matrix.png
     • http://mata.gia.rwth-aachen.de/Vortraege/Sabrina_Mueller/Geschichte_der_Zahlen/Bilder/gauss.jpg
