# Gauss Elimination


### Gauss Elimination

1. Methods for solving small numbers of linear equations:
• Graphical method
• Cramer's rule
• The elimination of unknowns
2. A graphical solution is obtainable for two equations by plotting them on Cartesian coordinates, with one axis corresponding to $x_1$ and the other to $x_2$:

$$a_{11}x_1 + a_{12}x_2 = b_1$$
$$a_{21}x_1 + a_{22}x_2 = b_2$$

Both equations can be solved for $x_2$:

$$x_2 = -\frac{a_{11}}{a_{12}}x_1 + \frac{b_1}{a_{12}}, \qquad x_2 = -\frac{a_{21}}{a_{22}}x_1 + \frac{b_2}{a_{22}}$$

The equations are now in the form of straight lines, $x_2 = (\text{slope})\,x_1 + \text{intercept}$. These lines can be graphed on Cartesian coordinates with $x_2$ as the ordinate and $x_1$ as the abscissa. The values of $x_1$ and $x_2$ at the intersection of the lines represent the solution.
3. Example: Use the graphical method to solve the given pair of equations. Solution: Let $x_1$ be the abscissa, and solve Eq. 1 and Eq. 2 for $x_2$.
4. [Figure: graphical solution of a set of two simultaneous linear algebraic equations, with $x_1$ on the abscissa and $x_2$ on the ordinate. The intersection of the lines represents the solution: $x_1 = 4$, $x_2 = 3$.]
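The line-intersection idea above can be sketched in a few lines of Python. The coefficients below are illustrative, chosen so the intersection lands at $x_1 = 4$, $x_2 = 3$ (the solution shown on the graph); they are not taken from the slides' equations.

```python
# Solve a 2x2 system by intersecting its two lines, as described above.
# NOTE: illustrative coefficients, not the slides' own (chosen so the
# intersection is x1 = 4, x2 = 3, matching the plotted solution).
a11, a12, b1 = 3.0, 2.0, 18.0    #  3*x1 + 2*x2 = 18
a21, a22, b2 = -1.0, 2.0, 2.0    # -1*x1 + 2*x2 = 2

# Each line written as x2 = slope*x1 + intercept
slope1, icept1 = -a11 / a12, b1 / a12
slope2, icept2 = -a21 / a22, b2 / a22

# Intersection: slope1*x1 + icept1 = slope2*x1 + icept2
x1 = (icept2 - icept1) / (slope1 - slope2)
x2 = slope1 * x1 + icept1
print(x1, x2)  # 4.0 3.0
```

Graphing is only practical for two (or at most three) unknowns, which is why the algebraic methods on the following slides are needed.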
5. Cramer's rule is another solution technique that is best suited to small numbers of equations. It states that each unknown in a system of linear algebraic equations may be expressed as a fraction of two determinants: the denominator is $D$, and the numerator is obtained from $D$ by replacing the column of coefficients of the unknown in question with the constants $b_1, b_2, \dots, b_n$. For example, $x_1$ would be computed as

$$x_1 = \frac{\begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix}}{D}$$
6. Example: Use Cramer's rule to solve

$$x - 2z = 3, \qquad -y + 3z = 1, \qquad 2x + 5z = 0$$

$$D = \begin{vmatrix} 1 & 0 & -2 \\ 0 & -1 & 3 \\ 2 & 0 & 5 \end{vmatrix}, \quad D_x = \begin{vmatrix} 3 & 0 & -2 \\ 1 & -1 & 3 \\ 0 & 0 & 5 \end{vmatrix}, \quad D_y = \begin{vmatrix} 1 & 3 & -2 \\ 0 & 1 & 3 \\ 2 & 0 & 5 \end{vmatrix}, \quad D_z = \begin{vmatrix} 1 & 0 & 3 \\ 0 & -1 & 1 \\ 2 & 0 & 0 \end{vmatrix}$$

The determinants evaluate to $\det D = -9$, $\det D_x = -15$, $\det D_y = 27$, and $\det D_z = 6$. Therefore

$$x = \frac{D_x}{D} = \frac{5}{3}, \qquad y = \frac{D_y}{D} = -3, \qquad z = \frac{D_z}{D} = -\frac{2}{3}$$
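Cramer's rule can be sketched directly in Python. The matrices below are reconstructed from the determinant values stated on the slide ($\det D = -9$, $\det D_x = -15$, $\det D_y = 27$, $\det D_z = 6$); the 3×3 determinant helper is written out by cofactor expansion so the example is self-contained.

```python
def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Coefficient matrix and right-hand side reconstructed from the slide
A = [[1,  0, -2],
     [0, -1,  3],
     [2,  0,  5]]
b = [3, 1, 0]

D = det3(A)                       # denominator determinant: -9
numerators = []
for j in range(3):
    Aj = [row[:] for row in A]
    for i in range(3):
        Aj[i][j] = b[i]           # replace column j with the constants b
    numerators.append(det3(Aj))   # Dx = -15, Dy = 27, Dz = 6
x, y, z = (num / D for num in numerators)
print(x, y, z)  # 1.666..., -3.0, -0.666...
```

Cramer's rule requires evaluating $n + 1$ determinants, which grows prohibitively expensive for large $n$; that is why elimination methods dominate in practice.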
7. The elimination of unknowns by combining equations is an algebraic approach that can be illustrated for a set of two equations:

$$a_{11}x_1 + a_{12}x_2 = b_1$$
$$a_{21}x_1 + a_{22}x_2 = b_2$$

The basic strategy is to multiply the equations by constants so that one of the unknowns is eliminated when the two equations are combined. The result is a single equation that can be solved for the remaining unknown, and this value can then be substituted into either of the original equations to compute the other variable. For example, multiplying the first equation by $a_{21}$ and the second by $a_{11}$,

$$a_{21}(a_{11}x_1 + a_{12}x_2) = b_1 a_{21}$$
$$a_{11}(a_{21}x_1 + a_{22}x_2) = b_2 a_{11}$$

so that subtracting one from the other eliminates $x_1$.
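The two-equation elimination above reduces to closed-form expressions, sketched here with hypothetical coefficients chosen only for illustration (solution $x_1 = 4$, $x_2 = 3$).

```python
# Eliminate x1 by cross-multiplying the two equations, then back-substitute.
# NOTE: hypothetical coefficients for illustration, not from the slides.
a11, a12, b1 = 3.0, 2.0, 18.0
a21, a22, b2 = -1.0, 2.0, 2.0

# a21*eq1 - a11*eq2 removes the x1 terms, leaving one equation in x2:
x2 = (a21 * b1 - a11 * b2) / (a21 * a12 - a11 * a22)
# Substitute x2 back into the first original equation:
x1 = (b1 - a12 * x2) / a11
print(x1, x2)  # 4.0 3.0
```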
8. This method of solving equations is called naïve Gauss elimination because it does not avoid division by zero. The technique for n equations consists of two phases:
• Elimination of unknowns (forward elimination)
• Solution through back substitution
9. The first phase is designed to reduce the set of equations

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \dots + a_{2n}x_n = b_2$$
$$\vdots$$

to an upper triangular system. Multiply Eq. 1 by $a_{21}/a_{11}$ to give

$$a_{21}x_1 + \frac{a_{21}}{a_{11}}a_{12}x_2 + \dots + \frac{a_{21}}{a_{11}}a_{1n}x_n = \frac{a_{21}}{a_{11}}b_1$$

Now this equation can be subtracted from Eq. 2 to give

$$\left(a_{22} - \frac{a_{21}}{a_{11}}a_{12}\right)x_2 + \dots + \left(a_{2n} - \frac{a_{21}}{a_{11}}a_{1n}\right)x_n = b_2 - \frac{a_{21}}{a_{11}}b_1$$

or

$$a'_{22}x_2 + \dots + a'_{2n}x_n = b'_2$$

where the prime indicates that the elements have been changed from their original values. The procedure is then repeated for the remaining equations: Eq. 1 can be multiplied by $a_{31}/a_{11}$ and the result subtracted from the third equation, and so on. Once the system is upper triangular, the equations are solved by back substitution, starting from the last equation, which has only one unknown:

$$x_n = \frac{b'_n}{a'_{nn}}, \qquad x_i = \frac{b'_i - \sum_{j=i+1}^{n} a'_{ij}x_j}{a'_{ii}}, \quad i = n-1, \dots, 1$$
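The two phases above can be sketched as a short Python function. This is a minimal sketch of the naïve algorithm as described: no pivoting, so it fails if any pivot $a_{kk}$ is zero. The example system is illustrative, not taken from the slides.

```python
def naive_gauss(A, b):
    """Naive Gauss elimination: forward elimination to an upper
    triangular system, then back substitution. No pivoting, so a
    zero pivot raises ZeroDivisionError."""
    n = len(A)
    A = [row[:] for row in A]    # work on copies
    b = b[:]
    # Phase 1: forward elimination
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]   # e.g. a21/a11 on the first step
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Phase 2: back substitution, starting from the last equation
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Illustrative system (not from the slides); solution is (1, 2, 3)
print(naive_gauss([[2, 1, -1], [1, 3, 2], [1, -1, 2]], [1, 13, 5]))
```

The inner loops mirror the derivation above: `factor` is the multiplier $a_{ik}/a_{kk}$, and subtracting `factor` times the pivot row zeros out column $k$ below the diagonal.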
10. Example: Use naïve Gauss elimination to solve the given system, working in matrix form. First step: divide Row 1 by 20 and then multiply it by -3, that is, multiply Row 1 by -3/20. Subtract the result from Row 2.
11. This gives the first set of resulting equations. Next, divide Row 1 by 20 and then multiply it by 5, that is, multiply Row 1 by 5/20. Subtract the result from Row 3 to get the next set of resulting equations.
12. Second step: For the second step of forward elimination, use Row 2 as the pivot equation and eliminate the Row 3, Column 2 element, producing the upper triangular system. Back substitution: We can now solve the equations by back substitution. The third equation gives $x_3$ directly; substituting the value of $x_3$ into the second equation gives $x_2$.
13. Substituting the values of $x_2$ and $x_3$ into the first equation gives $x_1$, and hence the complete solution.
14. Division by zero: It is possible for division by zero to occur at the beginning of a step of forward elimination. For example, a system whose first equation has a zero coefficient for $x_1$ will cause division by zero in the first step of forward elimination, as is evident when the equations are written in matrix form: the first pivot element is zero.
15. Another example: Here there is no division by zero in the first step of forward elimination; the pivot element, the coefficient of $x_1$ in the first equation, is 5, a non-zero number. However, at the end of the first step of forward elimination, the coefficient of $x_2$ in Equation 2, which would be used as the pivot element at the beginning of the second step, becomes zero and hence creates the division-by-zero problem. So it is important to recognize that division by zero can occur at the beginning of any step of forward elimination. Round-off error: The naïve Gauss elimination method is prone to round-off errors. This is true when there are large numbers of equations, as errors propagate. Also, subtraction of nearly equal numbers from each other may create large errors.
16. Round-off errors were large when five significant digits were used as opposed to six. One way of decreasing the round-off error is to use more significant digits, that is, double or quad precision for representing the numbers. However, this does not avoid the possible division-by-zero errors of the naïve Gauss elimination method. To avoid division by zero as well as to reduce (not eliminate) round-off error, Gaussian elimination with partial pivoting is the method of choice.
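Partial pivoting changes only one thing in the naïve algorithm: before each elimination step, the row with the largest absolute value in the pivot column is swapped into the pivot position. A sketch, with an illustrative test system whose first pivot is zero (so naïve elimination would fail on it):

```python
def gauss_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting: before eliminating
    column k, swap in the row whose entry in column k has the largest
    absolute value. This avoids zero pivots and reduces (but does not
    eliminate) round-off error."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Partial pivoting: pick the largest |A[i][k]| for i >= k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution on the upper triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Illustrative system with a zero first pivot; solution is (1, 1, 3)
print(gauss_partial_pivot([[0, 2, 1], [1, 1, 1], [2, 1, -1]], [5, 5, 0]))
```

Because the pivot is always the largest available entry in magnitude, every multiplier `factor` satisfies |factor| ≤ 1, which is what keeps the error growth in check.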
17. Gauss-Jordan elimination is a variant of Gaussian elimination. Again, we transform the coefficient matrix into another matrix that is much easier to solve, and the system represented by the new augmented matrix has the same solution set as the original system of linear equations. In Gauss-Jordan elimination, the goal is to transform the coefficient matrix into a diagonal matrix, and the zeros are introduced into the matrix one column at a time.
Gauss-Jordan elimination steps:
• Write the augmented matrix [A|b] for the system of linear equations.
• Use elementary row operations on [A|b] to transform A into diagonal form. If a zero is located on the diagonal, switch rows until a nonzero entry is in that place. If you are unable to do so, stop; the system has either infinitely many solutions or none.
• Make each diagonal element equal to one by dividing the diagonal element and the right-hand-side element in each row by the diagonal element in that row.
18. Example: In Gauss-Jordan elimination we want to introduce zeros both below and above the diagonal. 1. Write the augmented matrix for the system of linear equations. As before, we use an arrow to indicate that the matrix preceding the arrow is being changed by the specified operation; the matrix following the arrow displays the result of that change.
19. 2. Use elementary row operations on the augmented matrix [A|b] to transform A into diagonal form. At this point we have a diagonal coefficient matrix. The final step in Gauss-Jordan elimination is to make each diagonal element equal to one. To do this, we divide each row of the augmented matrix by the diagonal element in that row.
20. 3. Make each diagonal element equal to one by dividing the diagonal element and the right-hand-side element in each row by the diagonal element in that row. Our solution is then simply the right-hand side of the augmented matrix. Notice that the coefficient matrix is now a diagonal matrix with ones on the diagonal. This is a special matrix called the identity matrix.
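The three Gauss-Jordan steps above can be sketched as one function: zeros are introduced both below and above each pivot, one column at a time, and each row is finally scaled by its diagonal element. The example system is illustrative, not from the slides.

```python
def gauss_jordan(A, b):
    """Gauss-Jordan elimination on the augmented matrix [A|b]:
    zeros are introduced below AND above the pivot one column at a
    time, leaving the identity matrix on the left and the solution
    on the right."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]    # augmented matrix [A|b]
    for k in range(n):
        if M[k][k] == 0:
            # Switch rows until a nonzero entry is on the diagonal
            p = next(i for i in range(k + 1, n) if M[i][k] != 0)
            M[k], M[p] = M[p], M[k]
        for i in range(n):
            if i != k:                          # rows above and below pivot
                factor = M[i][k] / M[k][k]
                for j in range(k, n + 1):
                    M[i][j] -= factor * M[k][j]
    # Make each diagonal element equal to one; the right-hand side
    # column is then the solution.
    return [M[i][n] / M[i][i] for i in range(n)]

# Illustrative system (not from the slides); solution is (1, 2, 3)
print(gauss_jordan([[2, 1, -1], [1, 3, 2], [1, -1, 2]], [1, 13, 5]))
```

Note that no back substitution is needed: once A has been reduced to the identity, the answer can be read straight off the augmented column. The price is roughly 50% more arithmetic than Gaussian elimination with back substitution.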
21. The LU decomposition is a matrix decomposition which writes a matrix as the product of a lower triangular matrix and an upper triangular matrix (the product sometimes includes a permutation matrix as well). This decomposition is used in numerical analysis to solve systems of linear equations or to calculate the determinant.
Definition 1: If A is a square matrix and it can be factored as A = LU, where L is a lower triangular matrix and U is an upper triangular matrix, then we say that A has an LU decomposition.
Theorem 1: If A is a square matrix and it can be reduced to a row-echelon form U without interchanging any rows, then A can be factored as A = LU, where L is a lower triangular matrix.
22. We're not going to prove this theorem, but let's examine it in some detail and find a way of determining L. Start by assuming that we have a square matrix A and that we are able to reduce it to row-echelon form U without interchanging any rows. We know that each row operation we used has a corresponding elementary matrix, so suppose that the elementary matrices corresponding to those row operations are $E_1, E_2, \dots, E_k$, so that $E_k \cdots E_2 E_1 A = U$. We also know that elementary matrices are invertible, so multiplying each side by the inverses gives $A = E_1^{-1} E_2^{-1} \cdots E_k^{-1} U$.
23. Now, it can be shown that, provided we avoid interchanging rows, the elementary row operations needed to reduce A to U all have corresponding elementary matrices that are lower triangular. We also know from the previous section that inverses of lower triangular matrices are lower triangular, and products of lower triangular matrices are lower triangular. In other words, $L = E_1^{-1} E_2^{-1} \cdots E_k^{-1}$ is a lower triangular matrix, and so we get the LU decomposition A = LU.
Example 1: Determine an LU decomposition for the following matrix and use the LU decomposition method to find the solution to the following system of equations.
24. So, first let's go through the row operations to get the matrix into row-echelon form, remembering that we aren't allowed to interchange any rows. We'll do this step by step so that we can keep track of the row operations used, since we will eventually need to write down the elementary matrices associated with them.
25. This gives us U.
26. Now we need to get L, which takes a little more work. We'll need the elementary matrices for each of these operations, or more precisely their inverses. Recall that we can get the elementary matrix for a particular row operation by applying that operation to the appropriately sized identity matrix (3×3 in this case), and that the inverse matrix can be found by applying the inverse operation to the identity matrix. Here are the elementary matrices and their inverses for each of the operations above.
27. We can now compute L.
28. We can verify that we have an LU decomposition with a quick computation. Solution of the system: Now let's write down the matrix form of the system. According to the method outlined above, this means we actually need to solve two triangular systems: first a lower triangular system by forward substitution, then an upper triangular system by back substitution.
29. So, let's get started on the first (lower triangular) system. Notice that we don't really need to do anything other than write down the equations associated with this system and solve using forward substitution: the first equation gives the first unknown for free, the second equation then gives the second, and with these two values in hand the third equation gives the third. The second system that we need to solve is then the upper triangular one.
30. Again, notice that to solve this all we need to do is write down the equations and do back substitution: the third equation gives the third unknown for free, plugging it into the second equation gives the second, and so on. Here's the work for this.
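The whole LU procedure above, factor once, then forward- and back-substitute, can be sketched as follows. L stores the elimination multipliers with ones on the diagonal (the Doolittle form, matching the construction from elementary matrices above, with no row interchanges); the example system is illustrative, not the slides' own.

```python
def lu_decompose(A):
    """LU factorization without row interchanges: L is lower triangular
    with ones on the diagonal and stores the elimination multipliers;
    U is the row-echelon form produced by forward elimination."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]     # multiplier used to eliminate
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve Ly = b by forward substitution, then Ux = y by back
    substitution, exactly the two triangular systems described above."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Illustrative system (not from the slides); solution is (1, 2, 3)
L, U = lu_decompose([[2, 1, -1], [1, 3, 2], [1, -1, 2]])
print(lu_solve(L, U, [1, 13, 5]))
```

The payoff of LU over plain elimination is that the expensive factorization is done once; solving for additional right-hand sides b only requires the two cheap substitution passes.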
31. References:
• Computational Science: Tools for a Changing World, a high school curriculum by Richard A. Tapia and Cynthia Lanius
• numericalmethods.eng.usf.edu/.../mws_gen_sle_txt_gaussian.doc
• Linear Algebra: Systems of Equations and Matrices, by Paul Dawkins
• http://ceee.rice.edu/Books/CS/chapter2/linear44.html
• Numerical Methods for Engineers, Fifth Edition, by Steven C. Chapra and Raymond P. Canale