# Es272 ch4b

226

Published on

Published in: Technology, Education
0 Likes
Statistics
Notes
• Full Name
Comment goes here.

Are you sure you want to Yes No
• Be the first to comment

• Be the first to like this

Views
Total Views
226
On Slideshare
0
From Embeds
0
Number of Embeds
0
Actions
Shares
0
9
0
Likes
0
Embeds 0
No embeds

No notes for slide

### Es272 ch4b

1. Part 4b: NUMERICAL LINEAR ALGEBRA: LU Decomposition, Matrix Inverse, System Condition, Special Matrices, Gauss-Seidel
2. LU Decomposition: In Gauss elimination, both the coefficients and the constants are manipulated until an upper-triangular system is obtained:

    $a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1$
    $a'_{22}x_2 + a'_{23}x_3 + \dots + a'_{2n}x_n = b'_2$
    $a''_{33}x_3 + \dots + a''_{3n}x_n = b''_3$
    $\dots$
    $a_{nn}^{(n-1)}x_n = b_n^{(n-1)}$

    In some applications the coefficient matrix $[A]$ stays constant while the right-hand-side vector $\{b\}$ changes. LU decomposition does not require repeated eliminations: once the $[L][U]$ decomposition of $[A]$ has been computed, it can be reused for any number of different $\{b\}$ vectors.
3. Decomposition methodology: We want the solution to the linear system $[A]\{x\} = \{b\}$, i.e., $[A]\{x\} - \{b\} = 0$. The system can also be stated in an upper-triangular form, $[U]\{x\} = \{d\}$, i.e., $[U]\{x\} - \{d\} = 0$. Now suppose there exists a lower-triangular matrix $[L]$ such that $[L]\big([U]\{x\} - \{d\}\big) = [A]\{x\} - \{b\}$. Then it follows that $[L][U] = [A]$ and $[L]\{d\} = \{b\}$. The solution for $\{x\}$ can be obtained by a two-step strategy (explained next).
4. Decomposition strategy: First decompose $[A] = [L][U]$. Then apply forward substitution to $[L]\{d\} = \{b\}$ to calculate $\{d\}$, and apply backward substitution to $[U]\{x\} = \{d\}$ to calculate $\{x\}$. The process involves one decomposition, one forward substitution, and one backward substitution. Once the matrices $[L]$ and $[U]$ have been computed, the manipulated constant vector $\{d\}$, and hence $\{x\}$, can be recalculated cheaply each time $\{b\}$ changes.
5. LU Decomposition and Gauss Elimination: Gauss elimination contains an LU decomposition within itself. Forward elimination produces the upper-triangular matrix $[U]$, and while $[U]$ is being formed, an $[L]$ matrix is formed from the elimination factors; for a 3x3 system,

    $[L] = \begin{bmatrix} 1 & 0 & 0 \\ f_{21} & 1 & 0 \\ f_{31} & f_{32} & 1 \end{bmatrix}$ where $f_{21} = a_{21}/a_{11}$, $f_{31} = a_{31}/a_{11}$, $f_{32} = a'_{32}/a'_{22}$

    so that $[A] = [L][U]$. This decomposition is unique when the diagonal elements of $[L]$ are ones.
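The factor bookkeeping above can be sketched in a few lines of Python. This is a minimal sketch, not from the slides: `lu_decompose` is a hypothetical helper name, and it performs Doolittle elimination without pivoting (so it assumes nonzero pivots), storing each elimination factor $f_{ik}$ in `L` as it zeroes the corresponding entry of `U`.

```python
def lu_decompose(A):
    """Doolittle LU decomposition (no pivoting): returns L with unit
    diagonal and U upper triangular such that A = L U.
    A is a list of row lists."""
    n = len(A)
    U = [row[:] for row in A]  # working copy; becomes upper triangular
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 1):                 # eliminate column k
        for i in range(k + 1, n):
            f = U[i][k] / U[k][k]          # factor f_ik = a_ik / a_kk
            L[i][k] = f                    # store the factor in L
            for j in range(k, n):
                U[i][j] -= f * U[k][j]
    return L, U
```

Applying it to the coefficient matrix of the next slide reproduces the $[L]$ and $[U]$ shown there.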
6. EX 10.1: Apply LU decomposition based on Gauss elimination to the system of Example 9.5 (carrying 6 significant digits). The coefficient matrix is

    $[A] = \begin{bmatrix} 3 & -0.1 & -0.2 \\ 0.1 & 7 & -0.3 \\ 0.3 & -0.2 & 10 \end{bmatrix}$

    Forward elimination results in the upper-triangular form

    $[U] = \begin{bmatrix} 3 & -0.1 & -0.2 \\ 0 & 7.00333 & -0.293333 \\ 0 & 0 & 10.0120 \end{bmatrix}$

    The lower-triangular matrix holds the elimination factors $f_{21} = a_{21}/a_{11} = 0.0333333$, $f_{31} = a_{31}/a_{11} = 0.100000$, and $f_{32} = a'_{32}/a'_{22} = -0.0271300$:

    $[L] = \begin{bmatrix} 1 & 0 & 0 \\ 0.0333333 & 1 & 0 \\ 0.100000 & -0.0271300 & 1 \end{bmatrix}$
7. Check the result: multiplying the factors back,

    $[L][U] = \begin{bmatrix} 1 & 0 & 0 \\ 0.0333333 & 1 & 0 \\ 0.100000 & -0.0271300 & 1 \end{bmatrix} \begin{bmatrix} 3 & -0.1 & -0.2 \\ 0 & 7.00333 & -0.293333 \\ 0 & 0 & 10.0120 \end{bmatrix} = \begin{bmatrix} 3 & -0.1 & -0.2 \\ 0.0999999 & 7 & -0.3 \\ 0.3 & -0.2 & 9.99996 \end{bmatrix}$

    Compared to the original $[A]$, some round-off error has been introduced. To find the solution: calculate $\{d\}$ by applying one forward substitution, $[L]\{d\} = \{b\}$, then calculate $\{x\}$ by applying one back substitution, $[U]\{x\} = \{d\}$. $[L]$ makes it cheap to obtain the modified right-hand side each time $\{b\}$ is changed during the calculations.
8. EX 10.2: Solve the system of the previous example using LU decomposition, with $\{b\} = \{7.85,\; -19.3,\; 71.4\}^T$. Apply the forward substitution $[L]\{d\} = \{b\}$:

    $\begin{bmatrix} 1 & 0 & 0 \\ 0.0333333 & 1 & 0 \\ 0.100000 & -0.0271300 & 1 \end{bmatrix} \begin{Bmatrix} d_1 \\ d_2 \\ d_3 \end{Bmatrix} = \begin{Bmatrix} 7.85 \\ -19.3 \\ 71.4 \end{Bmatrix} \;\Rightarrow\; d_1 = 7.85,\; d_2 = -19.5617,\; d_3 = 70.0843$

    Then apply the backward substitution $[U]\{x\} = \{d\}$:

    $\begin{bmatrix} 3 & -0.1 & -0.2 \\ 0 & 7.00333 & -0.293333 \\ 0 & 0 & 10.0120 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} 7.85 \\ -19.5617 \\ 70.0843 \end{Bmatrix} \;\Rightarrow\; x_1 = 3,\; x_2 = -2.5,\; x_3 = 7.00003$
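The two substitution passes can be sketched as a small Python routine. This is a minimal sketch: `lu_solve` is a hypothetical name, and the `L` factor is assumed to have a unit diagonal, as in the Doolittle decomposition used on these slides.

```python
def lu_solve(L, U, b):
    """Solve L U x = b in two passes: forward substitution for d,
    then back substitution for x (L assumed unit lower triangular)."""
    n = len(b)
    d = [0.0] * n
    for i in range(n):                       # forward: L d = b
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):             # backward: U x = d
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / U[i][i]
    return x
```

Fed the $[L]$, $[U]$, and $\{b\}$ of this example, it reproduces the values above.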
9. Total FLOPs with LU decomposition: $\frac{n^3}{3} + O(n^2)$, the same as Gauss elimination. Doolittle decomposition (factorization) places the 1's on the diagonal of $[L]$ and proceeds by row operations; Crout decomposition places the 1's on the diagonal of $[U]$ and proceeds by column operations. The two have comparable performance. Crout decomposition can be implemented by a concise series of formulas (see the book). Storage can also be minimized: there is no need to store the known 1's on the unit diagonal, nor the 0's in $[L]$ and $[U]$, so the elements of $[U]$ can be stored in the zero part of $[L]$.
10. Matrix Inverse: If $[A]$ is a square matrix, there exists an $[A]^{-1}$ such that $[A][A]^{-1} = [A]^{-1}[A] = [I]$. LU decomposition offers an efficient way to find $[A]^{-1}$: decompose $[A]$ into $[L][U]$ once; then, for each constant vector, use the $i$-th column of the identity matrix, $\{I\}_{:,i}$. Forward substitution on $[L]\{d\} = \{I\}_{:,i}$ gives $\{d\}$, and backward substitution on $[U]\{A^{-1}\}_{:,i} = \{d\}$ gives the $i$-th column of $[A]^{-1}$.
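The column-by-column procedure can be sketched as follows. `lu_inverse` is a hypothetical helper name, not from the slides: it factors $[A]$ once (Doolittle, no pivoting, so nonzero pivots are assumed) and then reuses $[L]$ and $[U]$ for each unit vector.

```python
def lu_inverse(A):
    """Invert A column by column: for each unit vector e_j solve
    L d = e_j (forward), then U x = d (backward); x is the j-th
    column of the inverse. The LU factors are computed only once."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 1):                   # one-time decomposition
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    inv = [[0.0] * n for _ in range(n)]
    for col in range(n):
        e = [float(i == col) for i in range(n)]
        d = [0.0] * n
        for i in range(n):                   # forward substitution
            d[i] = e[i] - sum(L[i][j] * d[j] for j in range(i))
        for i in reversed(range(n)):         # back substitution
            s = sum(U[i][j] * inv[j][col] for j in range(i + 1, n))
            inv[i][col] = (d[i] - s) / U[i][i]
    return inv
```

On the EX 10.1 matrix this reproduces the inverse worked out on the next two slides.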
11. EX 10.3: Use LU decomposition to determine the inverse of the system of EX 10.1:

    $[A] = \begin{bmatrix} 3 & -0.1 & -0.2 \\ 0.1 & 7 & -0.3 \\ 0.3 & -0.2 & 10 \end{bmatrix}$, with $[U] = \begin{bmatrix} 3 & -0.1 & -0.2 \\ 0 & 7.00333 & -0.293333 \\ 0 & 0 & 10.0120 \end{bmatrix}$ and $[L] = \begin{bmatrix} 1 & 0 & 0 \\ 0.0333333 & 1 & 0 \\ 0.100000 & -0.0271300 & 1 \end{bmatrix}$

    To calculate the first column of $[A]^{-1}$, apply forward substitution with the first column of the identity matrix:

    $\begin{bmatrix} 1 & 0 & 0 \\ 0.0333333 & 1 & 0 \\ 0.100000 & -0.0271300 & 1 \end{bmatrix} \begin{Bmatrix} d_1 \\ d_2 \\ d_3 \end{Bmatrix} = \begin{Bmatrix} 1 \\ 0 \\ 0 \end{Bmatrix} \;\Rightarrow\; d_1 = 1,\; d_2 = -0.0333333,\; d_3 = -0.1009$
12. Back substitution then gives the first column of $[A]^{-1}$:

    $\begin{bmatrix} 3 & -0.1 & -0.2 \\ 0 & 7.00333 & -0.293333 \\ 0 & 0 & 10.0120 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} 1 \\ -0.0333333 \\ -0.1009 \end{Bmatrix} \;\Rightarrow\; x_1 = 0.33249,\; x_2 = -0.00518,\; x_3 = -0.01008$

    For the second column use $\{b\} = \{0, 1, 0\}^T$, giving $x_1 = 0.004944$, $x_2 = 0.142903$, $x_3 = 0.00271$; for the third column use $\{b\} = \{0, 0, 1\}^T$, giving $x_1 = 0.006798$, $x_2 = 0.004183$, $x_3 = 0.09988$. We finally get

    $[A]^{-1} = \begin{bmatrix} 0.33249 & 0.004944 & 0.006798 \\ -0.00518 & 0.142903 & 0.004183 \\ -0.01008 & 0.00271 & 0.09988 \end{bmatrix}$
13. Importance of the Inverse in Engineering Applications: Many engineering problems can be represented by a linear equation $[A]\{x\} = \{b\}$, where $[A]$ is the system design matrix, $\{x\}$ the response (e.g., deformation), and $\{b\}$ the stimulus (e.g., force). The formal solution to this equation is $\{x\} = [A]^{-1}\{b\}$. For a 3x3 system we can write it explicitly:

    $x_1 = a_{11}^{-1}b_1 + a_{12}^{-1}b_2 + a_{13}^{-1}b_3$
    $x_2 = a_{21}^{-1}b_1 + a_{22}^{-1}b_2 + a_{23}^{-1}b_3$
    $x_3 = a_{31}^{-1}b_1 + a_{32}^{-1}b_2 + a_{33}^{-1}b_3$

    There is a linear relationship between stimulus and response; the proportionality constants are the coefficients of $[A]^{-1}$.
14. System Condition: The condition number indicates the ill-conditioning of a system; we will determine it using matrix norms. A norm is a measure of the size of a multi-component entity (e.g., a vector):

    $\|x\|_1 = \sum_{i=1}^n |x_i|$ (1-norm)
    $\|x\|_e = \left(\sum_{i=1}^n x_i^2\right)^{1/2}$ (2-norm, the Euclidean norm)
    $\|x\|_p = \left(\sum_{i=1}^n |x_i|^p\right)^{1/p}$ (p-norm)
15. We can extend the Euclidean norm to matrices:

    $\|A\|_e = \left(\sum_{i=1}^n \sum_{j=1}^n a_{ij}^2\right)^{1/2}$ (Frobenius norm)

    There are other norms too, e.g.,

    $\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^n |a_{ij}|$ (row-sum norm)
    $\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^n |a_{ij}|$ (column-sum norm)

    Each of these norms returns a single positive value characterizing the matrix.
16. Matrix Condition Number: The matrix condition number can be defined as $\mathrm{Cond}[A] = \|A\| \cdot \|A^{-1}\|$ (note $\mathrm{Cond}[A] \ge 1$). If $\mathrm{Cond}[A] \gg 1$, the matrix is ill-conditioned. It can be shown that

    $\dfrac{\|\Delta x\|}{\|x\|} \le \mathrm{Cond}[A]\, \dfrac{\|\Delta A\|}{\|A\|}$

    i.e., the relative error of the computed solution cannot be larger than the relative error of the coefficients of $[A]$ multiplied by the condition number. For example, if $[A]$ contains elements with $t$ significant figures (precision $10^{-t}$) and $\mathrm{Cond}[A] = 10^c$, then $\{x\}$ will contain elements with $t - c$ significant figures (precision $10^{c-t}$).
17. EX 10.4: Estimate the condition number of the 3x3 Hilbert matrix using the row-sum norm. The Hilbert matrix is inherently ill-conditioned:

    $[A] = \begin{bmatrix} 1 & 1/2 & 1/3 \\ 1/2 & 1/3 & 1/4 \\ 1/3 & 1/4 & 1/5 \end{bmatrix}$

    First normalize the matrix by dividing each row by its largest coefficient:

    $[A] = \begin{bmatrix} 1 & 1/2 & 1/3 \\ 1 & 2/3 & 1/2 \\ 1 & 3/4 & 3/5 \end{bmatrix}$

    The row sums are $1 + 1/2 + 1/3 = 1.833$, $1 + 2/3 + 1/2 = 2.1667$, and $1 + 3/4 + 3/5 = 2.35$, so the row-sum norm is $\|A\|_\infty = 2.35$.
18. The inverse of the scaled matrix (computing it is the part that takes the longest) is

    $[A]^{-1} = \begin{bmatrix} 9 & -18 & 10 \\ -36 & 96 & -60 \\ 30 & -90 & 60 \end{bmatrix}$

    Its row-sum norm is $\|A^{-1}\|_\infty = 36 + 96 + 60 = 192$. The condition number is therefore

    $\mathrm{Cond}[A] = (2.35)(192) = 451.2$

    so the matrix is ill-conditioned. For example, for a single-precision computation (about 7.2 digits), $c = \log 451.2 = 2.65$, and $7.2 - 2.65 = 4.55 \approx 4$ significant figures in the solution (precision of about $10^{-4}$)!
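The row-sum-norm arithmetic above can be checked with a few lines of Python. This is a sketch: `row_sum_norm` is a hypothetical helper name, and the scaled matrix and its inverse are taken from the slide rather than recomputed.

```python
import math

def row_sum_norm(A):
    """Row-sum (infinity) norm: max over rows of the sum of |a_ij|."""
    return max(sum(abs(a) for a in row) for row in A)

# Scaled 3x3 Hilbert matrix and its inverse, as given on the slide.
A     = [[1, 1/2, 1/3], [1, 2/3, 1/2], [1, 3/4, 3/5]]
A_inv = [[9, -18, 10], [-36, 96, -60], [30, -90, 60]]

cond = row_sum_norm(A) * row_sum_norm(A_inv)   # (2.35)(192) = 451.2
digits_lost = math.log10(cond)                 # c, the digits lost
```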
19. Iterative refinement: This technique is especially useful for reducing round-off errors. Consider the system

    $a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1$
    $a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2$
    $a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3$

    Assume an approximate solution $\{x^\circ\}$ satisfying

    $a_{11}x_1^\circ + a_{12}x_2^\circ + a_{13}x_3^\circ = b_1^\circ$
    $a_{21}x_1^\circ + a_{22}x_2^\circ + a_{23}x_3^\circ = b_2^\circ$
    $a_{31}x_1^\circ + a_{32}x_2^\circ + a_{33}x_3^\circ = b_3^\circ$

    We can write a relationship between the exact and approximate solutions: $x_1 = x_1^\circ + \Delta x_1$, $x_2 = x_2^\circ + \Delta x_2$, $x_3 = x_3^\circ + \Delta x_3$.
20. Insert these into the original equations:

    $a_{11}(x_1^\circ + \Delta x_1) + a_{12}(x_2^\circ + \Delta x_2) + a_{13}(x_3^\circ + \Delta x_3) = b_1$
    $a_{21}(x_1^\circ + \Delta x_1) + a_{22}(x_2^\circ + \Delta x_2) + a_{23}(x_3^\circ + \Delta x_3) = b_2$
    $a_{31}(x_1^\circ + \Delta x_1) + a_{32}(x_2^\circ + \Delta x_2) + a_{33}(x_3^\circ + \Delta x_3) = b_3$

    Now subtract the approximate-solution equations from these to get

    $a_{11}\Delta x_1 + a_{12}\Delta x_2 + a_{13}\Delta x_3 = b_1 - b_1^\circ = e_1$
    $a_{21}\Delta x_1 + a_{22}\Delta x_2 + a_{23}\Delta x_3 = b_2 - b_2^\circ = e_2$
    $a_{31}\Delta x_1 + a_{32}\Delta x_2 + a_{33}\Delta x_3 = b_3 - b_3^\circ = e_3$

    This is a new set of simultaneous linear equations that can be solved for the correction factors. The solution is improved by applying the corrections to the previous solution (the iterative refinement procedure). It is especially well suited to LU decomposition, since only the constant vector $\{b\}$ keeps changing.
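One refinement pass can be sketched in Python as follows. This is a minimal sketch under stated assumptions: `gauss_solve` and `refine` are hypothetical helper names, and a naive Gauss elimination (no pivoting) stands in for the reused LU solve that the slide recommends for the correction system.

```python
def gauss_solve(A, b):
    """Small dense solver (naive Gauss elimination, no pivoting),
    used here to solve the correction system A dx = e."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

def refine(A, b, x_approx):
    """One pass of iterative refinement: residual, correction, update."""
    n = len(b)
    e = [b[i] - sum(A[i][j] * x_approx[j] for j in range(n))
         for i in range(n)]                 # e_i = b_i - b_i°
    dx = gauss_solve(A, e)                  # solve A dx = e
    return [x_approx[i] + dx[i] for i in range(n)]
```

Because the correction system here is solved in full precision, a single pass already lands on the solution; in practice the gain comes from repeating cheap LU solves.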
21. Special Matrices: In engineering applications, special matrices are very common.
    > Banded matrices: $a_{ij} = 0$ if $|i - j| > (BW - 1)/2$, where $BW$ is the bandwidth; $BW = 3$ gives a tridiagonal system.
    > Symmetric matrices: $a_{ij} = a_{ji}$, i.e., $[A] = [A]^T$.
    > Sparse matrices: most elements are zero, and only the nonzero areas carry information.
22. Applying elimination methods to sparse matrices is not efficient (they needlessly manipulate many zeros), so we employ special methods for these systems. Cholesky Decomposition: This method is applicable to symmetric matrices. A symmetric matrix can be decomposed as $[A] = [L][L]^T$, with

    $l_{ki} = \dfrac{a_{ki} - \sum_{j=1}^{i-1} l_{ij} l_{kj}}{l_{ii}}$ for $i = 1, 2, \dots, k-1$, and $l_{kk} = \sqrt{a_{kk} - \sum_{j=1}^{k-1} l_{kj}^2}$

    Symmetric matrices are very common in engineering applications, so this method has wide application.
23. EX 11.2: Apply Cholesky decomposition to

    $[A] = \begin{bmatrix} 6 & 15 & 55 \\ 15 & 55 & 225 \\ 55 & 225 & 979 \end{bmatrix}$

    Apply the recursion relations:
    k = 1: $l_{11} = \sqrt{a_{11}} = \sqrt{6} = 2.4495$
    k = 2, i = 1: $l_{21} = a_{21}/l_{11} = 15/2.4495 = 6.1237$
    k = 2: $l_{22} = \sqrt{a_{22} - l_{21}^2} = \sqrt{55 - (6.1237)^2} = 4.1833$
    k = 3, i = 1: $l_{31} = a_{31}/l_{11} = 55/2.4495 = 22.454$
    k = 3, i = 2: $l_{32} = (a_{32} - l_{21}l_{31})/l_{22} = (225 - 6.1237 \times 22.454)/4.1833 = 20.916$
    k = 3: $l_{33} = \sqrt{a_{33} - l_{31}^2 - l_{32}^2} = \sqrt{979 - (22.454)^2 - (20.916)^2} = 6.1106$

    $[L] = \begin{bmatrix} 2.4495 & 0 & 0 \\ 6.1237 & 4.1833 & 0 \\ 22.454 & 20.916 & 6.1106 \end{bmatrix}$
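The recursion can be written directly in Python. A minimal sketch: `cholesky` is a hypothetical name, and the input is assumed symmetric positive definite (no check is made, so `math.sqrt` will fail otherwise).

```python
import math

def cholesky(A):
    """Cholesky decomposition of a symmetric positive-definite A:
    returns lower-triangular L with A = L L^T, using
    l_ki = (a_ki - sum_j l_ij*l_kj) / l_ii   for i < k, and
    l_kk = sqrt(a_kk - sum_j l_kj^2)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for i in range(k):                   # off-diagonal entries
            s = sum(L[i][j] * L[k][j] for j in range(i))
            L[k][i] = (A[k][i] - s) / L[i][i]
        L[k][k] = math.sqrt(A[k][k] - sum(L[k][j] ** 2 for j in range(k)))
    return L
```

Run on the matrix of this example, it reproduces the $[L]$ above (to the displayed rounding).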
24. Gauss-Seidel: Iterative methods are strong alternatives to elimination methods. In iterative methods the solution is continually improved, so accumulated round-off error is not a concern. As we did in root finding, we start with an initial guess and iterate for refined estimates of the solution. Gauss-Seidel is one of the most commonly used iterative methods. For the solution of $[A]\{x\} = \{b\}$, we write each unknown on the diagonal in terms of the other unknowns:
25. In the case of a 3x3 system:

    $x_1 = \dfrac{b_1 - a_{12}x_2 - a_{13}x_3}{a_{11}}$ (start with initial guesses for $x_2$ and $x_3$; calculate the new $x_1$)
    $x_2 = \dfrac{b_2 - a_{21}x_1 - a_{23}x_3}{a_{22}}$ (use the new $x_1$ and the old $x_3$; calculate the new $x_2$)
    $x_3 = \dfrac{b_3 - a_{31}x_1 - a_{32}x_2}{a_{33}}$ (use the new $x_1$ and $x_2$; calculate the new $x_3$)

    ...and iterate. In Gauss-Seidel, new estimates are used immediately in the subsequent calculations. If, alternatively, the old values $(x_1, x_2, x_3)$ are collectively used to calculate the new values, the scheme is Jacobi iteration (not commonly used).
26. EX 11.3: Use the Gauss-Seidel method to obtain the solution of

    $3x_1 - 0.1x_2 - 0.2x_3 = 7.85$
    $0.1x_1 + 7x_2 - 0.3x_3 = -19.3$
    $0.3x_1 - 0.2x_2 + 10x_3 = 71.4$

    (true results: $x_1 = 3$, $x_2 = -2.5$, $x_3 = 7$). The Gauss-Seidel iteration is

    $x_1 = \dfrac{7.85 + 0.1x_2 + 0.2x_3}{3}, \quad x_2 = \dfrac{-19.3 - 0.1x_1 + 0.3x_3}{7}, \quad x_3 = \dfrac{71.4 - 0.3x_1 + 0.2x_2}{10}$

    Assume all initial guesses are 0. Then

    $x_1 = \dfrac{7.85 + 0 + 0}{3} = 2.616667$
    $x_2 = \dfrac{-19.3 - 0.1(2.616667) + 0}{7} = -2.794524$
    $x_3 = \dfrac{71.4 - 0.3(2.616667) + 0.2(-2.794524)}{10} = 7.005610$
27. For the second iteration, we repeat the process:

    $x_1 = \dfrac{7.85 + 0.1(-2.794524) + 0.2(7.005610)}{3} = 2.990557$ ($\varepsilon_t = 0.31\%$)
    $x_2 = \dfrac{-19.3 - 0.1(2.990557) + 0.3(7.005610)}{7} = -2.499625$ ($\varepsilon_t = 0.015\%$)
    $x_3 = \dfrac{71.4 - 0.3(2.990557) + 0.2(-2.499625)}{10} = 7.000291$ ($\varepsilon_t = 0.0042\%$)

    The solution is rapidly converging to the true solution.
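The sweeps above can be sketched as a short loop. A minimal sketch: `gauss_seidel` is a hypothetical name, and a simple maximum-absolute-change stopping test (not on the slides) is added so the loop terminates.

```python
def gauss_seidel(A, b, x0=None, tol=1e-6, max_iter=100):
    """Gauss-Seidel iteration: each freshly computed x_i is used
    immediately in the remaining updates of the same sweep."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]    # solve row i for x_i
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x
```

Swapping `x[j]` for `x_old[j]` in the inner sum would turn this into Jacobi iteration.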
28. Convergence in Gauss-Seidel: Gauss-Seidel is similar to the fixed-point iteration method in root finding, and like fixed-point iteration it is prone to divergence and to slow convergence. Convergence of the method can be checked by the following criterion:

    $|a_{ii}| > \sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}|$

    that is, the absolute value of the diagonal coefficient in each row must be larger than the sum of the absolute values of all the other coefficients in the same row (a diagonally dominant system). Fortunately, many engineering applications fulfill this requirement.
29. Improvement of convergence by relaxation: After each value of $x_i$ is computed by the Gauss-Seidel equations, it is modified by a weighted average of the old and new values:

    $x_i^{new} = \lambda x_i^{new} + (1 - \lambda)\, x_i^{old}, \quad 0 < \lambda < 2$

    If $0 < \lambda < 1$: underrelaxation (to make a nonconvergent system converge). If $1 < \lambda < 2$: overrelaxation (to accelerate convergence). The choice of $\lambda$ is empirical and depends on the problem.
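The relaxation step drops straight into the Gauss-Seidel sweep. This sketch (hypothetical `gauss_seidel_relaxed` name, with $\lambda$ passed as `lam` and a stopping test added) shows where the weighted average goes:

```python
def gauss_seidel_relaxed(A, b, lam=1.0, x0=None, tol=1e-8, max_iter=200):
    """Gauss-Seidel with relaxation: blend each freshly computed x_i
    with its previous value, x_i <- lam*x_i_new + (1-lam)*x_i_old.
    lam = 1 recovers plain Gauss-Seidel."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (b[i] - s) / A[i][i]
            x_new = lam * x_new + (1 - lam) * x[i]   # relaxation step
            diff = max(diff, abs(x_new - x[i]))
            x[i] = x_new
        if diff < tol:
            break
    return x
```

On a strongly diagonally dominant system like EX 11.3 plain Gauss-Seidel already converges fast, so relaxation pays off mainly on harder problems.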