NUMERICAL METHODS
(Meng2063)
CHAPTER THREE
LINEAR ALGEBRAIC EQUATIONS
By: Tesfahun Meshesha
3.1. INTRODUCTION
• Consider a system of n linear algebraic equations in n unknowns,
𝑎11𝑥1 + 𝑎12𝑥2 + ⋯ + 𝑎1𝑛𝑥𝑛 = 𝑏1
𝑎21𝑥1 + 𝑎22𝑥2 + ⋯ + 𝑎2𝑛𝑥𝑛 = 𝑏2
⋮
𝑎𝑛1𝑥1 + 𝑎𝑛2𝑥2 + ⋯ + 𝑎𝑛𝑛𝑥𝑛 = 𝑏𝑛
• where 𝒂𝒊𝒋, 𝑖 = 1, 2, . . . , 𝑛, 𝑗 = 1, 2, … , 𝑛, are the known coefficients, 𝑏𝑖, 𝑖 = 1, 2, … , 𝑛, are the known right-hand side values, and 𝒙𝒊, 𝑖 = 1, 2, … , 𝑛, are the unknowns to be determined.
Cont . . .
• In matrix notation we write the system as;
𝑨𝒙 = 𝒃 (𝟑. 𝟏)
Where:
𝐀 = [ 𝑎11  𝑎12  …  𝑎1𝑛
      𝑎21  𝑎22  …  𝑎2𝑛
       ⋮     ⋮          ⋮
      𝑎𝑛1  𝑎𝑛2  …  𝑎𝑛𝑛 ] ,   𝒙 = [ 𝑥1, 𝑥2, … , 𝑥𝑛 ]ᵀ   and   𝒃 = [ 𝑏1, 𝑏2, … , 𝑏𝑛 ]ᵀ
• The matrix [A | b], obtained by appending the column b to the matrix A, is called the augmented matrix. That is

[𝐀 | 𝐛] = [ 𝑎11  𝑎12  …  𝑎1𝑛 | 𝑏1
            𝑎21  𝑎22  …  𝑎2𝑛 | 𝑏2
             ⋮     ⋮          ⋮  |  ⋮
            𝑎𝑛1  𝑎𝑛2  …  𝑎𝑛𝑛 | 𝑏𝑛 ]
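For illustration (not part of the original notes), the coefficient matrix, right-hand side, and augmented matrix [A | b] can be assembled with NumPy; the 3 × 3 system of Example 3.2.1 below is used here as assumed data.

import numpy as np

# Coefficient matrix A and right-hand side b (the system of Example 3.2.1 below)
A = np.array([[ 1.0, 10.0, -1.0],
              [ 2.0,  3.0, 20.0],
              [10.0, -1.0,  2.0]])
b = np.array([3.0, 7.0, 4.0])

# Augmented matrix [A | b]: append b to A as an extra column
Ab = np.hstack([A, b.reshape(-1, 1)])
print(Ab)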
Cont . . .
• The methods of solution of the linear algebraic equations may be classified as
i. Direct methods produce the exact solution after a finite number of steps
(disregarding the round-off errors). We can determine the total number of
operations (additions, subtractions, divisions and multiplications). This number
is called the operational count of the method.
ii. Iterative methods are based on the idea of successive approximations. We start
with an initial approximation to the solution vector 𝒙 = 𝒙𝟎, and obtain a
sequence of approximate vectors 𝒙𝟎, 𝒙𝟏, . . . , 𝒙𝒌, which converges to the solution.
• Gauss elimination, Gauss–Jordan elimination, and LU decomposition are direct methods; Gauss–Seidel iteration (Section 3.3) is an iterative method.
3.2. Gauss Elimination
• The method is based on the idea of reducing the given system of
equations 𝑨𝒙 = 𝒃, to an upper triangular system of equations 𝑼𝒙 = 𝒛, using
elementary row operations.
• This reduced system 𝑼𝒙 = 𝒛, is then solved by the back substitution method to
obtain the solution vector x.
• Consider the augmented matrix of a 3 × 3 system:

[ 𝑎11  𝑎12  𝑎13 | 𝑏1
  𝑎21  𝑎22  𝑎23 | 𝑏2
  𝑎31  𝑎32  𝑎33 | 𝑏3 ]        (3.2)
Cont . . .
First stage of elimination
• For 𝒂𝟏𝟏 ≠ 𝟎, the element 𝒂𝟏𝟏 in the (1, 1) position is called the first pivot.
• Multiply the first row in (3.2) by 𝒂𝟐𝟏/𝒂𝟏𝟏 and 𝒂𝟑𝟏/𝒂𝟏𝟏 respectively and subtract
from the second and third rows.
𝑹𝟐 – (𝒂𝟐𝟏/𝒂𝟏𝟏)𝑹𝟏 and 𝑹𝟑 – (𝒂𝟑𝟏/𝒂𝟏𝟏)𝑹𝟏- elementary row operation
• New augmented matrix ≫

[ 𝑎11   𝑎12       𝑎13      | 𝑏1
  0     𝑎22^(1)   𝑎23^(1)  | 𝑏2^(1)
  0     𝑎32^(1)   𝑎33^(1)  | 𝑏3^(1) ]        (3.3)
• where:
𝑎22^(1) = 𝑎22 − (𝑎21/𝑎11)𝑎12 ,  𝑎23^(1) = 𝑎23 − (𝑎21/𝑎11)𝑎13 ,  𝑏2^(1) = 𝑏2 − (𝑎21/𝑎11)𝑏1
𝑎32^(1) = 𝑎32 − (𝑎31/𝑎11)𝑎12 ,  𝑎33^(1) = 𝑎33 − (𝑎31/𝑎11)𝑎13 ,  𝑏3^(1) = 𝑏3 − (𝑎31/𝑎11)𝑏1
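A minimal sketch of this first stage written as NumPy row operations; it assumes the 3 × 4 augmented array Ab built earlier and overwrites it in place.

# First stage: eliminate the entries below the first pivot a11
m21 = Ab[1, 0] / Ab[0, 0]        # multiplier a21/a11
m31 = Ab[2, 0] / Ab[0, 0]        # multiplier a31/a11
Ab[1, :] -= m21 * Ab[0, :]       # R2 <- R2 - (a21/a11) R1
Ab[2, :] -= m31 * Ab[0, :]       # R3 <- R3 - (a31/a11) R1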
Cont . . .
Second stage of elimination
• Assume 𝑎22^(1) ≠ 0. The element 𝑎22^(1) in the (2, 2) position is called the second pivot.
• Multiply the second row in (3.3) by 𝑎32^(1)/𝑎22^(1) and subtract from the third row.
  That is, elementary row operation ≫ 𝑅3 − (𝑎32^(1)/𝑎22^(1))𝑅2.
• We obtain the new augmented matrix as

[ 𝑎11   𝑎12       𝑎13      | 𝑏1
  0     𝑎22^(1)   𝑎23^(1)  | 𝑏2^(1)
  0     0         𝑎33^(2)  | 𝑏3^(2) ]        (3.4)
• where:
𝑎33^(2) = 𝑎33^(1) − (𝑎32^(1)/𝑎22^(1))𝑎23^(1) ,  𝑏3^(2) = 𝑏3^(1) − (𝑎32^(1)/𝑎22^(1))𝑏2^(1)
Cont . . .
• In equation (3.4), the element 𝑎33^(2) ≠ 0 is called the third pivot.
• The system in (3.4), is in the required upper triangular form [𝑼|𝒛].
• The solution vector 𝒙 is now obtained by back substitution.
Back substitution:
𝑥3 = 𝑏3^(2) / 𝑎33^(2)
𝑥2 = (𝑏2^(1) − 𝑎23^(1)𝑥3) / 𝑎22^(1)
𝑥1 = (𝑏1 − 𝑎12𝑥2 − 𝑎13𝑥3) / 𝑎11
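The two phases can be collected into one routine. The sketch below is illustrative (the helper name gauss_eliminate is an assumption, not from the slides); it performs forward elimination without pivoting followed by back substitution for a general n × n system.

import numpy as np

def gauss_eliminate(A, b):
    """Solve Ax = b by Gauss elimination (no pivoting) and back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    # Phase 1: forward elimination to the upper triangular form U x = z
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # multiplier a_ik / a_kk (pivot assumed nonzero)
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Phase 2: back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x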
Cont . . .
Remarks:
• The Gauss elimination method fails when:
i. any one of the pivots is zero: if a pivot becomes zero as the elimination progresses, division by it is undefined and the method breaks down;
ii. a pivot is a very small number: division by it introduces large round-off errors and the solution may contain large errors.
Partial pivoting – avoiding Gauss elimination failure:
• In the first stage of elimination, the first column of the augmented matrix is searched for the largest element in magnitude, and that element is brought into the first pivot position by interchanging the first row with the row containing it.
• In the second stage of elimination, the second column is searched for the largest element in magnitude among the remaining n − 1 elements (excluding the first-row element), and that element is brought into the second pivot position by interchanging the second row with the lower row containing it.
• This procedure is continued until the upper triangular system is obtained.
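Partial pivoting fits into the same elimination loop: before eliminating column k, search it from row k downwards for the largest entry in magnitude and swap that row into the pivot position. A sketch of the modified routine follows (the name gauss_eliminate_pivot is an assumption for illustration).

import numpy as np

def gauss_eliminate_pivot(A, b):
    """Gauss elimination with partial pivoting, followed by back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: the row with the largest |a_ik| for i >= k becomes the pivot row
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if p != k:
            A[[k, p], :] = A[[p, k], :]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x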
Cont . . .
Example 3.2.1:
Solve the following system of equations using the Gauss elimination without partial pivoting;
𝑥1 + 10𝑥2 − 𝑥3 = 3
2𝑥1 + 3𝑥2 + 20 𝑥3 = 7
10𝑥1 − 𝑥2 + 2 𝑥3 = 4
Solution:
The augmented matrix is given by;
[  1   10   −1 |  3
   2    3   20 |  7
  10   −1    2 |  4 ]
Phase 1: Forward elimination
First stage of elimination:
𝑅2^(1) = 𝑅2 − (𝑎21/𝑎11)𝑅1 ≫≫
𝑎21^(1) = 𝑎21 − (𝑎21/𝑎11)𝑎11 = 2 − (2/1)(1) = 0
𝑎22^(1) = 𝑎22 − (𝑎21/𝑎11)𝑎12 = 3 − (2/1)(10) = −17
𝑎23^(1) = 𝑎23 − (𝑎21/𝑎11)𝑎13 = 20 − (2/1)(−1) = 22
𝑏2^(1) = 𝑏2 − (𝑎21/𝑎11)𝑏1 = 7 − (2/1)(3) = 1

𝑅3^(1) = 𝑅3 − (𝑎31/𝑎11)𝑅1 ≫≫
𝑎31^(1) = 𝑎31 − (𝑎31/𝑎11)𝑎11 = 10 − (10/1)(1) = 0
𝑎32^(1) = 𝑎32 − (𝑎31/𝑎11)𝑎12 = −1 − (10/1)(10) = −101
𝑎33^(1) = 𝑎33 − (𝑎31/𝑎11)𝑎13 = 2 − (10/1)(−1) = 12
𝑏3^(1) = 𝑏3 − (𝑎31/𝑎11)𝑏1 = 4 − (10/1)(3) = −26

[ 1    10    −1 |   3
  0   −17    22 |   1
  0  −101    12 | −26 ]
Second stage of elimination:
𝑅3^(2) = 𝑅3^(1) − (𝑎32^(1)/𝑎22^(1))𝑅2^(1) ≫
𝑎31^(2) = 𝑎31^(1) − (𝑎32^(1)/𝑎22^(1))𝑎21^(1) = 0 − (−101/−17)(0) = 0
𝑎32^(2) = 𝑎32^(1) − (𝑎32^(1)/𝑎22^(1))𝑎22^(1) = −101 − (−101/−17)(−17) = 0
𝑎33^(2) = 𝑎33^(1) − (𝑎32^(1)/𝑎22^(1))𝑎23^(1) = 12 − (−101/−17)(22) = −118.7058
𝑏3^(2) = 𝑏3^(1) − (𝑎32^(1)/𝑎22^(1))𝑏2^(1) = −26 − (−101/−17)(1) = −31.941

[ 1    10    −1         |   3
  0   −17    22         |   1
  0     0   −118.7058   | −31.941 ]
Phase 2: Back substitution
From the result obtained at the second stage of elimination, we can find the values of 𝑥𝑖 by back substitution:
−118.7058 𝑥3 = −31.941  ⇒  𝑥3 = 0.26908
−17𝑥2 + 22(0.26908) = 1  ⇒  𝑥2 = 0.2894
𝑥1 + 10(0.2894) − 0.26908 = 3  ⇒  𝑥1 = 0.3751
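As a quick check (not part of the original slides), numpy.linalg.solve, or the gauss_eliminate sketch above, reproduces these values up to round-off:

import numpy as np

A = np.array([[ 1.0, 10.0, -1.0],
              [ 2.0,  3.0, 20.0],
              [10.0, -1.0,  2.0]])
b = np.array([3.0, 7.0, 4.0])
print(np.linalg.solve(A, b))   # approx [0.3751, 0.2894, 0.2691]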
3.3. Gauss Seidel Iteration
₰ The Gauss elimination method discussed above is a direct method; the Gauss–Seidel method is an iterative approach for solving systems of linear equations.
₰ The Gauss–Seidel iteration method uses the updated values of 𝑥1, 𝑥2, . . . , 𝑥𝑖−1 when computing the value of the variable 𝑥𝑖. We assume that the pivots 𝑎𝑖𝑖 ≠ 0, for all i. We write the equations as;
𝑎11𝑥1 = 𝑏1 − 𝑎12𝑥2 − 𝑎13𝑥3
𝑎22𝑥2 = 𝑏2 − 𝑎21𝑥1 − 𝑎23𝑥3
𝑎33𝑥3 = 𝑏3 − 𝑎31𝑥1 − 𝑎32𝑥2        (3.3.1)
Cont . . .
⁜ Based on eq (3.3.1), the Gauss–Seidel iteration method is given by (a code sketch follows the remark below);
𝑥1^(k+1) = (1/𝑎11)(𝑏1 − 𝑎12𝑥2^(k) − 𝑎13𝑥3^(k))
𝑥2^(k+1) = (1/𝑎22)(𝑏2 − 𝑎21𝑥1^(k+1) − 𝑎23𝑥3^(k))
𝑥3^(k+1) = (1/𝑎33)(𝑏3 − 𝑎31𝑥1^(k+1) − 𝑎32𝑥2^(k+1))
where k = 0, 1, 2, . . . ⇝ iteration number
Remark:
⁜ A sufficient condition for convergence of the Gauss-Seidel method is that the system of
equations is diagonally dominant.
⁜ Since this condition is only sufficient (not necessary), convergence may still be obtained even if the system is not diagonally dominant.
⁜ If the system is not diagonally dominant, we may exchange the equations, if possible, such that
the new system is diagonally dominant and convergence is guaranteed.
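A minimal sketch of the iteration above (illustrative only; the helper name gauss_seidel and the stopping test on the change between successive iterates are assumptions, not from the slides):

import numpy as np

def gauss_seidel(A, b, x0=None, tol=5e-4, max_iter=100):
    """Gauss-Seidel iteration: each updated component is used immediately."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # already-updated values x[0:i], previous values x_old[i+1:n]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:   # stop when all changes are below the tolerance
            return x, k + 1
    return x, max_iter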
Cont . . .
Example 3.3.1:
Find the solution for a system of equations correct to three decimal places, using Gauss-Seidel
iteration method.
45𝑥1 + 2𝑥2 + 3𝑥3 = 58
– 3𝑥1 + 22𝑥2 + 2𝑥3 = 47
5𝑥1 + 𝑥2 + 20𝑥3 = 67
Cont . . .
Solution: The given system of equations is strongly diagonally dominant. Hence, we can expect fast convergence. Gauss-Seidel method gives the
iteration
𝑥1^(k+1) = (1/45)(58 − 2𝑥2^(k) − 3𝑥3^(k))
𝑥2^(k+1) = (1/22)(47 + 3𝑥1^(k+1) − 2𝑥3^(k))
𝑥3^(k+1) = (1/20)(67 − 5𝑥1^(k+1) − 𝑥2^(k+1))
where k = 0, 1, 2, . . . ⇝ iteration number
Starting with 𝑥1^(0) = 0, 𝑥2^(0) = 0, and 𝑥3^(0) = 0, we get the following results;
First iteration: k =0
𝑥1^(1) = (1/45)(58 − 2𝑥2^(0) − 3𝑥3^(0)) = (1/45)(58) = 1.28889
𝑥2^(1) = (1/22)(47 + 3𝑥1^(1) − 2𝑥3^(0)) = (1/22)(47 + 3(1.28889) − 0) = 2.31212
𝑥3^(1) = (1/20)(67 − 5𝑥1^(1) − 𝑥2^(1)) = (1/20)(67 − 5(1.28889) − 2.31212) = 2.91217
Second iteration: k =1
𝑥1^(2) = (1/45)(58 − 2𝑥2^(1) − 3𝑥3^(1)) = (1/45)(58 − 2(2.31212) − 3(2.91217)) = 0.99198
𝑥2^(2) = (1/22)(47 + 3𝑥1^(2) − 2𝑥3^(1)) = (1/22)(47 + 3(0.99198) − 2(2.91217)) = 2.00689
𝑥3^(2) = (1/20)(67 − 5𝑥1^(2) − 𝑥2^(2)) = (1/20)(67 − 5(0.99198) − 2.00689) = 3.00166
Cont . . .
Third iteration: k =2
𝑥1^(3) = (1/45)(58 − 2𝑥2^(2) − 3𝑥3^(2)) = (1/45)(58 − 2(2.00689) − 3(3.00166)) = 0.99958
𝑥2^(3) = (1/22)(47 + 3𝑥1^(3) − 2𝑥3^(2)) = (1/22)(47 + 3(0.99958) − 2(3.00166)) = 1.99979
𝑥3^(3) = (1/20)(67 − 5𝑥1^(3) − 𝑥2^(3)) = (1/20)(67 − 5(0.99958) − 1.99979) = 3.00012
Fourth iteration: k = 3
𝑥1^(4) = (1/45)(58 − 2𝑥2^(3) − 3𝑥3^(3)) = (1/45)(58 − 2(1.99979) − 3(3.00012)) = 1.00000
𝑥2^(4) = (1/22)(47 + 3𝑥1^(4) − 2𝑥3^(3)) = (1/22)(47 + 3(1.00000) − 2(3.00012)) = 1.99999
𝑥3^(4) = (1/20)(67 − 5𝑥1^(4) − 𝑥2^(4)) = (1/20)(67 − 5(1.00000) − 1.99999) = 3.00000
We find;
|𝑥1^(4) − 𝑥1^(3)| = |1.00000 − 0.99958| = 0.00042
|𝑥2^(4) − 𝑥2^(3)| = |1.99999 − 1.99979| = 0.00020
|𝑥3^(4) − 𝑥3^(3)| = |3.00000 − 3.00012| = 0.00012
Since all the errors in magnitude are less than 0.0005, the required solution is
𝑥1 = 1.00000, 𝑥2 = 1.99999, and 𝑥3 = 3.00000.
Rounding to three decimal places, we get 𝑥1 = 1.000, 𝑥2 = 2.000, and 𝑥3 = 3.000.
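Running the gauss_seidel sketch from Section 3.3 on this system with the same tolerance of 0.0005 reproduces the hand computation:

import numpy as np

A = np.array([[45.0,  2.0,  3.0],
              [-3.0, 22.0,  2.0],
              [ 5.0,  1.0, 20.0]])
b = np.array([58.0, 47.0, 67.0])
x, iters = gauss_seidel(A, b, tol=5e-4)
print(x, iters)   # approx [1.0, 2.0, 3.0] after 4 iterations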
3.4. LU - Decomposition
• In many applications where linear systems appear, one needs to solve Ax = b for many
different vectors b.
⇝ For instance, a structure must be tested under several different loads, not just one.
• Gaussian elimination with pivoting is an efficient and accurate way to solve a single linear
system.
• If we need to solve several different systems with the same 𝐴 , and 𝐴 is big, then we
would like to avoid repeating the steps of Gaussian elimination on 𝐴 for every
different 𝑏 .
• This can be accomplished by the LU decomposition, which in effect records the steps of
Gaussian elimination.
Cont . . .
₰ What we will do is decompose the matrix 𝑨 into the product of a lower
triangular and an upper triangular matrix:
𝐴 = 𝐿𝑈 (3.4.1)
₰ This allows us to solve linear systems by solving two triangular systems:
𝐿𝑦 = 𝑏
𝑈𝑥 = 𝑦
(3.4.2)
₰ Thus, we first solve 𝑳𝒚 = 𝒃 and then 𝑼𝒙 = 𝒚 to get the solution
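A sketch of the two triangular solves in (3.4.2), assuming L has ones on its diagonal (forward substitution for Ly = b, then back substitution for Ux = y); the helper name solve_lu is an assumption for illustration.

import numpy as np

def solve_lu(L, U, b):
    """Solve Ax = b given A = LU: forward substitution, then back substitution."""
    n = len(b)
    # Forward substitution: L y = b (L lower triangular with unit diagonal)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    # Back substitution: U x = y (U upper triangular)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x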
Cont . . .
• The main idea of the LU decomposition is to record the steps used in Gaussian elimination on A in
the places where the zero is produced. Consider the matrix:
𝐴 =
1 −2 3
2 −5 12
0 2 −10
• The first step of Gaussian elimination is to subtract 2 times the first row from the second row. In
order to record what we have done, we will put the multiplier 2, into the place it was used to make a
zero, i.e. the second row, first column.
1 −2 3
2 −1 6
0 2 −10
Cont . . .
• There is already a zero in the lower left corner, so we don’t need to eliminate anything there.
We record this fact with a (0). To eliminate the third row, second column, we need to subtract
−2 times the second row from the third row. Recording the −2 in the spot we have:
1 −2 3
2 −1 6
0 −2 2
• Let 𝑈 be the upper triangular matrix produced, and let 𝐿 be the lower triangular matrix with
the records and ones on the diagonal, i.e.:
𝐿 =
1 0 0
2 1 0
0 −2 1
𝑎𝑛𝑑 𝑈 =
1 −2 3
0 −1 6
0 0 2
• Then we have the following mysterious coincidence:
𝐿𝑈 =
1 0 0
2 1 0
0 −2 1
1 −2 3
0 −1 6
0 0 2
=
1 −2 3
2 −5 12
0 2 −10
= 𝐴
Thus we see that 𝐴 is actually the product of 𝐿 and 𝑈.
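The recording idea amounts to the short factorization routine sketched below (a Doolittle-style LU without pivoting; the multipliers become the below-diagonal entries of L and the reduced rows form U). The function name lu_decompose is assumed for illustration.

import numpy as np

def lu_decompose(A):
    """Factor A = LU by Gaussian elimination, storing the multipliers in L (no pivoting)."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # record the multiplier
            U[i, k:] -= L[i, k] * U[k, k:]   # produce the zero in position (i, k)
    return L, U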
Cont . . .
Example 3.4.1:
Solve the following system of equations using LU decomposition;
𝑥1 + 𝑥2 + 𝑥3 = 1
4𝑥1 + 3𝑥2 − 𝑥3 = 6
3𝑥1 + 5𝑥2 + 3𝑥3 = 4
Solution:
𝐴 =
1 1 1
4 3 −1
3 5 3
𝑎𝑛𝑑 𝑏𝑇 = 1 6 4
𝑅2^(1) = 𝑅2 − (4/1)𝑅1. Therefore the multiplier (4) will be saved, but only in row two, column one;
𝑅3^(1) = 𝑅3 − (3/1)𝑅1. Here also the multiplier (3) will be saved, but only in row three, column one;
𝑅2^(1) = 𝑅2 − (𝑎21/𝑎11)𝑅1 ≫≫
𝑎21^(1) = 𝑎21 − (𝑎21/𝑎11)𝑎11 = 4 − (4/1)(1) = 0, and the multiplier 4 is recorded in this position
𝑎22^(1) = 𝑎22 − (𝑎21/𝑎11)𝑎12 = 3 − (4/1)(1) = −1
𝑎23^(1) = 𝑎23 − (𝑎21/𝑎11)𝑎13 = −1 − (4/1)(1) = −5
and
𝑅3^(1) = 𝑅3 − (𝑎31/𝑎11)𝑅1 ≫≫
𝑎31^(1) = 𝑎31 − (𝑎31/𝑎11)𝑎11 = 3 − (3/1)(1) = 0, and the multiplier 3 is recorded in this position
𝑎32^(1) = 𝑎32 − (𝑎31/𝑎11)𝑎12 = 5 − (3/1)(1) = 2
𝑎33^(1) = 𝑎33 − (𝑎31/𝑎11)𝑎13 = 3 − (3/1)(1) = 0
Cont . . .
1 1 1
4 −1 −5
3 2 0
𝑅3^(2) = 𝑅3^(1) − (2/−1)𝑅2^(1). Here the multiplier (−2) will be saved, but only in row three, column two;
𝑅3^(2) = 𝑅3^(1) − (𝑎32^(1)/𝑎22^(1))𝑅2^(1) ≫
𝑎31^(2) = 3 (the multiplier recorded in row three, column one is left unchanged by this operation)
𝑎32^(2) = 𝑎32^(1) − (𝑎32^(1)/𝑎22^(1))𝑎22^(1) = 2 − (2/−1)(−1) = 0, and the multiplier (−2) is recorded in this position
𝑎33^(2) = 𝑎33^(1) − (𝑎32^(1)/𝑎22^(1))𝑎23^(1) = 0 − (2/−1)(−5) = −10
1 1 1
4 −1 −5
3 (−2) −10
• From the above result, the matrix 𝐴 is decomposed into lower and upper triangular factors as follows;
Cont . . .
Let 𝑈 be the upper triangular matrix produced, and let 𝐿 be the lower triangular matrix with the records and ones
on the diagonal
𝐿 =
1 0 0
4 1 0
3 −2 1
𝑎𝑛𝑑 𝑈 =
1 1 1
0 −1 −5
0 0 −10
From equation (3.4.2), we have
𝐿𝑦 = 𝑏, Recall: 𝑏 = [1  6  4]ᵀ ≫≫

[ 1   0  0     [ 𝑦1       [ 1
  4   1  0   ×   𝑦2   =     6
  3  −2  1 ]     𝑦3 ]       4 ]

Forward substitution gives 𝑦1 = 1, 𝑦2 = 2, and 𝑦3 = 5.
Similarly, from equation (3.4.2), we have;
𝑈𝑥 = 𝑦 ≫≫

[ 1   1    1     [ 𝑥1       [ 1
  0  −1   −5   ×   𝑥2   =     2
  0   0  −10 ]     𝑥3 ]       5 ]

Back substitution gives 𝑥1 = 1, 𝑥2 = 0.5, and 𝑥3 = −0.5.
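Putting the earlier sketches together on this example (lu_decompose and solve_lu as defined above) reproduces the result:

import numpy as np

A = np.array([[1.0, 1.0,  1.0],
              [4.0, 3.0, -1.0],
              [3.0, 5.0,  3.0]])
b = np.array([1.0, 6.0, 4.0])

L, U = lu_decompose(A)   # L holds the multipliers 4, 3, -2 below its diagonal
x = solve_lu(L, U, b)
print(x)                 # approx [1.0, 0.5, -0.5]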
Thank You!
