The document discusses numerical methods for solving linear algebraic equations. It begins by introducing the general form of a system of n linear equations with n unknowns.
It then describes two classes of solution methods: direct and iterative. Gaussian elimination is presented as a direct method that transforms the system of equations into an upper triangular system, which can then be solved using back substitution. Gauss-Seidel iteration is introduced as an iterative method that uses successive approximations to obtain the solution. The document provides examples to illustrate how to apply both Gaussian elimination and Gauss-Seidel iteration to solve systems of linear equations, and it concludes with LU decomposition, which records the steps of Gaussian elimination so that several systems with the same coefficient matrix can be solved efficiently.
Introduction to the study of linear algebraic equations, matrix notation, and classification of solution methods. Discusses direct vs iterative methods and operational counts.
Description of the Gauss elimination approach, including stages of elimination, back substitution, and pitfalls of zero pivots. Discusses the role of partial pivoting for accuracy.
Introduction to Gauss-Seidel iteration as an iterative method for solving systems of linear equations, emphasizing convergence conditions and providing a worked example to be solved iteratively.
Discussion of LU decomposition for efficient solution of linear systems, illustrating how the matrix is decomposed and providing a practical worked example.
3.1. INTRODUCTION
• Consider a system of n linear algebraic equations in n unknowns:
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}$$
• where $a_{ij}$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, n$, are the known coefficients, $b_i$, $i = 1, 2, \ldots, n$, are the known right-hand-side values, and $x_i$, $i = 1, 2, \ldots, n$, are the unknowns to be determined.
• In matrix notation we write the system as
$$\mathbf{A}\mathbf{x} = \mathbf{b} \qquad (3.1)$$
where
$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \quad
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \quad
\mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$$
• The matrix [A | b], obtained by appending the column b to the matrix A, is called the augmented matrix. That is,
$$[\mathbf{A} \mid \mathbf{b}] = \left[\begin{array}{cccc|c}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} & b_n
\end{array}\right]$$
• The methods of solution of linear algebraic equations may be classified as:
i. Direct methods produce the exact solution after a finite number of steps (disregarding round-off errors). We can determine the total number of operations (additions, subtractions, multiplications and divisions) required; this number is called the operational count of the method.
ii. Iterative methods are based on the idea of successive approximations. We start with an initial approximation to the solution vector, $\mathbf{x} = \mathbf{x}_0$, and obtain a sequence of approximate vectors $\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_k$ that, under suitable conditions, converges to the exact solution.
• Gauss elimination, Gauss-Jordan elimination, and LU decomposition are examples of direct methods; Gauss-Seidel iteration (Section 3.3) is an iterative method.
3.2. Gauss Elimination
• The method is based on the idea of reducing the given system of equations 𝑨𝒙 = 𝒃 to an upper triangular system of equations 𝑼𝒙 = 𝒛, using elementary row operations.
• This reduced system 𝑼𝒙 = 𝒛 is then solved by the back substitution method to obtain the solution vector x.
• Consider the augmented matrix of a 3 × 3 system:
$$[\mathbf{A} \mid \mathbf{b}] = \left[\begin{array}{ccc|c}
a_{11} & a_{12} & a_{13} & b_1 \\
a_{21} & a_{22} & a_{23} & b_2 \\
a_{31} & a_{32} & a_{33} & b_3
\end{array}\right] \qquad (3.2)$$
First stage of elimination
• For $a_{11} \neq 0$, the element $a_{11}$ in the (1, 1) position is called the first pivot.
• Multiply the first row of (3.2) by $a_{21}/a_{11}$ and $a_{31}/a_{11}$ and subtract from the second and third rows, respectively. These are the elementary row operations $R_2 \leftarrow R_2 - (a_{21}/a_{11})R_1$ and $R_3 \leftarrow R_3 - (a_{31}/a_{11})R_1$.
• The new augmented matrix is
$$\left[\begin{array}{ccc|c}
a_{11} & a_{12} & a_{13} & b_1 \\
0 & a_{22}^{(1)} & a_{23}^{(1)} & b_2^{(1)} \\
0 & a_{32}^{(1)} & a_{33}^{(1)} & b_3^{(1)}
\end{array}\right] \qquad (3.3)$$
• where:
$$a_{22}^{(1)} = a_{22} - \frac{a_{21}}{a_{11}}a_{12}, \quad
a_{23}^{(1)} = a_{23} - \frac{a_{21}}{a_{11}}a_{13}, \quad
b_{2}^{(1)} = b_2 - \frac{a_{21}}{a_{11}}b_1$$
$$a_{32}^{(1)} = a_{32} - \frac{a_{31}}{a_{11}}a_{12}, \quad
a_{33}^{(1)} = a_{33} - \frac{a_{31}}{a_{11}}a_{13}, \quad
b_{3}^{(1)} = b_3 - \frac{a_{31}}{a_{11}}b_1$$
Second stage of elimination
• Assume $a_{22}^{(1)} \neq 0$. The element $a_{22}^{(1)}$ in the (2, 2) position is called the second pivot.
• Multiply the second row of (3.3) by $a_{32}^{(1)}/a_{22}^{(1)}$ and subtract from the third row, i.e. the elementary row operation $R_3 \leftarrow R_3 - \left(a_{32}^{(1)}/a_{22}^{(1)}\right)R_2$.
• We obtain the new augmented matrix
$$\left[\begin{array}{ccc|c}
a_{11} & a_{12} & a_{13} & b_1 \\
0 & a_{22}^{(1)} & a_{23}^{(1)} & b_2^{(1)} \\
0 & 0 & a_{33}^{(2)} & b_3^{(2)}
\end{array}\right] \qquad (3.4)$$
• where:
$$a_{33}^{(2)} = a_{33}^{(1)} - \frac{a_{32}^{(1)}}{a_{22}^{(1)}}a_{23}^{(1)}, \qquad
b_{3}^{(2)} = b_3^{(1)} - \frac{a_{32}^{(1)}}{a_{22}^{(1)}}b_2^{(1)}$$
• In (3.4), the element $a_{33}^{(2)}$ (assumed nonzero) is called the third pivot.
• The system in (3.4) is in the required upper triangular form $[\mathbf{U} \mid \mathbf{z}]$.
• The solution vector $\mathbf{x}$ is now obtained by back substitution.
Back substitution:
$$x_3 = \frac{b_3^{(2)}}{a_{33}^{(2)}}, \qquad
x_2 = \frac{b_2^{(1)} - a_{23}^{(1)}x_3}{a_{22}^{(1)}}, \qquad
x_1 = \frac{b_1 - a_{12}x_2 - a_{13}x_3}{a_{11}}$$
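The two phases above map directly onto a short program. Below is a minimal sketch in Python (the function names `gauss_eliminate` and `back_substitute` are illustrative choices, not taken from the text); it performs the forward elimination without pivoting, so it assumes every pivot it encounters is nonzero.

```python
def gauss_eliminate(A, b):
    """Reduce [A | b] to upper triangular form [U | z] (no pivoting).

    Assumes all pivots are nonzero; see the remarks on partial
    pivoting below for the safer variant.
    """
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n - 1):              # elimination stage k+1
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]       # multiplier a_ik / a_kk
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    return A, b

def back_substitute(U, z):
    """Solve the upper triangular system U x = z by back substitution."""
    n = len(z)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (z[i] - s) / U[i][i]
    return x
```

For a 3 × 3 system this reproduces the stages shown in (3.3) and (3.4) and the back-substitution formulas above.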
Remarks:
• The Gauss elimination method fails when:
i. any one of the pivots is zero; as the elimination progresses, a zero pivot forces a division by zero, which is not defined, and the method breaks down.
ii. a pivot is a very small number; division by it introduces large round-off errors, and the solution may contain large errors.
Partial pivoting – avoiding failure of Gauss elimination:
• In the first stage of elimination, the first column of the augmented matrix is searched for the element of largest magnitude, which is brought into the first pivot position by interchanging the first row with the row containing that element.
• In the second stage of elimination, the second column is searched for the element of largest magnitude among the n − 1 elements below the first element, and this element is brought into the second pivot position by interchanging the second row with the later row containing it.
• This procedure is continued until the upper triangular system is obtained.
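As a sketch of how this partial-pivoting strategy changes the elimination loop, the following variant (again illustrative code working on plain Python lists, not from the text) searches each column for the entry of largest magnitude on or below the diagonal and swaps rows before eliminating.

```python
def gauss_eliminate_pivot(A, b):
    """Forward elimination with partial pivoting on [A | b]."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # find the row with the largest |a_ik| for i >= k and swap it up
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0.0:
            raise ValueError("matrix is singular")
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    return A, b
```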
3.3. Gauss-Seidel Iteration
• The Gauss elimination method discussed above is a direct method; the Gauss-Seidel method is an iterative approach for solving systems of linear equations.
• The Gauss-Seidel iteration method uses the updated values of $x_1, x_2, \ldots, x_{i-1}$ when computing the value of the variable $x_i$. We assume that the pivots $a_{ii} \neq 0$ for all $i$. For a 3 × 3 system we write the equations as:
$$\begin{aligned}
a_{11}x_1 &= b_1 - a_{12}x_2 - a_{13}x_3 \\
a_{22}x_2 &= b_2 - a_{21}x_1 - a_{23}x_3 \\
a_{33}x_3 &= b_3 - a_{31}x_1 - a_{32}x_2
\end{aligned} \qquad (3.3.1)$$
• Based on eq. (3.3.1), the Gauss-Seidel iteration method is given by:
$$\begin{aligned}
x_1^{(k+1)} &= \frac{1}{a_{11}}\left(b_1 - a_{12}x_2^{(k)} - a_{13}x_3^{(k)}\right) \\
x_2^{(k+1)} &= \frac{1}{a_{22}}\left(b_2 - a_{21}x_1^{(k+1)} - a_{23}x_3^{(k)}\right) \\
x_3^{(k+1)} &= \frac{1}{a_{33}}\left(b_3 - a_{31}x_1^{(k+1)} - a_{32}x_2^{(k+1)}\right)
\end{aligned}$$
where $k = 0, 1, 2, \ldots$ is the iteration number.
Remark:
• A sufficient condition for convergence of the Gauss-Seidel method is that the system of equations is diagonally dominant.
• Because this condition is only sufficient, convergence may still be obtained even if the system is not diagonally dominant.
• If the system is not diagonally dominant, we may exchange the equations, if possible, so that the new system is diagonally dominant and convergence is guaranteed.
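A minimal sketch of these update formulas in Python follows; the function name `gauss_seidel`, the zero starting vector, and the stopping test on successive iterates are choices made here for illustration, not prescribed by the text.

```python
def gauss_seidel(A, b, x0=None, tol=1e-6, max_iter=100):
    """Gauss-Seidel iteration for A x = b, assuming a_ii != 0 for all i.

    Newly updated components x_1, ..., x_{i-1} are used immediately
    when computing x_i, exactly as in the update equations above.
    """
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for k in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x
```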
Example 3.3.1:
Find the solution of the following system of equations, correct to three decimal places, using the Gauss-Seidel iteration method.
$$\begin{aligned}
45x_1 + 2x_2 + 3x_3 &= 58 \\
-3x_1 + 22x_2 + 2x_3 &= 47 \\
5x_1 + x_2 + 20x_3 &= 67
\end{aligned}$$
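Using the `gauss_seidel` sketch above, the example system could be set up as shown below. The system is diagonally dominant (45 > 2 + 3, 22 > 3 + 2, 20 > 5 + 1), so convergence is guaranteed, and the iterates approach the exact solution x1 = 1, x2 = 2, x3 = 3.

```python
A = [[45.0, 2.0, 3.0],
     [-3.0, 22.0, 2.0],
     [5.0, 1.0, 20.0]]
b = [58.0, 47.0, 67.0]

x = gauss_seidel(A, b, tol=0.5e-3)   # roughly three-decimal accuracy
print(x)                             # converges toward [1.0, 2.0, 3.0]
```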
3.4. LU Decomposition
• In many applications where linear systems appear, one needs to solve Ax = b for many
different vectors b.
⇝ For instance, a structure must be tested under several different loads, not just one.
• Gaussian elimination with pivoting is among the most efficient and accurate ways to solve a linear system.
• If we need to solve several different systems with the same 𝐴, and 𝐴 is big, then we would like to avoid repeating the steps of Gaussian elimination on 𝐴 for every different 𝑏.
• This can be accomplished by the LU decomposition, which in effect records the steps of Gaussian elimination.
• We decompose the matrix 𝑨 into the product of a lower triangular and an upper triangular matrix:
$$A = LU \qquad (3.4.1)$$
• This allows us to solve linear systems by solving two triangular systems:
$$Ly = b, \qquad Ux = y \qquad (3.4.2)$$
• Thus, we first solve 𝑳𝒚 = 𝒃 for y and then 𝑼𝒙 = 𝒚 to get the solution x.
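The two triangular solves in (3.4.2) can be sketched as follows, assuming L and U are already available (for example from the construction described next) and that L has ones on its diagonal; the function name `solve_lu` is illustrative.

```python
def solve_lu(L, U, b):
    """Solve A x = b given A = L U: forward solve L y = b, then back solve U x = y."""
    n = len(b)
    # forward substitution (L is lower triangular with ones on the diagonal)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    # back substitution (U is upper triangular)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```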
• The main idea of the LU decomposition is to record the steps used in Gaussian elimination on A in the places where the zeros are produced. Consider the matrix:
$$A = \begin{bmatrix} 1 & -2 & 3 \\ 2 & -5 & 12 \\ 0 & 2 & -10 \end{bmatrix}$$
• The first step of Gaussian elimination is to subtract 2 times the first row from the second row. In order to record what we have done, we put the multiplier 2 into the place it was used to make a zero, i.e. the second row, first column:
$$\begin{bmatrix} 1 & -2 & 3 \\ (2) & -1 & 6 \\ 0 & 2 & -10 \end{bmatrix}$$
• There is already a zero in the lower left corner, so we don't need to eliminate anything there. We record this fact with a (0). To eliminate the third row, second column, we need to subtract −2 times the second row from the third row. Recording the −2 in that spot we have:
$$\begin{bmatrix} 1 & -2 & 3 \\ (2) & -1 & 6 \\ (0) & (-2) & 2 \end{bmatrix}$$
• Let $U$ be the upper triangular matrix produced, and let $L$ be the lower triangular matrix with the recorded multipliers below the diagonal and ones on the diagonal, i.e.:
$$L = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & -2 & 1 \end{bmatrix} \quad \text{and} \quad U = \begin{bmatrix} 1 & -2 & 3 \\ 0 & -1 & 6 \\ 0 & 0 & 2 \end{bmatrix}$$
• Then we have the following mysterious coincidence:
$$LU = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & -2 & 1 \end{bmatrix}
\begin{bmatrix} 1 & -2 & 3 \\ 0 & -1 & 6 \\ 0 & 0 & 2 \end{bmatrix}
= \begin{bmatrix} 1 & -2 & 3 \\ 2 & -5 & 12 \\ 0 & 2 & -10 \end{bmatrix} = A$$
Thus we see that $A$ is indeed the product of $L$ and $U$.
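The "store each multiplier where it produced a zero" idea sketches out as follows in Python (illustrative, no pivoting, all pivots assumed nonzero):

```python
def lu_decompose(A):
    """Return (L, U) with A = L U, L unit lower triangular (no pivoting)."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]   # the multiplier that creates the zero
            L[i][k] = m             # record it in L, below the diagonal
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U
```

For the matrix above, `lu_decompose([[1, -2, 3], [2, -5, 12], [0, 2, -10]])` reproduces the L and U just shown.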
Example 3.5.1:
Solve the system of equations shown in Example 3.4.1 using LU decomposition:
$$\begin{aligned}
x_1 + x_2 + x_3 &= 1 \\
4x_1 + 3x_2 - x_3 &= 6 \\
3x_1 + 5x_2 + 3x_3 &= 4
\end{aligned}$$
Solution:
$$A = \begin{bmatrix} 1 & 1 & 1 \\ 4 & 3 & -1 \\ 3 & 5 & 3 \end{bmatrix} \quad \text{and} \quad b^T = \begin{bmatrix} 1 & 6 & 4 \end{bmatrix}$$
$R_2^{(1)} = R_2 - \frac{4}{1}R_1$: the multiplier (4) is saved, but only in row two, column one;
$R_3^{(1)} = R_3 - \frac{3}{1}R_1$: here also the multiplier (3) is saved, but only in row three, column one.
$$R_2^{(1)} = R_2 - \frac{a_{21}}{a_{11}}R_1: \quad
a_{21}^{(1)} = a_{21} - \frac{a_{21}}{a_{11}}a_{11} = 4 - \frac{4}{1}(1) = 0 \;\;\text{(multiplier 4 recorded here)},$$
$$a_{22}^{(1)} = a_{22} - \frac{a_{21}}{a_{11}}a_{12} = 3 - \frac{4}{1}(1) = -1, \qquad
a_{23}^{(1)} = a_{23} - \frac{a_{21}}{a_{11}}a_{13} = -1 - \frac{4}{1}(1) = -5$$
and
$$R_3^{(1)} = R_3 - \frac{a_{31}}{a_{11}}R_1: \quad
a_{31}^{(1)} = a_{31} - \frac{a_{31}}{a_{11}}a_{11} = 3 - \frac{3}{1}(1) = 0 \;\;\text{(multiplier 3 recorded here)},$$
$$a_{32}^{(1)} = a_{32} - \frac{a_{31}}{a_{11}}a_{12} = 5 - \frac{3}{1}(1) = 2, \qquad
a_{33}^{(1)} = a_{33} - \frac{a_{31}}{a_{11}}a_{13} = 3 - \frac{3}{1}(1) = 0$$
$$\begin{bmatrix} 1 & 1 & 1 \\ (4) & -1 & -5 \\ (3) & 2 & 0 \end{bmatrix}$$
$R_3^{(2)} = R_3^{(1)} - \frac{2}{-1}R_2^{(1)}$: here the multiplier (−2) is saved, but only in row three, column two;
$$R_3^{(2)} = R_3^{(1)} - \frac{a_{32}^{(1)}}{a_{22}^{(1)}}R_2^{(1)}: \quad
a_{31}^{(2)} = a_{31}^{(1)} - \frac{a_{32}^{(1)}}{a_{22}^{(1)}}a_{21}^{(1)} = 0 - \frac{2}{-1}(0) = 0 \;\;\text{(the saved multiplier 3 stays in place)},$$
$$a_{32}^{(2)} = a_{32}^{(1)} - \frac{a_{32}^{(1)}}{a_{22}^{(1)}}a_{22}^{(1)} = 2 - \frac{2}{-1}(-1) = 0 \;\;\text{(multiplier $-2$ recorded here)},$$
$$a_{33}^{(2)} = a_{33}^{(1)} - \frac{a_{32}^{(1)}}{a_{22}^{(1)}}a_{23}^{(1)} = 0 - \frac{2}{-1}(-5) = -10$$
$$\begin{bmatrix} 1 & 1 & 1 \\ (4) & -1 & -5 \\ (3) & (-2) & -10 \end{bmatrix}$$
• From the above result, the matrix $A$ is decomposed into lower and upper triangular factors as follows.
Let $U$ be the upper triangular matrix produced, and let $L$ be the lower triangular matrix with the recorded multipliers below the diagonal and ones on the diagonal:
$$L = \begin{bmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 3 & -2 & 1 \end{bmatrix} \quad \text{and} \quad U = \begin{bmatrix} 1 & 1 & 1 \\ 0 & -1 & -5 \\ 0 & 0 & -10 \end{bmatrix}$$
From equation (3.4.2), we have $Ly = b$. Recall that $b^T = \begin{bmatrix} 1 & 6 & 4 \end{bmatrix}$:
$$\begin{bmatrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 3 & -2 & 1 \end{bmatrix}
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} =
\begin{bmatrix} 1 \\ 6 \\ 4 \end{bmatrix}
\quad\Rightarrow\quad y_1 = 1,\; y_2 = 2,\; y_3 = 5$$
Similarly, from equation (3.4.2), we have $Ux = y$:
$$\begin{bmatrix} 1 & 1 & 1 \\ 0 & -1 & -5 \\ 0 & 0 & -10 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix}
\quad\Rightarrow\quad x_1 = 1,\; x_2 = 0.5,\; x_3 = -0.5$$
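As a quick check of Example 3.5.1 with the `lu_decompose` and `solve_lu` sketches given earlier:

```python
A = [[1.0, 1.0, 1.0],
     [4.0, 3.0, -1.0],
     [3.0, 5.0, 3.0]]
b = [1.0, 6.0, 4.0]

L, U = lu_decompose(A)   # L = [[1,0,0],[4,1,0],[3,-2,1]], U = [[1,1,1],[0,-1,-5],[0,0,-10]]
x = solve_lu(L, U, b)
print(x)                 # approximately [1.0, 0.5, -0.5]
```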