MATRIX METHODS
SYSTEMS OF LINEAR EQUATIONS
ENGR 351
Numerical Methods for Engineers
Southern Illinois University Carbondale
College of Engineering
Dr. L.R. Chevalier
Dr. B.A. DeVantier
Copyright© 2003 by Lizette R. Chevalier and Bruce A. DeVantier
Permission is granted to students at Southern Illinois University at Carbondale
to make one copy of this material for use in the class ENGR 351, Numerical
Methods for Engineers. No other permission is granted.
All other rights are reserved. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording, or otherwise, without
the prior written permission of the copyright owner.
Systems of Linear Algebraic
Equations
Specific Study Objectives
• Understand the graphic interpretation of
ill-conditioned systems and how it
relates to the determinant
• Be familiar with terminology: forward
elimination, back substitution, pivot
equations and pivot coefficient
• Apply matrix inversion to evaluate stimulus-
response computations in engineering
• Understand why the Gauss-Seidel method is
particularly well-suited for large sparse
systems of equations
• Know how to assess diagonal dominance of a
system of equations and how it relates to
whether the system can be solved with the
Gauss-Seidel method
Specific Study Objectives
• Understand the rationale behind
relaxation and how to apply this
technique
Specific Study Objectives
How to represent a system of linear
equations as a matrix
[A]{x} = {c}
where {x} and {c} are both column vectors
0.3x1 + 0.52x2 +    x3 = -0.01
0.5x1 +     x2 + 1.9x3 =  0.67
0.1x1 +  0.3x2 + 0.5x3 = -0.44

[A]{X} = {C}

      | 0.3  0.52  1   |        | x1 |        | -0.01 |
[A] = | 0.5  1     1.9 |  {X} = | x2 |  {C} = |  0.67 |
      | 0.1  0.3   0.5 |        | x3 |        | -0.44 |
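As a quick numerical check, the system above can be solved directly. A minimal sketch, assuming NumPy is available and the example system is the one recovered above:

```python
import numpy as np

# Coefficient matrix [A] and right-hand-side vector {C} from the slide
A = np.array([[0.3, 0.52, 1.0],
              [0.5, 1.0,  1.9],
              [0.1, 0.3,  0.5]])
c = np.array([-0.01, 0.67, -0.44])

x = np.linalg.solve(A, c)   # solves [A]{x} = {c}
print(x)
```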
Practical application
• Consider a problem in structural
engineering
• Find the forces and reactions
associated with a statically determinant
truss
hinge: transmits both
vertical and horizontal
forces at the surface
roller: transmits
vertical forces
(Figure: truss with nodes 1, 2, 3; member angles 30°, 60°, and 90°; a 1000-kg load; reactions H2, V2 at node 2 and V3 at node 3.)

FREE BODY DIAGRAM
At each node, ΣFH = 0 and ΣFV = 0 (the members carry forces F1, F2, F3).
Node 1 (applied external forces F1,H and F1,V; here F1,H = 0 and F1,V = -1000):

ΣFH = 0 = -F1 cos30° + F3 cos60° + F1,H
ΣFV = 0 = -F1 sin30° - F3 sin60° + F1,V

-F1 cos30° + F3 cos60° = 0
-F1 sin30° - F3 sin60° = -1000
Node 2:

ΣFH = 0 = F2 + H2 + F1 cos30°
ΣFV = 0 = V2 + F1 sin30°
Node 3:

ΣFH = 0 = -F2 - F3 cos60°
ΣFV = 0 = V3 + F3 sin60°
-cos30° F1 + cos60° F3 = 0
-sin30° F1 - sin60° F3 = -1000
H2 + F2 + cos30° F1 = 0
V2 + sin30° F1 = 0
-F2 - cos60° F3 = 0
V3 + sin60° F3 = 0
SIX EQUATIONS
SIX UNKNOWNS
F1 F2 F3 H2 V2 V3
Eq |   F1      F2    F3     H2  V2  V3 |    c
 1 | -cos30    0    cos60    0   0   0 |     0
 2 | -sin30    0   -sin60    0   0   0 | -1000
 3 |  cos30    1    0        1   0   0 |     0
 4 |  sin30    0    0        0   1   0 |     0
 5 |  0       -1   -cos60    0   0   0 |     0
 6 |  0        0    sin60    0   0   1 |     0

Do some bookkeeping. This is the basis for your matrices and the equation
[A]{x} = {c}
| -0.866  0    0.5    0  0  0 | | F1 |   |     0 |
| -0.5    0   -0.866  0  0  0 | | F2 |   | -1000 |
|  0.866  1    0      1  0  0 | | F3 | = |     0 |
|  0.5    0    0      0  1  0 | | H2 |   |     0 |
|  0     -1   -0.5    0  0  0 | | V2 |   |     0 |
|  0      0    0.866  0  0  1 | | V3 |   |     0 |
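The full 6 × 6 truss system can be solved in one call. A sketch assuming NumPy, with the unknowns ordered F1, F2, F3, H2, V2, V3 as in the bookkeeping table:

```python
import numpy as np

# Truss coefficient matrix (cos30° ≈ 0.866, cos60° = 0.5) and load vector
A = np.array([[-0.866, 0,  0.5,   0, 0, 0],
              [-0.5,   0, -0.866, 0, 0, 0],
              [ 0.866, 1,  0,     1, 0, 0],
              [ 0.5,   0,  0,     0, 1, 0],
              [ 0,    -1, -0.5,   0, 0, 0],
              [ 0,     0,  0.866, 0, 0, 1]])
b = np.array([0, -1000, 0, 0, 0, 0])

x = np.linalg.solve(A, b)   # [F1, F2, F3, H2, V2, V3]
print(x)
```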
System of Linear Equations
• We have focused our last lectures on
finding a value of x that satisfied a
single equation
• f(x) = 0
• Now we will deal with the case of
determining the values of x1, x2, .....xn,
that simultaneously satisfy a set of
equations
System of Linear Equations
• Simultaneous equations
• f1(x1, x2, .....xn) = 0
• f2(x1, x2, .....xn) = 0
• .............
• fn(x1, x2, .....xn) = 0
• Methods will be for linear equations
• a11x1 + a12x2 +...... a1nxn =c1
• a21x1 + a22x2 +...... a2nxn =c2
•
..........
Mathematical Background
Matrix Notation
• a horizontal set of elements is called a row
• a vertical set is called a column
• first subscript refers to the row number
• second subscript refers to column number
      | a11 a12 a13 ... a1n |
[A] = | a21 a22 a23 ... a2n |
      |  .   .   .       .  |
      | am1 am2 am3 ... amn |
This matrix has m rows and n columns.
It has the dimensions m by n (m x n)
      | a11 a12 a13 ... a1n |
[A] = | a21 a22 a23 ... a2n |        e.g. a23 sits in row 2, column 3
      |  .   .   .       .  |
      | am1 am2 am3 ... amn |

Note the consistent scheme with subscripts denoting row, column.
Row vector (m = 1):     [B] = [b1 b2 ... bn]

                              | c1 |
Column vector (n = 1):  [C] = | c2 |
                              | .  |
                              | cm |

                              | a11 a12 a13 |
Square matrix (m = n):  [A] = | a21 a22 a23 |
                              | a31 a32 a33 |
Types of Matrices
• Symmetric matrix
• Diagonal matrix
• Identity matrix
• Inverse of a matrix
• Transpose of a matrix
• Upper triangular matrix
• Lower triangular matrix
• Banded matrix
Definitions
Symmetric Matrix
aij = aji for all i’s and j’s
      | 5 1 2 |
[A] = | 1 3 7 |
      | 2 7 8 |
Does a23 = a32 ?
Yes. Check the other elements
on your own.
Diagonal Matrix
A square matrix where all elements
off the main diagonal are zero
      | a11  0   0   0  |
[A] = |  0  a22  0   0  |
      |  0   0  a33  0  |
      |  0   0   0  a44 |
Identity Matrix
A diagonal matrix where all elements
on the main diagonal are equal to 1
      | 1 0 0 0 |
[A] = | 0 1 0 0 |
      | 0 0 1 0 |
      | 0 0 0 1 |

The symbol [I] is used to denote the identity matrix.
Inverse of [A]
[A][A]-1 = [A]-1[A] = [I]
Transpose of [A]
       | a11 a21 ... am1 |
[A]t = | a12 a22 ... am2 |
       |  .   .       .  |
       | a1n a2n ... amn |
Upper Triangular Matrix
Elements below the main diagonal
are zero

      | a11 a12 a13 |
[A] = |  0  a22 a23 |
      |  0   0  a33 |
Lower Triangular Matrix
All elements above the main
diagonal are zero
      | 5 0 0 |
[A] = | 1 3 0 |
      | 2 7 8 |
Banded Matrix
All elements are zero with the
exception of a band centered on the
main diagonal
      | a11 a12  0   0  |
[A] = | a21 a22 a23  0  |
      |  0  a32 a33 a34 |
      |  0   0  a43 a44 |
Matrix Operating Rules
• Addition/subtraction
• add/subtract corresponding terms
• aij + bij = cij
• Addition/subtraction are commutative
• [A] + [B] = [B] + [A]
• Addition/subtraction are associative
• [A] + ([B]+[C]) = ([A] +[B]) + [C]
Matrix Operating Rules
• Multiplication of a matrix [A] by a scalar
g is obtained by multiplying every
element of [A] by g
             | ga11 ga12 ... ga1n |
[B] = g[A] = | ga21 ga22 ... ga2n |
             |  .    .        .   |
             | gam1 gam2 ... gamn |
Matrix Operating Rules
• The product of two matrices is represented as
[C] = [A][B]
• n = column dimensions of [A]
• n = row dimensions of [B]
cij = Σ (k = 1 to n) aik bkj
[A] m x n [B] n x k = [C] m x k
interior dimensions
must be equal
exterior dimensions conform to dimension of resulting matrix
Simple way to check whether
matrix multiplication is possible
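The summation formula above can be written directly as a triple loop. A sketch with a small hypothetical pair of 2 × 2 matrices:

```python
# Multiply an m x n matrix by an n x k matrix using c_ij = sum_k a_ik * b_kj
def matmul(A, B):
    m, n, k = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "interior dimensions must be equal"
    C = [[0] * k for _ in range(m)]
    for i in range(m):
        for j in range(k):
            # c_ij: sum over the interior index
            C[i][j] = sum(A[i][p] * B[p][j] for p in range(n))
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # → [[19, 22], [43, 50]]
```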
Recall the equation presented for
matrix multiplication
• The product of two matrices is represented as
[C] = [A][B]
• n = column dimensions of [A]
• n = row dimensions of [B]
cij = Σ (k = 1 to n) aik bkj
Example
Determine [C] given [A][B] = [C]
      | 1 3 2 |           | 2 4 1 |
[A] = | 1 4 2 |     [B] = | 3 2 1 |
      | 0 2 3 |           | 3 0 2 |
Matrix multiplication
• If the dimensions are suitable, matrix
multiplication is associative
• ([A][B])[C] = [A]([B][C])
• If the dimensions are suitable, matrix
multiplication is distributive
• ([A] + [B])[C] = [A][C] + [B][C]
• Multiplication is generally not
commutative
• [A][B] is not equal to [B][A]
Determinants
Denoted as det A or |A|

For a 2 x 2 matrix:

| a b |
| c d | = ad - bc
Determinants
For a 3 x 3 matrix, expand along the first row with alternating signs (+ - +), each entry times the determinant of the 2 x 2 minor left when its row and column are removed:

det A = a11 | a22 a23 | - a12 | a21 a23 | + a13 | a21 a22 |
            | a32 a33 |       | a31 a33 |       | a31 a32 |
| 1 7 9 |
| 4 3 2 |
| 6 1 5 |
Problem
Determine the determinant of the matrix.
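A sketch of the cofactor expansion for this problem, assuming the matrix reads [1 7 9; 4 3 2; 6 1 5] as recovered from the slide:

```python
import numpy as np

A = np.array([[1, 7, 9],
              [4, 3, 2],
              [6, 1, 5]], dtype=float)

# Cofactor expansion along the first row (signs + - +)
det = (  A[0, 0] * (A[1, 1] * A[2, 2] - A[1, 2] * A[2, 1])
       - A[0, 1] * (A[1, 0] * A[2, 2] - A[1, 2] * A[2, 0])
       + A[0, 2] * (A[1, 0] * A[2, 1] - A[1, 1] * A[2, 0]))
print(det)   # agrees with np.linalg.det(A)
```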
Properties of Determinants
• det A = det AT
• If all entries of any row or column are
zero, then det A = 0
• If two rows or two columns are identical,
then det A = 0
• Note: determinants can be calculated
using mdeterm function in Excel
Excel Demonstration
• Excel treats matrices as arrays
• To obtain the results of multiplication,
addition, and inverse operations, you hit
control-shift-enter as opposed to enter.
• The resulting matrix cannot be altered…
let’s see an example using Excel in
class
matrix.xls
Matrix Methods
• Cramer’s Rule
• Gauss elimination
• Matrix inversion
• Gauss Seidel/Jacobi
      | a11 a12 a13 |
[A] = | a21 a22 a23 |
      | a31 a32 a33 |
Graphical Method
2 equations, 2 unknowns
a11x1 + a12x2 = c1
a21x1 + a22x2 = c2

Solve each equation for x2:

x2 = -(a11/a12) x1 + c1/a12
x2 = -(a21/a22) x1 + c2/a22

Plot both lines in the (x1, x2) plane; the intersection (x1, x2) is the solution.
Example: 3x1 + 2x2 = 18  →  x2 = -(3/2) x1 + 9

Example: -x1 + 2x2 = 2  →  x2 = (1/2) x1 + 1

(Each line can be plotted in the (x1, x2) plane.)
Plotting both together:

3x1 + 2x2 = 18  →  x2 = -(3/2) x1 + 9
 -x1 + 2x2 = 2  →  x2 = (1/2) x1 + 1

The lines intersect at (4, 3), which is the solution.
Check: 3(4) + 2(3) = 12 + 6 = 18
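The same intersection can be found numerically. A sketch assuming NumPy:

```python
import numpy as np

# 3x1 + 2x2 = 18 and -x1 + 2x2 = 2 as [A]{x} = {c}
A = np.array([[3.0, 2.0],
              [-1.0, 2.0]])
c = np.array([18.0, 2.0])

x = np.linalg.solve(A, c)
print(x)   # the intersection (4, 3)
```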
Special Cases
• No solution
• Infinite solution
• Ill-conditioned
(Figure: three panels of f(x) vs. x)
a) No solution: the lines have the same slope and never intersect.
b) Infinite solutions: the lines coincide, e.g. -1/2 x1 + x2 = 1 and -x1 + 2x2 = 2.
c) Ill-conditioned: the slopes are so close that the point of intersection is difficult to detect visually.
• If the determinant is zero, the slopes
are identical
a11x1 + a12x2 = c1
a21x1 + a22x2 = c2
Rearrange these equations so that we have an
alternative version in the form of a straight line:
i.e. x2 = (slope) x1 + intercept
Ill Conditioned Systems
x2 = -(a11/a12) x1 + c1/a12
x2 = -(a21/a22) x1 + c2/a22
If the slopes are nearly equal (ill-conditioned)
a11/a12 ≅ a21/a22

a11 a22 ≅ a21 a12

a11 a22 - a21 a12 ≅ 0

| a11 a12 |
| a21 a22 | = a11 a22 - a21 a12 = det A
Isn’t this the determinant?
Ill Conditioned Systems
If the determinant is zero the slopes are equal.
This can mean:
- no solution
- infinite number of solutions
If the determinant is close to zero, the system is ill
conditioned.
So it seems that we should check the determinant of a
system before doing any further calculations.
Let's try an example.
Ill Conditioned Systems
Example
Determine whether the following matrix is ill-conditioned.
| 37.2  4.7 | | x1 |   | 22 |
| 19.2  2.5 | | x2 | = | 12 |

det A = (37.2)(2.5) - (4.7)(19.2) = 93 - 90.24 = 2.76

What does this tell us? Is this close to zero? Hard to say.
If we scale the matrix first, i.e. divide each row by its
largest element, we can get a better sense of things.
Solution
(Figure: plot of the two lines for 0 ≤ x ≤ 15; y runs from -80 to 0.)

This is further justified when we consider a graph of the two functions. Clearly the slopes are nearly equal.

Scaling each row by its largest element:

| 1  0.126 |
| 1  0.130 |     det = (1)(0.130) - (0.126)(1) = 0.004
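The determinant check and the row-scaling trick can be sketched as follows (assuming NumPy):

```python
import numpy as np

A = np.array([[37.2, 4.7],
              [19.2, 2.5]])

print(np.linalg.det(A))            # ≈ 2.76: close to zero? hard to say

# Scale each row by its largest absolute element, then look again
scaled = A / np.abs(A).max(axis=1, keepdims=True)
print(np.linalg.det(scaled))       # ≈ 0.004: small -> ill-conditioned
```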
Solution
Another Check
• Scale the matrix of coefficients, [A], so that the
largest element in each row is 1. If there are
elements of [A]-1
that are several orders of magnitude
greater than one, it is likely that the system is ill-
conditioned.
• Multiply the inverse by the original coefficient matrix.
If the results are not close to the identity matrix, the
system is ill-conditioned.
• Invert the inverted matrix. If it is not close to the
original coefficient matrix, the system is ill-
conditioned.
We will consider how to obtain an inverted matrix later.
Cramer’s Rule
• Not efficient for solving large numbers
of linear equations
• Useful for explaining some inherent
problems associated with solving linear
equations.
[A]{x} = {b}

| a11 a12 a13 | | x1 |   | b1 |
| a21 a22 a23 | | x2 | = | b2 |
| a31 a32 a33 | | x3 |   | b3 |
Cramer’s Rule
To solve for xi, replace the ith column of [A] with {b} and divide by |A|:

x1 = (1/|A|) det | b1 a12 a13 |
                 | b2 a22 a23 |
                 | b3 a32 a33 |

x2 = (1/|A|) det | a11 b1 a13 |
                 | a21 b2 a23 |
                 | a31 b3 a33 |

x3 = (1/|A|) det | a11 a12 b1 |
                 | a21 a22 b2 |
                 | a31 a32 b3 |
EXAMPLE
Use of Cramer’s Rule
2x1 - 3x2 = 5
 x1 +  x2 = 5

| 2 -3 | | x1 |   | 5 |
| 1  1 | | x2 | = | 5 |

Solution

|A| = (2)(1) - (-3)(1) = 2 + 3 = 5

x1 = (1/|A|) det | 5 -3 | = [(5)(1) - (-3)(5)] / 5 = 20/5 = 4
                 | 5  1 |

x2 = (1/|A|) det | 2  5 | = [(2)(5) - (5)(1)] / 5 = 5/5 = 1
                 | 1  5 |
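Cramer's rule for this 2 × 2 example can be sketched directly:

```python
# Cramer's rule for: 2x1 - 3x2 = 5,  x1 + x2 = 5
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[2, -3],
     [1,  1]]
b = [5, 5]

dA = det2(A)                                          # |A| = 5
x1 = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / dA    # {b} replaces column 1
x2 = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / dA    # {b} replaces column 2
print(x1, x2)   # → 4.0 1.0
```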
Elimination of Unknowns
( algebraic approach)
a11x1 + a12x2 = c1
a21x1 + a22x2 = c2

Multiply the first equation by a21 and the second by a11:

a21a11x1 + a21a12x2 = a21c1
a11a21x1 + a11a22x2 = a11c2     SUBTRACT

(a11a22 - a21a12) x2 = a11c2 - a21c1

x2 = (a11c2 - a21c1) / (a11a22 - a21a12)

x1 = (a22c1 - a12c2) / (a11a22 - a21a12)

NOTE: same result as Cramer's Rule
Gauss Elimination
• One of the earliest methods developed
for solving simultaneous equations
• Important algorithm in use today
• Involves combining equations in order
to eliminate unknowns and create an
upper triangular matrix
• Progressively back substitute to find
each unknown
Two Phases of Gauss
Elimination
| a11 a12 a13 | c1 |        | a11 a12 a13 | c1   |
| a21 a22 a23 | c2 |   →    |  0  a22' a23'| c2'  |
| a31 a32 a33 | c3 |        |  0   0  a33''| c3'' |
Forward
Elimination
Note: the prime indicates
the number of times the
element has changed from
the original value.
Two Phases of Gauss
Elimination
| a11 a12 a13 | c1   |
|  0  a22' a23'| c2'  |
|  0   0  a33''| c3'' |

x3 = c3'' / a33''

x2 = (c2' - a23' x3) / a22'

x1 = (c1 - a12 x2 - a13 x3) / a11
Back substitution
Rules
• Any equation can be multiplied (or
divided) by a nonzero scalar
• Any equation can be added to (or
subtracted from) another equation
• The positions of any two equations in
the set can be interchanged.
EXAMPLE
2x1 +  x2 + 3x3 = 1
4x1 + 4x2 + 7x3 = 1
2x1 + 5x2 + 9x3 = 3
Perform Gauss Elimination of the following matrix.
Solution

2x1 +  x2 + 3x3 = 1
4x1 + 4x2 + 7x3 = 1
2x1 + 5x2 + 9x3 = 3

Multiply the first equation by a21/a11 = 4/2 = 2
(note: a11 is called the pivot element):

4x1 + 2x2 + 6x3 = 2

Subtract the revised first equation from the second equation:

(4 - 4)x1 + (4 - 2)x2 + (7 - 6)x3 = 1 - 2
2x2 + x3 = -1

NEW MATRIX:

2x1 +  x2 + 3x3 = 1
       2x2 +  x3 = -1
2x1 + 5x2 + 9x3 = 3

NOW LET'S GET A ZERO in place of a31: multiply equation 1 by
a31/a11 = 2/2 = 1 and subtract it from equation 3:

(2 - 2)x1 + (5 - 1)x2 + (9 - 3)x3 = 3 - 1
4x2 + 6x3 = 2
Solution

2x1 +  x2 + 3x3 = 1
       2x2 +  x3 = -1
       4x2 + 6x3 = 2

Continue the computation by multiplying the second equation
by a32'/a22' = 4/2 = 2 and subtracting it from the third
equation of the new matrix.
Solution
2x1 +  x2 + 3x3 = 1
       2x2 +  x3 = -1
             4x3 = 4

THIS DERIVATION OF AN UPPER TRIANGULAR MATRIX
IS CALLED THE FORWARD ELIMINATION PROCESS
Solution
From the system we immediately calculate:

x3 = 4/4 = 1

Continue to back substitute:

x2 = (-1 - (1)(1)) / 2 = -1

x1 = (1 - (-1)(1) - (3)(1)) / 2 = -1/2

THIS SERIES OF STEPS IS THE BACK SUBSTITUTION
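The forward elimination and back substitution steps above can be sketched as one small routine (naive, no pivoting; assumes NumPy):

```python
import numpy as np

def gauss_eliminate(A, b):
    """Naive Gauss elimination: forward elimination to an upper
    triangular system, then back substitution. No pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]      # a_ik / pivot element
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # Back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2, 1, 3], [4, 4, 7], [2, 5, 9]])
b = np.array([1, 1, 3])
print(gauss_eliminate(A, b))   # x1 = -0.5, x2 = -1, x3 = 1
```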
Pitfalls of the Elimination Method
• Division by zero
• Round off errors
• magnitude of the pivot element is small compared
to other elements
• Ill conditioned systems
Pivoting
• Partial pivoting
• rows are switched so that the pivot element is not
zero
• rows are switched so that the largest element is
the pivot element
• Complete pivoting
• columns as well as rows are searched for the
largest element and switched
• rarely used because switching columns changes
the order of the x’s adding unjustified complexity to
the computer program
For example
Pivoting is used here to avoid
division by zero
        2x2 + 3x3 =  8
4x1 + 6x2 + 7x3 = -3
2x1 + 6x2 +  x3 =  5

Since a11 = 0, switch the first equation with the equation below it that has the largest leading coefficient:

4x1 + 6x2 + 7x3 = -3
        2x2 + 3x3 =  8
2x1 + 6x2 +  x3 =  5
Another Improvement: Scaling
• Minimizes round-off errors for cases where
some of the equations in a system have
much larger coefficients than others
• In engineering practice, this is often due to
the widely different units used in the
development of the simultaneous equations
• As long as each equation is consistent, the
system will be technically correct and
solvable
Use Gauss Elimination to solve the following set
of linear equations. Employ partial pivoting when
necessary.
        3x2 - 13x3 = -50
2x1 -  6x2 +   x3 =  45
4x1        +  8x3 =   4
Example (solution in notes)
First write in matrix form, employing the shorthand presented in class:

| 0  3 -13 | -50 |
| 2 -6   1 |  45 |
| 4  0   8 |   4 |

With a11 = 0 we will clearly run into problems of division by zero. Use partial pivoting.
Solution
| 0  3 -13 | -50 |     Pivot with the equation
| 2 -6   1 |  45 |     having the largest |an1|
| 4  0   8 |   4 |

| 4  0   8 |   4 |
| 2 -6   1 |  45 |
| 0  3 -13 | -50 |

Begin developing the upper triangular matrix:
subtract (2/4) x row 1 from row 2:

| 4  0   8 |   4 |
| 0 -6  -3 |  43 |
| 0  3 -13 | -50 |
Subtract (3/-6) x row 2 from row 3:

| 4  0    8   |   4   |
| 0 -6   -3   |  43   |
| 0  0  -14.5 | -28.5 |

Back substitute:

x3 = -28.5 / -14.5 = 1.966

x2 = (43 + 3(1.966)) / (-6) = -8.149

x1 = (4 - 8(1.966)) / 4 = -2.931

CHECK (first equation): 3(-8.149) - 13(1.966) = -50.0    okay

...end of problem
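The same elimination with the partial-pivoting row swap can be sketched as follows (assumes NumPy):

```python
import numpy as np

def gauss_pivot(A, b):
    """Gauss elimination with partial pivoting: before eliminating
    column k, swap in the row with the largest |a_ik|."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # row with the largest pivot
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[0, 3, -13], [2, -6, 1], [4, 0, 8]])
b = np.array([-50, 45, 4])
print(gauss_pivot(A, b))   # x1 ≈ -2.931, x2 ≈ -8.149, x3 ≈ 1.966
```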
Gauss-Jordan
• Variation of Gauss elimination
• Primary motive for introducing this method is
that it provides a simple and convenient
method for computing the matrix inverse.
• When an unknown is eliminated, it is
eliminated from all other equations, rather
than just the subsequent one
• All rows are normalized by dividing them by
their pivot elements
• Elimination step results in an identity matrix
rather than an upper triangular (UT) matrix
Gauss elimination:                Gauss-Jordan:

      | a11 a12 a13 |                  | 1 0 0 0 |
[A] = |  0  a22 a23 |            [A] = | 0 1 0 0 |
      |  0   0  a33 |                  | 0 0 1 0 |
                                       | 0 0 0 1 |
Gauss-Jordan
Graphical depiction of Gauss-Jordan
| a11 a12 a13 | c1   |        | 1 0 0 | c1(n) |
| a21 a22 a23 | c2   |   →    | 0 1 0 | c2(n) |
| a31 a32 a33 | c3   |        | 0 0 1 | c3(n) |

x1 = c1(n)
x2 = c2(n)
x3 = c3(n)

(The superscript (n) indicates the element has been changed n times from its original value.)
Matrix Inversion
• [A][A]-1 = [A]-1[A] = [I]
• One application of the inverse is to solve
several systems differing only by {c}:
• [A]{x} = {c}
• [A]-1[A]{x} = [A]-1{c}
• [I]{x} = {x} = [A]-1{c}
• One quick method to compute the inverse is
to augment [A] with [I] instead of {c}
Graphical Depiction of the Gauss-Jordan
Method with Matrix Inversion
        | a11 a12 a13 | 1 0 0 |
[A | I] | a21 a22 a23 | 0 1 0 |
        | a31 a32 a33 | 0 0 1 |

after Gauss-Jordan elimination:

           | 1 0 0 | a11(-1) a12(-1) a13(-1) |
[I | A-1]  | 0 1 0 | a21(-1) a22(-1) a23(-1) |
           | 0 0 1 | a31(-1) a32(-1) a33(-1) |

Note: the superscript "-1" denotes that the original values
have been converted to the matrix inverse, not 1/aij.
WHEN IS THE INVERSE MATRIX USEFUL?
CONSIDER STIMULUS-RESPONSE
CALCULATIONS THAT ARE SO COMMON IN
ENGINEERING.
Stimulus-Response
Computations
• Conservation Laws
mass
force
heat
momentum
• We considered the conservation
of force in the earlier example of
a truss
• [A]{x}={c}
• [interactions]{response}={stimuli}
• Superposition
• if a system is subjected to several different stimuli, the
response can be computed individually and the
results summed to obtain a total response
• Proportionality
• multiplying the stimuli by a quantity results in the
response to those stimuli being multiplied by the
same quantity
• These concepts are inherent in the scaling of terms
during the inversion of the matrix
Stimulus-Response Computations
Example
Given the following, determine {x} for the
two different loads {c}
{x} = [A]-1{c}

          |  2 -1  1 |
[A]-1  =  | -2  6  3 |
          | -3  1 -4 |

{c1}T = {1 2 3}
{c2}T = {4 -7 1}
Solution
{x} = [A]-1{c}, with

          |  2 -1  1 |
[A]-1  =  | -2  6  3 |
          | -3  1 -4 |
{c}T
= {1 2 3}
x1 = (2)(1) + (-1)(2) + (1)(3) = 3
x2 = (-2)(1) + (6)(2) + (3)(3) = 19
x3 = (-3)(1) + (1)(2) + (-4)(3) = -13
{c}T = {4 -7 1}
x1 = (2)(4) + (-1)(-7) + (1)(1)=16
x2 = (-2)(4) + (6)(-7) + (3)(1) = -47
x3 = (-3)(4) + (1)(-7) + (-4)(1) = -23
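Both load cases reduce to one matrix-vector product each. A sketch assuming NumPy:

```python
import numpy as np

# The inverse given on the slide; each load case is [A]^-1 {c}
A_inv = np.array([[ 2, -1,  1],
                  [-2,  6,  3],
                  [-3,  1, -4]])

for c in ([1, 2, 3], [4, -7, 1]):
    print(A_inv @ np.array(c))   # the response {x} for each stimulus {c}
```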
Gauss Seidel Method
• An iterative approach
• Continue until we converge within some pre-
specified tolerance of error
• Round off is no longer an issue, since you control
the level of error that is acceptable
• Fundamentally different from Gauss elimination.
This is an approximate, iterative method
particularly good for large numbers of equations
Gauss-Seidel Method
• If the diagonal elements are all nonzero, the
first equation can be solved for x1
• Solve the second equation for x2, etc.
x1 = (c1 - a12x2 - a13x3 - ... - a1nxn) / a11
To assure that you understand this, write the equation for x2
x1 = (c1 - a12x2 - a13x3 - ... - a1nxn) / a11

x2 = (c2 - a21x1 - a23x3 - ... - a2nxn) / a22

x3 = (c3 - a31x1 - a32x2 - ... - a3nxn) / a33

...

xn = (cn - an1x1 - an2x2 - ... - an,n-1xn-1) / ann
Gauss-Seidel Method
• Start the solution process by guessing
values of x
• A simple way to obtain initial guesses is
to assume that they are all zero
• Calculate new values of xi starting with
• x1 = c1/a11
• Progressively substitute through the
equations
• Repeat until tolerance is reached
Start with x2 = x3 = 0. Then:

x1 = (c1 - a12x2 - a13x3) / a11 = c1 / a11 = x1'

x2 = (c2 - a21x1' - a23x3) / a22 = x2'

x3 = (c3 - a31x1' - a32x2') / a33 = x3'
Gauss-Seidel Method
Example
Given the following augmented matrix, complete one iteration of the Gauss-Seidel method.

| 2 -3  1 |  2 |
| 4  1  2 | -2 |
| 3  2  1 |  1 |
x1 = (c1 - a12x2 - a13x3) / a11 = (2 - (-3)(0) - (1)(0)) / 2 = 2/2 = 1

x2 = (c2 - a21x1' - a23x3) / a22 = (-2 - (4)(1) - (2)(0)) / 1 = -6

x3 = (c3 - a31x1' - a32x2') / a33 = (1 - (3)(1) - (2)(-6)) / 1 = 10

GAUSS SEIDEL
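One Gauss-Seidel sweep can be sketched as follows (the matrix entries are as reconstructed above; assumes NumPy):

```python
import numpy as np

A = np.array([[2.0, -3.0, 1.0],
              [4.0,  1.0, 2.0],
              [3.0,  2.0, 1.0]])
c = np.array([2.0, -2.0, 1.0])

def gauss_seidel_sweep(A, c, x):
    """One Gauss-Seidel pass: each new x_i is used immediately."""
    x = x.copy()
    for i in range(len(c)):
        s = A[i] @ x - A[i, i] * x[i]     # sum of off-diagonal terms
        x[i] = (c[i] - s) / A[i, i]
    return x

x = gauss_seidel_sweep(A, c, np.zeros(3))
print(x)   # after one sweep: x1 = 1, x2 = -6, x3 = 10
```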
Jacobi Iteration
• Iterative like Gauss Seidel
• Gauss-Seidel immediately uses the
value of xi in the next equation to predict
xi+1
• Jacobi calculates the whole new set of xi values
from the previous iteration's values only
FIRST ITERATION

Gauss-Seidel                              Jacobi
x1' = (c1 - a12x2 - a13x3) / a11          x1' = (c1 - a12x2 - a13x3) / a11
x2' = (c2 - a21x1' - a23x3) / a22         x2' = (c2 - a21x1 - a23x3) / a22
x3' = (c3 - a31x1' - a32x2') / a33        x3' = (c3 - a31x1 - a32x2) / a33

SECOND ITERATION

Gauss-Seidel substitutes each new value immediately; Jacobi computes the entire new set from the completed previous iteration, then repeats.
Graphical depiction of difference between Gauss-Seidel and Jacobi
| 2 -3  1 |  2 |
| 4  1  2 | -2 |
| 3  2  1 |  1 |
Note: We worked the Gauss Seidel method earlier
Given the following augmented matrix, complete
one iteration of the Gauss Seidel method and
the Jacobi method.
Example
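A one-sweep Jacobi comparison for the same system (entries as reconstructed above; assumes NumPy). Each new value uses only the old vector, so the first Jacobi iterate differs from the Gauss-Seidel one:

```python
import numpy as np

A = np.array([[2.0, -3.0, 1.0],
              [4.0,  1.0, 2.0],
              [3.0,  2.0, 1.0]])
c = np.array([2.0, -2.0, 1.0])

def jacobi_sweep(A, c, x):
    """One Jacobi pass: every new x_i comes from the OLD vector x."""
    D = np.diag(A)
    return (c - A @ x + D * x) / D

x_old = np.zeros(3)
print(jacobi_sweep(A, c, x_old))   # x1 = 1, x2 = -2, x3 = 1
```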
Gauss-Seidel Method
convergence criterion
εa,i = | (xi^j - xi^(j-1)) / xi^j | × 100% < εs
as in previous iterative procedures in finding the roots,
we consider the present and previous estimates.
As with the open methods we studied previously with one
point iterations
1. The method can diverge
2. May converge very slowly
Class question:
where do these
formulas come from?
Convergence criteria for two
linear equations
u(x1, x2) = c1/a11 - (a12/a11) x2
v(x1, x2) = c2/a22 - (a21/a22) x1

Consider the partial derivatives of u and v:

∂u/∂x1 = 0            ∂u/∂x2 = -a12/a11
∂v/∂x1 = -a21/a22     ∂v/∂x2 = 0
Convergence criteria for two linear
equations cont.
|∂u/∂x| + |∂v/∂x| < 1
|∂u/∂y| + |∂v/∂y| < 1

These criteria for convergence were presented earlier in the class material for nonlinear equations. Noting that x = x1 and y = x2, substitute the partial derivatives above:
Convergence criteria for two linear
equations cont.
|a21/a22| < 1        |a12/a11| < 1
This is stating that the absolute values of the slopes must
be less than unity to ensure convergence.
Extended to n equations:

|aii| > Σ |aij|   (sum over j = 1 to n, excluding j = i)

Convergence criteria for two linear equations cont.

This condition is sufficient but not necessary for convergence.
When met, the matrix is said to be diagonally dominant.
Diagonal Dominance
|  1   0.2  0.4 | | x1 |   | 9 |
| 0.2  0.8  0.4 | | x2 | = | 3 |
| 0.1  0.5  0.9 | | x3 |   | 4 |

To determine whether a matrix is diagonally dominant you need
to evaluate the values on the diagonal.
Diagonal Dominance

Now, check whether these numbers satisfy the following rule for each
row (note: each row represents a unique equation):

|aii| > Σ |aij|   (j = 1 to n, excluding j = i)

Row 1: |1|   > |0.2| + |0.4| = 0.6
Row 2: |0.8| > |0.2| + |0.4| = 0.6
Row 3: |0.9| > |0.1| + |0.5| = 0.6

Each row satisfies the rule, so the matrix is diagonally dominant.
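The row-by-row test can be sketched as a small helper (matrix entries as recovered above; since the test uses magnitudes only, any garbled signs do not affect the result):

```python
import numpy as np

def is_diagonally_dominant(A):
    """|a_ii| > sum of |a_ij| over the rest of row i, for every row."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off = A.sum(axis=1) - diag
    return bool(np.all(diag > off))

A = [[1.0, 0.2, 0.4],
     [0.2, 0.8, 0.4],
     [0.1, 0.5, 0.9]]
print(is_diagonally_dominant(A))   # → True
```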
Review the concepts of divergence and convergence by graphically
illustrating Gauss-Seidel for two linear equations in the (x1, x2) plane:

v: 11x1 -  9x2 = 99
u: 11x1 + 13x2 = 286

In this order (solve v for x1, then u for x2) the iterates move
toward the intersection. Note: we are converging on the solution.
CONVERGENCE

Change the order of the equations, i.e. change the direction of the
initial estimates:

u: 11x1 + 13x2 = 286
v: 11x1 -  9x2 = 99

Now the iterates move away from the solution.
DIVERGENCE
Improvement of Convergence
Using Relaxation
This is a modification that will enhance slow convergence.
After each new value of x is computed, calculate a new value
based on a weighted average of the present and previous
iteration.
xi_new = λ xi_new + (1 - λ) xi_old
Improvement of Convergence Using
Relaxation
• if λ = 1, the new value is unmodified
• if 0 < λ < 1, underrelaxation
  • nonconvergent systems may converge
  • hastens convergence by dampening out oscillations
• if 1 < λ < 2, overrelaxation
  • extra weight is placed on the present value
  • assumes the new value is moving toward the correct solution, but too slowly

xi_new = λ xi_new + (1 - λ) xi_old
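Relaxation drops into the Gauss-Seidel loop as a weighted average. A sketch on a small, diagonally dominant system invented here for illustration (assumes NumPy; `lam` is the weighting factor λ):

```python
import numpy as np

def gauss_seidel_relaxed(A, c, lam=1.2, tol=1e-10, max_iter=200):
    """Gauss-Seidel with relaxation: x_new = lam*x_gs + (1-lam)*x_old."""
    x = np.zeros(len(c))
    for _ in range(max_iter):
        x_prev = x.copy()
        for i in range(len(c)):
            gs = (c[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
            x[i] = lam * gs + (1 - lam) * x[i]   # weighted average
        if np.max(np.abs(x - x_prev)) < tol:
            break
    return x

# Hypothetical diagonally dominant system: 4x1 + x2 = 9, 2x1 + 5x2 = 12
A = np.array([[4.0, 1.0], [2.0, 5.0]])
c = np.array([9.0, 12.0])
x = gauss_seidel_relaxed(A, c)
print(x)
```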

Matrix

  • 1.
    MATRIX METHODS SYSTEMS OFLINEAR EQUATIONS ENGR 351 Numerical Methods for Engineers Southern Illinois University Carbondale College of Engineering Dr. L.R. Chevalier Dr. B.A. DeVantier
  • 2.
    Copyright© 2003 byLizette R. Chevalier and Bruce A. DeVantier Permission is granted to students at Southern Illinois University at Carbondale to make one copy of this material for use in the class ENGR 351, Numerical Methods for Engineers. No other permission is granted. All other rights are reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the copyright owner.
  • 3.
    Systems of LinearAlgebraic Equations Specific Study Objectives • Understand the graphic interpretation of ill-conditioned systems and how it relates to the determinant • Be familiar with terminology: forward elimination, back substitution, pivot equations and pivot coefficient
  • 4.
    • Apply matrixinversion to evaluate stimulus- response computations in engineering • Understand why the Gauss-Seidel method is particularly well-suited for large sparse systems of equations • Know how to assess diagonal dominance of a system of equations and how it relates to whether the system can be solved with the Gauss-Seidel method Specific Study Objectives
  • 5.
    • Understand therationale behind relaxation and how to apply this technique Specific Study Objectives
  • 6.
    How to representa system of linear equations as a matrix [A]{x} = {c} where {x} and {c} are both column vectors
  • 7.
  • 8.
    Practical application • Considera problem in structural engineering • Find the forces and reactions associated with a statically determinant truss hinge: transmits both vertical and horizontal forces at the surface roller: transmits vertical forces 30 90 60
  • 9.
    1000 kg 30 90 60 F1 H2 V2 V3 2 3 1 FREE BODYDIAGRAM F F H v = = ∑ ∑ 0 0 F2 F3
  • 10.
    Node 1 F1,V F1,H F3 F1 6030 FF F F F F F F F F F F H H V V = = − + + = = − − + − + = − − = − ∑ ∑ 0 30 60 0 30 60 30 60 0 30 60 1000 1 3 1 1 3 1 1 3 1 3 cos cos sin sin cos cos sin sin , ,        
  • 11.
    F H FF F V F H V = = + + = = + ∑ ∑ 0 30 0 30 2 2 1 2 1 cos sin   Node 2 F2 F1 30 H2 V2
  • 12.
    F F F FF V H V = = − − = = + ∑ ∑ 0 60 0 60 3 2 3 3 cos sin   Node 3 F2 F3 60 V3
  • 13.
  • 14.
    F1 F2 F3H2 V2 V3 1 2 3 4 5 6 -cos30 0 cos60 0 0 0 -sin30 0 -sin60 0 0 0 cos30 1 0 1 0 0 sin30 0 0 0 1 0 0 -1 -cos60 0 0 0 0 0 sin60 0 0 1 0 -1000 0 0 0 0 Do some book keeping
  • 15.
    This is thebasis for your matrices and the equation [A]{x}={c} − − − − −                                     = −                   0866 0 05 0 0 0 05 0 0866 0 0 0 0 866 1 0 1 0 0 05 0 0 0 1 0 0 1 05 0 0 0 0 0 0866 0 0 1 0 1000 0 0 0 0 1 2 3 2 2 3 . . . . . . . . F F F H V V
  • 16.
    System of LinearEquations • We have focused our last lectures on finding a value of x that satisfied a single equation • f(x) = 0 • Now we will deal with the case of determining the values of x1, x2, .....xn, that simultaneously satisfy a set of equations
  • 17.
    System of LinearEquations • Simultaneous equations • f1(x1, x2, .....xn) = 0 • f2(x1, x2, .....xn) = 0 • ............. • fn(x1, x2, .....xn) = 0 • Methods will be for linear equations • a11x1 + a12x2 +...... a1nxn =c1 • a21x1 + a22x2 +...... a2nxn =c2 • ..........
  • 18.
    Mathematical Background Matrix Notation •a horizontal set of elements is called a row • a vertical set is called a column • first subscript refers to the row number • second subscript refers to column number [ ]A a a a a a a a a a a a a n n m m m mn =             11 12 13 1 21 22 23 2 1 2 3 ... ... . . . . ...
  • 19.
  • 20.
  • 21.
    [ ]A a aa a a a a a a a a a n n m m m mn =             11 12 13 1 21 22 23 2 1 2 3 ... ... . . . . ... row 2 column 3 Note the consistent scheme with subscripts denoting row,column
  • 22.
Types of Matrices
Row vector: m = 1
[B] = [ b1  b2 ... bn ]
Column vector: n = 1
      [ c1 ]
      [ c2 ]
[C] = [  . ]
      [ cm ]
Square matrix: m = n
      [ a11  a12  a13 ]
[A] = [ a21  a22  a23 ]
      [ a31  a32  a33 ]
  • 23.
    • Symmetric matrix •Diagonal matrix • Identity matrix • Inverse of a matrix • Transpose of a matrix • Upper triangular matrix • Lower triangular matrix • Banded matrix Definitions
  • 24.
Symmetric Matrix
aij = aji for all i's and j's
      [ 5  1  2 ]
[A] = [ 1  3  7 ]
      [ 2  7  8 ]
Does a23 = a32? Yes. Check the other elements on your own.
  • 25.
Diagonal Matrix
A square matrix where all elements off the main diagonal are zero
      [ a11  0    0    0   ]
[A] = [ 0    a22  0    0   ]
      [ 0    0    a33  0   ]
      [ 0    0    0    a44 ]
  • 26.
Identity Matrix
A diagonal matrix where all elements on the main diagonal are equal to 1
      [ 1  0  0  0 ]
[A] = [ 0  1  0  0 ]
      [ 0  0  1  0 ]
      [ 0  0  0  1 ]
The symbol [I] is used to denote the identity matrix.
  • 27.
Inverse of [A]
[A][A]-1 = [A]-1[A] = [I]
  • 28.
Transpose of [A]
        [ a11  a21 ... am1 ]
[A]t =  [ a12  a22 ... am2 ]
        [  .    .       .  ]
        [ a1n  a2n ... amn ]
  • 29.
Upper Triangular Matrix
Elements below the main diagonal are zero
      [ a11  a12  a13 ]
[A] = [ 0    a22  a23 ]
      [ 0    0    a33 ]
  • 30.
Lower Triangular Matrix
All elements above the main diagonal are zero
      [ 5  0  0 ]
[A] = [ 1  3  0 ]
      [ 2  7  8 ]
  • 31.
Banded Matrix
All elements are zero with the exception of a band centered on the main diagonal
      [ a11  a12  0    0   ]
[A] = [ a21  a22  a23  0   ]
      [ 0    a32  a33  a34 ]
      [ 0    0    a43  a44 ]
  • 32.
Matrix Operating Rules
• Addition/subtraction
• add/subtract corresponding terms
• aij ± bij = cij
• Addition/subtraction is commutative
• [A] + [B] = [B] + [A]
• Addition/subtraction is associative
• [A] + ([B] + [C]) = ([A] + [B]) + [C]
  • 33.
Matrix Operating Rules
• Multiplication of a matrix [A] by a scalar g is obtained by multiplying every element of [A] by g
              [ ga11  ga12 ... ga1n ]
[B] = g[A] =  [ ga21  ga22 ... ga2n ]
              [  .     .        .   ]
              [ gam1  gam2 ... gamn ]
  • 34.
Matrix Operating Rules
• The product of two matrices is represented as [C] = [A][B]
• n = column dimension of [A]
• n = row dimension of [B]
cij = Σ (k = 1 to n) aik bkj
  • 35.
    [A] m xn [B] n x k = [C] m x k interior dimensions must be equal exterior dimensions conform to dimension of resulting matrix Simple way to check whether matrix multiplication is possible
  • 36.
Recall the equation presented for matrix multiplication
• The product of two matrices is represented as [C] = [A][B]
• n = column dimension of [A]
• n = row dimension of [B]
cij = Σ (k = 1 to n) aik bkj
  • 37.
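The summation formula can be written out directly as a triple loop. A minimal sketch (the 2 x 3 and 3 x 2 example matrices are illustrative, not from the slides):

```python
# Matrix product via c_ij = sum over k of a_ik * b_kj.
# Interior dimensions must agree: [A] is m x n, [B] is n x k.
def matmul(A, B):
    m, n = len(A), len(A[0])
    n2, k = len(B), len(B[0])
    assert n == n2, "interior dimensions must be equal"
    C = [[0.0] * k for _ in range(m)]
    for i in range(m):
        for j in range(k):
            for kk in range(n):          # sum across the interior dimension
                C[i][j] += A[i][kk] * B[kk][j]
    return C

A = [[1, 2, 3],
     [4, 5, 6]]            # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]             # 3 x 2
C = matmul(A, B)           # 2 x 2 result: [[58, 64], [139, 154]]
```

Note the exterior dimensions (2 and 2) set the size of the result, exactly as the dimension-check slide describes.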
Example
Determine [C] given [A][B] = [C], for the two 3 x 3 matrices [A] and [B] given on the slide.
  • 38.
Matrix multiplication
• If the dimensions are suitable, matrix multiplication is associative
• ([A][B])[C] = [A]([B][C])
• If the dimensions are suitable, matrix multiplication is distributive
• ([A] + [B])[C] = [A][C] + [B][C]
• Multiplication is generally not commutative
• [A][B] is not equal to [B][A]
  • 39.
Determinants
Denoted as det A or |A|. For a 2 x 2 matrix:
| a  b |
| c  d | = ad - bc
  • 40.
Determinants
For a 3 x 3 matrix, expand along the first row with alternating signs (+ - +):
det A = a11(a22 a33 - a23 a32) - a12(a21 a33 - a23 a31) + a13(a21 a32 - a22 a31)
  • 41.
  • 42.
Properties of Determinants
• det A = det At
• If all entries of any row or column are zero, then det A = 0
• If two rows or two columns are identical, then det A = 0
• Note: determinants can be calculated using the MDETERM function in Excel
  • 43.
Excel Demonstration
• Excel treats matrices as arrays
• To obtain the results of multiplication, addition, and inverse operations, you hit control-shift-enter as opposed to enter.
• The resulting matrix cannot be altered... let's see an example using Excel in class: matrix.xls
  • 44.
Matrix Methods
• Cramer's Rule
• Gauss elimination
• Matrix inversion
• Gauss-Seidel/Jacobi
      [ a11  a12  a13 ]
[A] = [ a21  a22  a23 ]
      [ a31  a32  a33 ]
  • 45.
Graphical Method
2 equations, 2 unknowns:
a11 x1 + a12 x2 = c1
a21 x1 + a22 x2 = c2
Solve each equation for x2 as a straight line:
x2 = -(a11/a12) x1 + c1/a12
x2 = -(a21/a22) x1 + c2/a22
The intersection (x1, x2) of the two lines is the solution.
  • 46.
3x1 + 2x2 = 18  →  x2 = -(3/2) x1 + 9
  • 47.
-x1 + 2x2 = 2  →  x2 = (1/2) x1 + 1
  • 48.
3x1 + 2x2 = 18  →  x2 = -(3/2) x1 + 9
-x1 + 2x2 = 2   →  x2 = (1/2) x1 + 1
The two lines intersect at (4, 3).
Check: 3(4) + 2(3) = 12 + 6 = 18
  • 49.
Special Cases
• No solution
• Infinite solutions
• Ill-conditioned
  • 50.
a) No solution: the two lines have the same slope and never intersect
b) Infinite solutions: the two equations describe the same line, e.g. -1/2 x1 + x2 = 1 and -x1 + 2x2 = 2
c) Ill-conditioned: the slopes are so close that the point of intersection is difficult to detect visually
  • 51.
Ill-Conditioned Systems
• If the determinant is zero, the slopes are identical
a11 x1 + a12 x2 = c1
a21 x1 + a22 x2 = c2
Rearrange these equations so that we have an alternative version in the form of a straight line, i.e. x2 = (slope) x1 + intercept
  • 52.
Ill-Conditioned Systems
x2 = -(a11/a12) x1 + c1/a12
x2 = -(a21/a22) x1 + c2/a22
If the slopes are nearly equal (ill-conditioned):
a11/a12 ≅ a21/a22
a11 a22 ≅ a21 a12
a11 a22 - a21 a12 ≅ 0
Isn't this the determinant?
det A = a11 a22 - a21 a12
  • 53.
Ill-Conditioned Systems
If the determinant is zero, the slopes are equal. This can mean:
- no solution
- an infinite number of solutions
If the determinant is close to zero, the system is ill-conditioned.
So it seems that we should check the determinant of a system before any further calculations are done. Let's try an example.
  • 54.
Example
Determine whether the following system is ill-conditioned.
[ 37.2  4.7 ] {x1}   { 22 }
[ 19.2  2.5 ] {x2} = { 12 }
  • 55.
Solution
det A = (37.2)(2.5) - (4.7)(19.2) = 93 - 90.24 = 2.76
What does this tell us? Is this close to zero? Hard to say. If we scale the matrix first, i.e. divide each row by its largest element, we can get a better sense of things.
  • 56.
Solution
Scaling each row by its largest coefficient gives
[ 1  0.126 ]
[ 1  0.130 ]
The scaled slopes differ by only 0.130 - 0.126 = 0.004. This is further justified when we consider a graph of the two functions: the slopes are clearly nearly equal.
  • 57.
Another Check
• Scale the matrix of coefficients, [A], so that the largest element in each row is 1. If there are elements of [A]-1 that are several orders of magnitude greater than one, it is likely that the system is ill-conditioned.
• Multiply the inverse by the original coefficient matrix. If the result is not close to the identity matrix, the system is ill-conditioned.
• Invert the inverted matrix. If it is not close to the original coefficient matrix, the system is ill-conditioned. We will consider how to obtain an inverted matrix later.
  • 58.
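These checks are easy to automate. A sketch with NumPy, using the 37.2/4.7/19.2/2.5 system from the example (the "close to zero" threshold is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[37.2, 4.7],
              [19.2, 2.5]])

# Raw determinant: hard to interpret on its own
det = np.linalg.det(A)                       # about 2.76

# Scale each row by its largest element; a determinant near zero
# for the scaled matrix signals an ill-conditioned system
scaled = A / np.abs(A).max(axis=1, keepdims=True)
det_scaled = np.linalg.det(scaled)           # about 0.004

# Another check: [A]^-1 [A] should be close to the identity matrix
Ainv = np.linalg.inv(A)
residual = np.abs(Ainv @ A - np.eye(2)).max()
```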
Cramer's Rule
• Not efficient for solving large numbers of linear equations
• Useful for explaining some inherent problems associated with solving linear equations
[A]{x} = {b}
[ a11  a12  a13 ] {x1}   {b1}
[ a21  a22  a23 ] {x2} = {b2}
[ a31  a32  a33 ] {x3}   {b3}
  • 59.
Cramer's Rule
to solve for xi - place {b} in the ith column
             | b1  a12  a13 |
x1 = (1/|A|) | b2  a22  a23 |
             | b3  a32  a33 |
  • 60.
Cramer's Rule
to solve for xi - place {b} in the ith column
             | b1  a12  a13 |              | a11  b1  a13 |              | a11  a12  b1 |
x1 = (1/|A|) | b2  a22  a23 |  x2 = (1/|A|) | a21  b2  a23 |  x3 = (1/|A|) | a21  a22  b2 |
             | b3  a32  a33 |              | a31  b3  a33 |              | a31  a32  b3 |
  • 61.
EXAMPLE: Use of Cramer's Rule
2x1 - 3x2 = 5
 x1 +  x2 = 5
[ 2  -3 ] {x1}   {5}
[ 1   1 ] {x2} = {5}
  • 62.
Solution
|A| = (2)(1) - (-3)(1) = 5
x1 = (1/5) | 5 -3 ; 5 1 | = [(5)(1) - (-3)(5)] / 5 = 20/5 = 4
x2 = (1/5) | 2 5 ; 1 5 | = [(2)(5) - (5)(1)] / 5 = 5/5 = 1
  • 63.
Elimination of Unknowns (algebraic approach)
a11 x1 + a12 x2 = c1
a21 x1 + a22 x2 = c2
Multiply the first equation by a21 and the second by a11:
a21 a11 x1 + a21 a12 x2 = a21 c1
a11 a21 x1 + a11 a22 x2 = a11 c2
SUBTRACT:
(a11 a22 - a21 a12) x2 = a11 c2 - a21 c1
  • 64.
  • 65.
Gauss Elimination
• One of the earliest methods developed for solving simultaneous equations
• An important algorithm in use today
• Involves combining equations in order to eliminate unknowns and create an upper triangular matrix
• Progressively back substitute to find each unknown
  • 66.
Two Phases of Gauss Elimination
Forward Elimination:
[ a11  a12  a13  | c1 ]      [ a11  a12   a13   | c1   ]
[ a21  a22  a23  | c2 ]  →   [ 0    a22'  a23'  | c2'  ]
[ a31  a32  a33  | c3 ]      [ 0    0     a33'' | c3'' ]
Note: the prime indicates the number of times the element has changed from the original value.
  • 67.
Two Phases of Gauss Elimination
Back substitution, starting from the last row of
[ a11  a12   a13   | c1   ]
[ 0    a22'  a23'  | c2'  ]
[ 0    0     a33'' | c3'' ]
x3 = c3'' / a33''
x2 = (c2' - a23' x3) / a22'
x1 = (c1 - a12 x2 - a13 x3) / a11
  • 68.
Rules
• Any equation can be multiplied (or divided) by a nonzero scalar
• Any equation can be added to (or subtracted from) another equation
• The positions of any two equations in the set can be interchanged.
  • 69.
EXAMPLE
Perform Gauss elimination on the following system.
2x1 +  x2 + 3x3 = 1
4x1 + 4x2 + 7x3 = 1
2x1 + 5x2 + 9x3 = 3
  • 70.
Solution
2x1 +  x2 + 3x3 = 1
4x1 + 4x2 + 7x3 = 1
2x1 + 5x2 + 9x3 = 3
Multiply the first equation by a21/a11 = 4/2 = 2. Note: a11 is called the pivot element.
4x1 + 2x2 + 6x3 = 2
Subtract the revised first equation from the second equation:
(4 - 4)x1 + (4 - 2)x2 + (7 - 6)x3 = 1 - 2
2x2 + x3 = -1
NEW MATRIX:
2x1 +  x2 + 3x3 = 1
       2x2 +  x3 = -1
2x1 + 5x2 + 9x3 = 3
Now get a zero in the first column of the third equation. Multiply equation 1 by a31/a11 = 2/2 = 1 and subtract it from equation 3:
(2 - 2)x1 + (5 - 1)x2 + (9 - 3)x3 = 3 - 1
4x2 + 6x3 = 2
Following the same rationale, multiply the second equation by a32'/a22' = 4/2 = 2 and subtract it from the third equation:
(4 - 4)x2 + (6 - 2)x3 = 2 - (-2)
4x3 = 4
The system is now triangular:
2x1 + x2 + 3x3 = 1
      2x2 +  x3 = -1
             4x3 = 4
THIS DERIVATION OF AN UPPER TRIANGULAR MATRIX IS CALLED THE FORWARD ELIMINATION PROCESS
From the last equation we immediately calculate x3 = 4/4 = 1. Continue to back substitute:
x2 = (-1 - (1)(1)) / 2 = -1
x1 = (1 - (-1) - 3(1)) / 2 = -1/2
THIS SERIES OF STEPS IS THE BACK SUBSTITUTION
  • 81.
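The two phases translate directly into code. A minimal sketch of naive Gauss elimination (no pivoting), checked against the worked example above:

```python
# Naive Gauss elimination with back substitution.
def gauss_eliminate(A, c):
    n = len(A)
    A = [row[:] for row in A]          # work on copies
    c = c[:]
    # Forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]     # e.g. a21/a11 = 4/2 = 2
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            c[i] -= factor * c[k]
    # Back substitution, from the last row upward
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / A[i][i]
    return x

x = gauss_eliminate([[2, 1, 3], [4, 4, 7], [2, 5, 9]], [1, 1, 3])
# x is [-0.5, -1.0, 1.0], matching the hand calculation
```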
Pitfalls of the Elimination Method
• Division by zero
• Round-off errors
• the magnitude of the pivot element is small compared to the other elements
• Ill-conditioned systems
  • 82.
Pivoting
• Partial pivoting
• rows are switched so that the pivot element is not zero
• rows are switched so that the largest element is the pivot element
• Complete pivoting
• columns as well as rows are searched for the largest element and switched
• rarely used, because switching columns changes the order of the x's, adding unjustified complexity to the computer program
  • 83.
For example, pivoting is used here to avoid division by zero (a11 = 0):
        2x2 + 3x3 =  8
4x1 + 6x2 + 7x3 = -3
 x1 + 2x2 + 6x3 =  5
Interchange the first two rows so the pivot element is nonzero (and largest):
4x1 + 6x2 + 7x3 = -3
        2x2 + 3x3 =  8
 x1 + 2x2 + 6x3 =  5
  • 84.
Another Improvement: Scaling
• Minimizes round-off errors for cases where some of the equations in a system have much larger coefficients than others
• In engineering practice, this is often due to the widely different units used in developing the simultaneous equations
• As long as each equation is consistent, the system will be technically correct and solvable
  • 85.
Example (solution in notes)
Use Gauss elimination to solve the following set of linear equations. Employ partial pivoting when necessary.
        3x2 - 13x3 = -50
2x1 - 6x2 +   x3 =  45
4x1        + 8x3 =   4
  • 86.
Solution
First write in matrix form, employing the shorthand presented in class:
[ 0   3  -13 | -50 ]
[ 2  -6    1 |  45 ]
[ 4   0    8 |   4 ]
We will clearly run into problems of division by zero. Use partial pivoting.
  • 87.
Pivot with the equation that has the largest coefficient an1:
[ 4   0    8 |   4 ]
[ 2  -6    1 |  45 ]
[ 0   3  -13 | -50 ]
  • 88.
  • 89.
  • 90.
Forward elimination gives
[ 4   0    8 |   4 ]
[ 0  -6   -3 |  43 ]
[ 0   3  -13 | -50 ]
and then
[ 4   0     8    |   4    ]
[ 0  -6    -3    |  43    ]
[ 0   0  -14.5   | -28.5  ]
Back substitution:
x3 = -28.5 / -14.5 = 1.966
x2 = -(43 + 3(1.966)) / 6 = -8.149
x1 = (4 - 8(1.966)) / 4 = -2.931
CHECK: 3(-8.149) - 13(1.966) = -50.0  okay ...end of problem
  • 91.
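Partial pivoting adds only one step to the elimination loop: before each pass, swap in the row with the largest pivot magnitude. A sketch, applied to the example with the zero in the a11 position:

```python
# Gauss elimination with partial pivoting.
def gauss_pivot(A, c):
    n = len(A)
    A = [row[:] + [rhs] for row, rhs in zip(A, c)]   # augmented copy
    for k in range(n - 1):
        # Partial pivoting: bring up the row with the largest |a_ik|
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= factor * A[k][j]
    # Back substitution on the augmented matrix
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (A[i][n] - s) / A[i][i]
    return x

x = gauss_pivot([[0, 3, -13], [2, -6, 1], [4, 0, 8]], [-50, 45, 4])
# x is approximately [-2.931, -8.149, 1.966]
```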
Gauss-Jordan
• A variation of Gauss elimination
• The primary motive for introducing this method is that it provides a simple and convenient way to compute the matrix inverse
• When an unknown is eliminated, it is eliminated from all other equations, rather than just the subsequent ones
  • 92.
Gauss-Jordan
• All rows are normalized by dividing them by their pivot elements
• The elimination step results in an identity matrix rather than an upper triangular (UT) matrix
UT matrix:               Identity matrix:
[ a11  a12  a13 ]        [ 1  0  0  0 ]
[ 0    a22  a23 ]        [ 0  1  0  0 ]
[ 0    0    a33 ]        [ 0  0  1  0 ]
                         [ 0  0  0  1 ]
  • 93.
Graphical depiction of Gauss-Jordan
[ a11  a12  a13 | c1 ]      [ 1  0  0 | c1(n) ]
[ a21  a22  a23 | c2 ]  →   [ 0  1  0 | c2(n) ]
[ a31  a32  a33 | c3 ]      [ 0  0  1 | c3(n) ]
  • 94.
Graphical depiction of Gauss-Jordan
[ a11  a12  a13 | c1 ]      [ 1  0  0 | c1(n) ]
[ a21  a22  a23 | c2 ]  →   [ 0  1  0 | c2(n) ]
[ a31  a32  a33 | c3 ]      [ 0  0  1 | c3(n) ]
The solution is read directly from the final column:
x1 = c1(n), x2 = c2(n), x3 = c3(n)
  • 95.
Matrix Inversion
• [A][A]-1 = [A]-1[A] = [I]
• One application of the inverse is to solve several systems differing only by {c}
• [A]{x} = {c}
• [A]-1[A]{x} = [A]-1{c}
• [I]{x} = {x} = [A]-1{c}
• One quick method to compute the inverse is to augment [A] with [I] instead of {c}
  • 96.
Graphical Depiction of the Gauss-Jordan Method with Matrix Inversion
[ a11  a12  a13 | 1  0  0 ]      [ 1  0  0 | a11-1  a12-1  a13-1 ]
[ a21  a22  a23 | 0  1  0 ]  →   [ 0  1  0 | a21-1  a22-1  a23-1 ]
[ a31  a32  a33 | 0  0  1 ]      [ 0  0  1 | a31-1  a32-1  a33-1 ]
      [A | I]                              [I | A-1]
Note: the superscript "-1" denotes that the original values have been converted to elements of the matrix inverse, not 1/aij
  • 97.
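The augmented [A | I] → [I | A-1] procedure can be sketched directly. This minimal version assumes nonzero pivots (no pivoting), and is checked on a small illustrative 2 x 2 matrix not taken from the slides:

```python
# Gauss-Jordan inversion: augment [A] with [I], normalize each pivot row,
# and eliminate the pivot column from *all* other rows (not just the ones below).
def gauss_jordan_inverse(A):
    n = len(A)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for k in range(n):
        pivot = aug[k][k]
        aug[k] = [v / pivot for v in aug[k]]          # normalize the pivot row
        for i in range(n):
            if i != k:
                factor = aug[i][k]
                aug[i] = [v - factor * w for v, w in zip(aug[i], aug[k])]
    return [row[n:] for row in aug]                   # right half is [A]^-1

Ainv = gauss_jordan_inverse([[2.0, -1.0], [-1.0, 2.0]])
# Ainv is [[2/3, 1/3], [1/3, 2/3]]
```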
WHEN IS THE INVERSE MATRIX USEFUL? CONSIDER STIMULUS-RESPONSE CALCULATIONS, WHICH ARE SO COMMON IN ENGINEERING.
  • 98.
Stimulus-Response Computations
• Conservation laws: mass, force, heat, momentum
• We considered the conservation of force in the earlier example of a truss
  • 99.
Stimulus-Response Computations
• [A]{x} = {c}
• [interactions]{response} = {stimuli}
• Superposition
• if a system is subject to several different stimuli, the response to each can be computed individually and the results summed to obtain the total response
• Proportionality
• multiplying the stimuli by a quantity results in the response to those stimuli being multiplied by the same quantity
• These concepts are inherent in the scaling of terms during the inversion of the matrix
  • 100.
Example
Given the following, determine {x} = [A]-1{c} for the two different loads {c}:
         [  2  -1   1 ]
[A]-1 =  [ -2   6   3 ]
         [ -3   1  -4 ]
{c}T = {1 2 3}    and    {c}T = {4 -7 1}
  • 101.
Solution
{c}T = {1 2 3}
x1 = (2)(1) + (-1)(2) + (1)(3) = 3
x2 = (-2)(1) + (6)(2) + (3)(3) = 19
x3 = (-3)(1) + (1)(2) + (-4)(3) = -13
{c}T = {4 -7 1}
x1 = (2)(4) + (-1)(-7) + (1)(1) = 16
x2 = (-2)(4) + (6)(-7) + (3)(1) = -47
x3 = (-3)(4) + (1)(-7) + (-4)(1) = -23
  • 102.
Gauss-Seidel Method
• An iterative approach
• Continue until we converge within some pre-specified error tolerance
• Round-off is no longer an issue, since you control the level of error that is acceptable
• Fundamentally different from Gauss elimination: this is an approximate, iterative method, particularly good for large numbers of equations
  • 103.
Gauss-Seidel Method
• If the diagonal elements are all nonzero, the first equation can be solved for x1
• Solve the second equation for x2, etc.
x1 = (c1 - a12 x2 - a13 x3 - ... - a1n xn) / a11
To assure that you understand this, write the equation for x2
  • 104.
x1 = (c1 - a12 x2 - a13 x3 - ... - a1n xn) / a11
x2 = (c2 - a21 x1 - a23 x3 - ... - a2n xn) / a22
x3 = (c3 - a31 x1 - a32 x2 - ... - a3n xn) / a33
...
xn = (cn - an1 x1 - an2 x2 - ... - an,n-1 xn-1) / ann
  • 105.
Gauss-Seidel Method
• Start the solution process by guessing values of x
• A simple way to obtain initial guesses is to assume that they are all zero
• Calculate new values of xi, starting with x1 = c1/a11
• Progressively substitute through the equations
• Repeat until the tolerance is reached
  • 106.
Gauss-Seidel Method
First iteration, with initial guesses of zero:
x1' = (c1 - a12(0) - a13(0)) / a11 = c1 / a11
x2' = (c2 - a21 x1' - a23(0)) / a22
x3' = (c3 - a31 x1' - a32 x2') / a33
Note that each new value is used immediately in the equations that follow.
  • 107.
Example
Given the following augmented matrix, complete one iteration of the Gauss-Seidel method.
[ 2  3  -1 |  2 ]
[ 4  1   2 | -2 ]
[ 3  2   1 |  1 ]
  • 108.
GAUSS-SEIDEL
[ 2  3  -1 |  2 ]
[ 4  1   2 | -2 ]
[ 3  2   1 |  1 ]
x1' = (c1 - a12(0) - a13(0)) / a11 = 2/2 = 1
x2' = (c2 - a21 x1' - a23(0)) / a22 = (-2 - 4(1) - 2(0)) / 1 = -6
x3' = (c3 - a31 x1' - a32 x2') / a33 = (1 - 3(1) - 2(-6)) / 1 = 10
  • 109.
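One sweep of Gauss-Seidel can be sketched as below, using the example system (as reconstructed from the slide); note how each new xi is used immediately within the same sweep:

```python
# One Gauss-Seidel iteration: each new x_i is used immediately.
def gauss_seidel_iteration(A, c, x):
    n = len(A)
    x = x[:]
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (c[i] - s) / A[i][i]     # overwrite in place
    return x

A = [[2, 3, -1],
     [4, 1, 2],
     [3, 2, 1]]
c = [2, -2, 1]
x = gauss_seidel_iteration(A, c, [0.0, 0.0, 0.0])
# After one iteration from a zero guess, x is [1.0, -6.0, 10.0]
```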
Jacobi Iteration
• Iterative like Gauss-Seidel
• Gauss-Seidel immediately uses each new value of xi in the equations that follow
• Jacobi computes the entire new set of xi values from the previous iteration's values
  • 110.
Graphical depiction of the difference between Gauss-Seidel and Jacobi
FIRST ITERATION
Gauss-Seidel (new values used immediately):
x1 = (c1 - a12 x2 - a13 x3) / a11
x2 = (c2 - a21 x1 - a23 x3) / a22   (uses the new x1)
x3 = (c3 - a31 x1 - a32 x2) / a33   (uses the new x1 and x2)
Jacobi (all values taken from the previous iteration):
x1 = (c1 - a12 x2 - a13 x3) / a11
x2 = (c2 - a21 x1 - a23 x3) / a22   (uses the old x1)
x3 = (c3 - a31 x1 - a32 x2) / a33   (uses the old x1 and x2)
SECOND ITERATION: repeat the same formulas with the updated values.
  • 111.
Example
Given the following augmented matrix, complete one iteration of the Gauss-Seidel method and the Jacobi method.
[ 2  3  -1 |  2 ]
[ 4  1   2 | -2 ]
[ 3  2   1 |  1 ]
Note: we worked the Gauss-Seidel method earlier
  • 112.
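The Jacobi sweep differs from the Gauss-Seidel sweep in one line: every update reads from the previous iterate. A sketch on the same example system:

```python
# One Jacobi iteration: all updates come from the previous iterate x_old.
def jacobi_iteration(A, c, x_old):
    n = len(A)
    x_new = [0.0] * n
    for i in range(n):
        s = sum(A[i][j] * x_old[j] for j in range(n) if j != i)
        x_new[i] = (c[i] - s) / A[i][i]   # x_old is never overwritten mid-sweep
    return x_new

A = [[2, 3, -1],
     [4, 1, 2],
     [3, 2, 1]]
c = [2, -2, 1]
x = jacobi_iteration(A, c, [0.0, 0.0, 0.0])
# From a zero guess, every update sees only zeros: x is [1.0, -2.0, 1.0]
```

Compare with the Gauss-Seidel result (1, -6, 10) from the same starting guess: the two methods diverge from each other after a single sweep.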
Gauss-Seidel Method convergence criterion:
εa,i = | (xi(j) - xi(j-1)) / xi(j) | × 100% < εs
As in the previous iterative procedures for finding roots, we compare the present and previous estimates. As with the open methods we studied previously with one-point iteration:
1. The method can diverge
2. It may converge very slowly
  • 113.
Class question: where do these formulas come from?
Convergence criteria for two linear equations:
u(x1, x2) = c1/a11 - (a12/a11) x2
v(x1, x2) = c2/a22 - (a21/a22) x1
Consider the partial derivatives of u and v:
∂u/∂x1 = 0            ∂u/∂x2 = -a12/a11
∂v/∂x1 = -a21/a22     ∂v/∂x2 = 0
  • 114.
Convergence criteria for two linear equations cont.
Criteria for convergence were presented earlier in the class material for nonlinear equations:
|∂u/∂x| + |∂v/∂x| < 1
|∂u/∂y| + |∂v/∂y| < 1
Noting that x = x1 and y = x2, substitute the partial derivatives from the previous slide:
  • 115.
Convergence criteria for two linear equations cont.
|a21/a22| < 1
|a12/a11| < 1
That is, the absolute values of the slopes must be less than unity to ensure convergence. Extended to n equations:
|aii| > Σ |aij|   where j = 1, n excluding j = i
  • 116.
Convergence criteria for two linear equations cont.
|aii| > Σ |aij|   where j = 1, n excluding j = i
This condition is sufficient but not necessary for convergence. When it is met, the matrix is said to be diagonally dominant.
  • 117.
  • 118.
Diagonal Dominance
Now check whether these numbers satisfy the following rule for each row (note: each row represents a unique equation):
|aii| > Σ |aij|   where j = 1, n excluding j = i
[ 1    0.2  -0.4 ] {x1}   { 9 }
[ 0.2  0.8  -0.4 ] {x2} = { 3 }
[ 0.1  0.5   0.9 ] {x3}   { 4 }
Row 1: |1|   > |0.2| + |-0.4|  →  1 > 0.6     satisfied
Row 2: |0.8| > |0.2| + |-0.4|  →  0.8 > 0.6   satisfied
Row 3: |0.9| > |0.1| + |0.5|   →  0.9 > 0.6   satisfied
The matrix is diagonally dominant.
  • 119.
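The row-by-row test is easy to code. A sketch (note the test uses absolute values, so the signs of the off-diagonal entries do not matter; the second matrix is an illustrative non-dominant counterexample):

```python
# Diagonal dominance: |a_ii| > sum of |a_ij| over j != i, for every row.
def is_diagonally_dominant(A):
    n = len(A)
    return all(
        abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )

A = [[1.0, 0.2, -0.4],
     [0.2, 0.8, -0.4],
     [0.1, 0.5, 0.9]]
ok = is_diagonally_dominant(A)        # True: safe to apply Gauss-Seidel
bad = is_diagonally_dominant([[1, 2],
                              [1, 3]])  # False: |1| is not > |2|
```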
Review the concepts of divergence and convergence by graphically illustrating Gauss-Seidel for two linear equations:
u: 11x1 + 13x2 = 286
v: 11x1 - 9x2 = 99
  • 120.
CONVERGENCE
v: 11x1 - 9x2 = 99
u: 11x1 + 13x2 = 286
Note: we are converging on the solution
  • 121.
DIVERGENCE
Change the order of the equations, i.e. change the direction of the initial estimates:
u: 11x1 + 13x2 = 286
v: 11x1 - 9x2 = 99
  • 122.
Improvement of Convergence Using Relaxation
This is a modification that will enhance slow convergence. After each new value of x is computed, calculate a new value based on a weighted average of the present and previous iterations:
xi(new) = λ xi(new) + (1 - λ) xi(old)
  • 123.
Improvement of Convergence Using Relaxation
xi(new) = λ xi(new) + (1 - λ) xi(old)
• if λ = 1, the result is unmodified
• if 0 < λ < 1, underrelaxation
• nonconvergent systems may converge
• hastens convergence by damping out oscillations
• if 1 < λ < 2, overrelaxation
• extra weight is placed on the present value
• assumes the new value is moving toward the correct solution, but too slowly
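Relaxation is a one-line change to the Gauss-Seidel sweep. A sketch on the two-equation system 11x1 + 13x2 = 286, 11x1 - 9x2 = 99 from the convergence slides, with the rows arranged in the diagonally dominant (convergent) order; λ = 0.9 (underrelaxation) is an illustrative choice, not a value from the slides:

```python
# Gauss-Seidel with relaxation: blend each new value with the old one.
def gauss_seidel_relaxed(A, c, lam=0.9, tol=1e-8, max_iter=200):
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (c[i] - s) / A[i][i]
            x[i] = lam * x_new + (1 - lam) * x[i]   # weighted average
        if all(abs(a - b) < tol for a, b in zip(x, x_old)):
            break
    return x

# Row order chosen so the diagonal (11 and 13) dominates: 11 > 9, 13 > 11
x1, x2 = gauss_seidel_relaxed([[11, -9], [11, 13]], [99, 286])
# Converges toward x1 = 351/22 (about 15.95) and x2 = 8.5
```

Swapping the two rows destroys the diagonal dominance and reproduces the divergent case shown graphically above.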