7. 7
Adding and Multiplying Matrices
ī° The addition of two matrices A and B:
īŽ Defined only if they have the same size
īŽ C = A + B, where c_ij = a_ij + b_ij
ī° Multiplication of two matrices A (n×m) and B (p×q):
īŽ The product C = AB is defined only if m = p
īŽ C = A B, where c_ij = Σ_{k=1}^{m} a_ik b_kj
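The two definitions above can be sketched directly in Python (plain lists, 0-based indices; `mat_add` and `mat_mul` are illustrative names, not from the slides):

```python
def mat_add(A, B):
    # Defined only if A and B have the same size: c_ij = a_ij + b_ij
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def mat_mul(A, B):
    # A is n x m, B is p x q; the product is defined only if m == p.
    n, m, p, q = len(A), len(A[0]), len(B), len(B[0])
    assert m == p
    # c_ij = sum over k of a_ik * b_kj
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(q)]
            for i in range(n)]
```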
8. 8
Systems of Linear Equations
ī° A system of linear equations can be presented in different forms:

Standard form:
2x1 + 4x2 + 3x3 = 3
2.5x1 + x2 + 3x3 = 5
x1 + 6x3 = 7

Matrix form:
[ 2    4   3 ] [x1]   [3]
[ 2.5  1   3 ] [x2] = [5]
[ 1    0   6 ] [x3]   [7]
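The same system can be set up in matrix form and checked numerically; this sketch assumes NumPy is available:

```python
import numpy as np

# The standard form above written in matrix form AX = B.
A = np.array([[2.0, 4.0, 3.0],
              [2.5, 1.0, 3.0],
              [1.0, 0.0, 6.0]])
B = np.array([3.0, 5.0, 7.0])

X = np.linalg.solve(A, B)      # direct solver, used here only for checking
print(np.allclose(A @ X, B))   # True: X satisfies every equation
```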
9. 9
Solutions of Linear Equations
[x1; x2] = [1; 2] is a solution to the following equations:
x1 + x2 = 3
x1 + 2x2 = 5
10. 10
Solutions of Linear Equations
ī° A set of equations is inconsistent if there
exists no solution to the system of equations:
x1 + 2x2 = 3
2x1 + 4x2 = 5
These equations are inconsistent.
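One way to detect inconsistency numerically is to compare matrix ranks: AX = B has a solution only when rank(A) = rank([A|B]). A sketch assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
B = np.array([[3.0], [5.0]])

rank_A = np.linalg.matrix_rank(A)                    # 1: the rows are multiples
rank_aug = np.linalg.matrix_rank(np.hstack([A, B]))  # 2: B breaks the pattern
print(rank_A, rank_aug)   # 1 2  -> inconsistent, no solution
```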
11. 11
Solutions of Linear Equations
ī° Some systems of equations may have an infinite
number of solutions. For example,
x1 + 2x2 = 3
2x1 + 4x2 = 6
has an infinite number of solutions:
[x1; x2] = [a; 0.5(3 - a)] is a solution for all a.
12. 12
Graphical Solution of Systems of
Linear Equations
x1 + x2 = 3
x1 + 2x2 = 5
[Graph: the two lines intersect at the solution point]
Solution
x1=1, x2=2
13. 13
Cramer's Rule is Not Practical
ī° Cramer's Rule is not practical for large systems.
To solve an N by N system requires (N+1)(N-1)N!
multiplications. To solve a 30 by 30 system,
2.38×10^35 multiplications are needed.
ī° It can be used if the determinants are computed in
an efficient way.
ī° Cramer's Rule can be used to solve the system
x1 + x2 = 3
x1 + 2x2 = 5

x1 = det([3 1; 5 2]) / det([1 1; 1 2]) = 1
x2 = det([1 3; 1 5]) / det([1 1; 1 2]) = 2
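For this 2-by-2 example the rule is cheap; a minimal sketch:

```python
# Cramer's Rule for the 2x2 system on this slide:
#   x1 + x2 = 3
#   x1 + 2*x2 = 5
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, 1], [1, 2]]
b = [3, 5]
d = det2(A)                                          # must be nonzero
x1 = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / d    # column 1 replaced by b
x2 = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / d    # column 2 replaced by b
print(x1, x2)   # 1.0 2.0
```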
15. 15
Naive Gaussian Elimination
ī° The method consists of two steps:
īŽ Forward Elimination: the system is
reduced to upper triangular form. A sequence
of elementary operations is used.
īŽ Backward Substitution: Solve the system
starting from the last variable.
[ a11  a12  a13 ] [x1]   [b1]       [ a11  a12  a13  ] [x1]   [b1 ]
[ a21  a22  a23 ] [x2] = [b2]  -->  [ 0    a'22 a'23 ] [x2] = [b'2]
[ a31  a32  a33 ] [x3]   [b3]       [ 0    0    a'33 ] [x3]   [b'3]
24. 24
ī° Summary of the Naive Gaussian Elimination
ī° Example
ī° Problems with Naive Gaussian Elimination
īŽ Failure due to zero pivot element
īŽ Error
ī° Pseudo-Code
Naive Gaussian Elimination
25. 25
Naive Gaussian Elimination
o The method consists of two steps
o Forward Elimination: the system is reduced to
upper triangular form. A sequence of elementary
operations is used.
o Backward Substitution: Solve the system starting
from the last variable. Solve for xn, xn-1, ..., x1.
[ a11  a12  a13 ] [x1]   [b1]       [ a11  a12  a13  ] [x1]   [b1 ]
[ a21  a22  a23 ] [x2] = [b2]  -->  [ 0    a'22 a'23 ] [x2] = [b'2]
[ a31  a32  a33 ] [x3]   [b3]       [ 0    0    a'33 ] [x3]   [b'3]
30. 30
How Many Solutions Does a System of
Equations AX=B Have?
ī° Unique solution:     det(A) ≠ 0; the reduced matrix has no zero rows.
ī° No solution:         det(A) = 0; the reduced matrix has one or more zero
                        rows whose corresponding B elements are ≠ 0.
ī° Infinite solutions:  det(A) = 0; the reduced matrix has one or more zero
                        rows whose corresponding B elements are = 0.
32. Pseudo-Code: Forward Elimination
Do k = 1 to n-1
Do i = k+1 to n
factor = ai,k / ak,k
Do j = k+1 to n
ai,j = ai,j - factor * ak,j
End Do
bi = bi - factor * bk
End Do
End Do
33. Pseudo-Code: Back Substitution
xn = bn / an,n
Do i = n-1 downto 1
sum = bi
Do j = i+1 to n
sum = sum - ai,j * xj
End Do
xi = sum / ai,i
End Do
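The two pseudo-code fragments above translate directly into Python (0-based indexing; like the pseudo-code, the routine overwrites its inputs):

```python
def naive_gauss(a, b):
    n = len(b)
    # Forward elimination: reduce to upper triangular form.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]   # fails if the pivot a[k][k] is zero
            for j in range(k + 1, n):
                a[i][j] -= factor * a[k][j]
            b[i] -= factor * b[k]
    # Back substitution: solve starting from the last variable.
    x = [0.0] * n
    x[n - 1] = b[n - 1] / a[n - 1][n - 1]
    for i in range(n - 2, -1, -1):
        s = b[i]
        for j in range(i + 1, n):
            s -= a[i][j] * x[j]
        x[i] = s / a[i][i]
    return x

# The 3x3 example system from the earlier slide:
x = naive_gauss([[2.0, 4.0, 3.0], [2.5, 1.0, 3.0], [1.0, 0.0, 6.0]],
                [3.0, 5.0, 7.0])
print(x)   # [1.0, -0.5, 1.0]
```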
34. 34
Gaussian Elimination with
Scaled Partial Pivoting
ī° Problems with Naive Gaussian Elimination
ī° Definitions and Initial step
ī° Forward Elimination
ī° Backward substitution
ī° Example
35. 35
Problems with Naive Gaussian Elimination
o The Naive Gaussian Elimination may fail for
very simple cases (when the pivoting element is zero).
o A very small pivoting element may result in
serious computation errors
[ 0  1 ] [x1]   [1]
[ 1  1 ] [x2] = [2]

[ 10^-10  1 ] [x1]   [1]
[ 1       1 ] [x2] = [2]
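The effect can be demonstrated with a short sketch; the tiny coefficient is written here as `eps`, and 1e-20 is used instead of the slide's 10^-10 so the breakdown is unmistakable in double precision:

```python
eps = 1e-20   # stand-in for the slide's very small pivot

def solve2(a11, a12, b1, a21, a22, b2):
    # Naive 2x2 elimination using a11 as the pivot.
    factor = a21 / a11
    a22p = a22 - factor * a12
    b2p = b2 - factor * b1
    x2 = b2p / a22p
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

# Tiny pivot first: cancellation wipes out x1 entirely (true x1 is about 1).
print(solve2(eps, 1.0, 1.0, 1.0, 1.0, 2.0))   # (0.0, 1.0)
# Rows swapped so the large element is the pivot: accurate answer.
print(solve2(1.0, 1.0, 2.0, eps, 1.0, 1.0))   # (1.0, 1.0)
```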
37. 37
Example 2
Initialization step
[ 1  1  2  1 ] [x1]   [1]
[ 3  2  1  4 ] [x2]   [1]
[ 5  8  6  3 ] [x3] = [1]
[ 4  2  5  3 ] [x4]   [1]

Scale vector S = [2  4  8  5]
Index vector L = [1  2  3  4]

Scale vector: disregard sign, find the largest in
magnitude in each row.
38. 38
Why Index Vector?
ī° Index vectors are used because it is much
easier to exchange a single index element
compared to exchanging the values of a
complete row.
ī° In practical problems with very large N,
exchanging the contents of rows may not
be practical.
39. 39
Example 2
Forward Elimination-- Step 1: eliminate x1
Selection of the pivot equation:

S = [2  4  8  5],   L = [1  2  3  4]

Ratios = { |a_l,1| / S_l ,  l = 1, 2, 3, 4 } = { 1/2, 3/4, 5/8, 4/5 }

max { 1/2, 3/4, 5/8, 4/5 } = 4/5 corresponds to equation 4,
so equation 4 is the first pivot equation.

Exchange l1 and l4:  L = [4  2  3  1]
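The initialization and first pivot selection can be sketched as follows (0-based indices, so the slide's L = [4 2 3 1] appears as [3, 1, 2, 0]):

```python
# The 4x4 example matrix from the initialization slide.
A = [[1, 1, 2, 1],
     [3, 2, 1, 4],
     [5, 8, 6, 3],
     [4, 2, 5, 3]]
L = [0, 1, 2, 3]                              # index vector (0-based)
S = [max(abs(v) for v in row) for row in A]   # scale vector [2, 4, 8, 5]

# Ratios |a_{l,1}| / S_l decide the first pivot equation.
ratios = [abs(A[l][0]) / S[l] for l in L]     # [0.5, 0.75, 0.625, 0.8]
p = max(range(4), key=lambda i: ratios[i])    # equation 4 (index 3) wins
L[0], L[p] = L[p], L[0]                       # exchange the index entries
print(L)   # [3, 1, 2, 0]
```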
53. 53
How Do We Know If a Solution is
Good or Not
Given AX=B
X is a solution if AX-B=0
Compute the residual vector R= AX-B
Due to rounding error, R may not be zero
The solution is acceptable if max_i |r_i| ≤ ε
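A sketch of this residual test (NumPy assumed; the tolerance `eps` is the caller's choice):

```python
import numpy as np

def acceptable(A, X, B, eps=1e-9):
    R = A @ X - B                  # residual vector R = AX - B
    return np.max(np.abs(R)) <= eps

A = np.array([[1.0, 1.0], [1.0, 2.0]])
B = np.array([3.0, 5.0])
print(acceptable(A, np.array([1.0, 2.0]), B))   # True: exact solution
print(acceptable(A, np.array([1.0, 1.9]), B))   # False: residual too large
```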
55. 55
Remarks:
ī° We use an index vector to avoid the need to move
rows, which may not be practical for large
problems.
ī° If we order the equations according to the final
values of the index vector, we have a triangular form.
ī° The scale vector is formed by taking the maximum in
magnitude in each row.
ī° The scale vector does not change.
ī° The original matrices A and B are used in
checking the residuals.
56. 56
Tridiagonal & Banded Systems
and Gauss-Jordan Method
ī° Tridiagonal Systems
ī° Diagonal Dominance
ī° Tridiagonal Algorithm
ī° Examples
ī° Gauss-Jordan Algorithm
57. 57
Tridiagonal Systems:
ī° The non-zero elements are
in the main diagonal,
super diagonal and
subdiagonal.
ī° aij=0 if |i-j| > 1
[ 5  1  0  0  0 ] [x1]   [b1]
[ 3  4  1  0  0 ] [x2]   [b2]
[ 0  2  6  2  0 ] [x3] = [b3]
[ 0  0  1  4  1 ] [x4]   [b4]
[ 0  0  0  1  6 ] [x5]   [b5]
Tridiagonal Systems
58. 58
ī° Occur in many applications
ī° Needs less storage (4n-2 compared to n^2 + n for the general case)
ī° Selection of pivoting rows is unnecessary
(under some conditions)
ī° Efficiently solved by Gaussian elimination
Tridiagonal Systems
59. 59
ī° Based on Naive Gaussian elimination.
ī° As in previous Gaussian elimination algorithms
īŽ Forward elimination step
īŽ Backward substitution step
ī° Elements in the super diagonal are not affected.
ī° Elements in the main diagonal and B need
updating
Algorithm to Solve Tridiagonal Systems
63. 63
Diagonally Dominant Tridiagonal System
ī° A tridiagonal system is diagonally dominant if
ī° Forward Elimination preserves diagonal dominance
|d_i| > |a_i| + |c_i| ,   1 ≤ i ≤ n
64. 64
Solving Tridiagonal System
Forward Elimination (for i = 2, ..., n):
    d_i = d_i - (a_i / d_{i-1}) c_{i-1}
    b_i = b_i - (a_i / d_{i-1}) b_{i-1}

Backward Substitution:
    x_n = b_n / d_n
    x_i = (b_i - c_i x_{i+1}) / d_i    for i = n-1, ..., 2, 1
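These formulas translate into a short routine (often called the Thomas algorithm). The right-hand side [12, 9, 8, 6] in the usage line is an assumption chosen to be consistent with the worked example that follows:

```python
# a = subdiagonal (n-1 values), d = main diagonal, c = superdiagonal
# (c is never modified), b = right-hand side.
def solve_tridiagonal(a, d, c, b):
    n = len(d)
    d = d[:]; b = b[:]                 # keep the caller's vectors intact
    # Forward elimination (i = 2, ..., n): update d and b only.
    for i in range(1, n):
        m = a[i - 1] / d[i - 1]
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    # Backward substitution.
    x = [0.0] * n
    x[n - 1] = b[n - 1] / d[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x

# Reproduces the worked example: x is approximately [2, 1, 1, 1].
print(solve_tridiagonal([1.0, 1, 1], [5.0, 5, 5, 5], [2.0, 2, 2],
                        [12.0, 9, 8, 6]))
```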
67. 67
Example
Backward Substitution
ī° After the Forward Elimination:
D = [5  4.6  4.5652  4.5619],   B = [12  6.6  6.5652  4.5619]
ī° Backward Substitution:
x4 = b4 / d4 = 4.5619 / 4.5619 = 1
x3 = (b3 - c3 x4) / d3 = (6.5652 - 2(1)) / 4.5652 = 1
x2 = (b2 - c2 x3) / d2 = (6.6 - 2(1)) / 4.6 = 1
x1 = (b1 - c1 x2) / d1 = (12 - 2(1)) / 5 = 2
68. 68
Gauss-Jordan Method
ī° The method reduces the general system of
equations AX = B to IX = B', where I is the identity
matrix and B' holds the solution.
ī° Only forward elimination is done and no
backward substitution is needed.
ī° It has the same problems as Naive Gaussian
elimination and can be modified to do scaled
partial pivoting.
ī° It takes 50% more time than the Naive Gaussian
method.
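A minimal sketch of the reduction (naive, without pivoting, so it inherits the zero-pivot problem mentioned above):

```python
# Gauss-Jordan: reduce [A|B] to [I|X]; B ends up holding the solution.
def gauss_jordan(a, b):
    n = len(b)
    for k in range(n):
        pivot = a[k][k]                 # assumed nonzero (naive version)
        for j in range(n):
            a[k][j] /= pivot            # scale pivot row so a[k][k] = 1
        b[k] /= pivot
        for i in range(n):              # eliminate column k in ALL other rows
            if i != k:
                factor = a[i][k]
                for j in range(n):
                    a[i][j] -= factor * a[k][j]
                b[i] -= factor * b[k]
    return b

# The 3x3 example system used with Gaussian elimination earlier:
x = gauss_jordan([[2.0, 4.0, 3.0], [2.5, 1.0, 3.0], [1.0, 0.0, 6.0]],
                 [3.0, 5.0, 7.0])
print(x)   # [1.0, -0.5, 1.0]
```

Unlike Gaussian elimination, each pivot row is used to clear its column both below and above the diagonal, which is where the extra work comes from.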