APPENDIX B
MATRICES AND DETERMINANTS
B.1 BASIC CONCEPTS
A system of n linear algebraic equations in n unknowns x1, x2, x3, . . . , xn such as

\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= y_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= y_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3n}x_n &= y_3 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \cdots + a_{nn}x_n &= y_n
\end{aligned}
\]
can conveniently be represented by the matrix equation

\[
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix}
\]
or more simply by

AX = Y

where A is a rectangular matrix (in this case square) having elements aij and where X and Y are column vectors with elements xi and yi, respectively. The foregoing representations imply that

\[
\sum_{j=1}^{n} a_{ij} x_j = y_i, \qquad i = 1, 2, 3, \ldots, n
\]
Extended Surface Heat Transfer. A. D. Kraus, A. Aziz and J. Welty
Copyright © 2001 John Wiley & Sons, Inc.
The matrix A is called the coefficient matrix. If it is desired to associate the elements
of Y with the coefficient matrix A, one may augment A and define an augmented
matrix

\[
\left[\begin{array}{ccccc|c}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} & y_1 \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} & y_2 \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} & y_3 \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} & y_n
\end{array}\right]
\]
which has n rows and n + 1 columns. This matrix may be written more simply as the
augmented matrix
\[
A^{a} = [A \mid Y]
\]
where the superscript means augmented and where the idea of a partitioned matrix is
apparent. For example, in the system of linear algebraic equations
\[
\begin{aligned}
6x_1 + 4x_2 + x_3 &= 16 \\
2x_1 + 7x_2 - 2x_3 &= 12 \\
-4x_1 + x_2 + 8x_3 &= -22
\end{aligned}
\]

the matrix

\[
\begin{bmatrix} 6 & 4 & 1 \\ 2 & 7 & -2 \\ -4 & 1 & 8 \end{bmatrix}
\]
is called the coefficient matrix A of the system
AX = B
and the matrix

\[
\begin{bmatrix} 6 & 4 & 1 & 16 \\ 2 & 7 & -2 & 12 \\ -4 & 1 & 8 & -22 \end{bmatrix}
\]
which contains the constant terms, in addition to the elements of A, is called the
augmented matrix of the system. Moreover, the unknowns and the constant terms
form two column vectors X and B.
In the representation
AX = B
A is said to premultiply X (A is a premultiplier) and X is said to postmultiply A (X is
a postmultiplier).
B.2 MATRIX AND VECTOR TERMINOLOGY
A matrix of order m × n,

\[
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn}
\end{bmatrix}
\]
is a rectangular ordered array of a total of mn entries arranged in m rows and n
columns. The order of this matrix is m × n, which is often written as (m, n).
If m = n, the matrix is square of order n × n (or of order n or of nth order):

\[
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
\]
In both rectangular and square matrices, aij is called the (i, j)th element of A. If
the matrix is square and i = j, the element is said to define and be located on the
principal diagonal. The elements an1, a(n−1),2, a(n−2),3, . . . , a1n are located on and
constitute the secondary diagonal.
All elements where i ≠ j are considered to be off-diagonal: subdiagonal if i > j,
and superdiagonal if i < j. The sum of the elements on the principal diagonal of A
is called the trace of A:
\[
\operatorname{tr}(A) = \sum_{k=1}^{n} a_{kk}
\]
For example, the matrix
\[
A = \begin{bmatrix}
6 & 3 & 0 & 1 \\
-1 & 4 & 1 & 1 \\
-1 & 1 & 8 & -2 \\
2 & 5 & 2 & 11
\end{bmatrix}
\]
is square and is of fourth order (4 × 4). The elements 6, 4, 8, and 11 constitute the
principal diagonal and the elements 2, 1, 1, and 1 constitute the secondary diagonal.
The element 1 is the a23 element, which lies at the intersection of the second row and
third column. The trace of A is
tr(A) = 6 + 4 + 8 + 11 = 29
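The trace computation can be sketched in a few lines of Python (an illustrative aside, not part of the original text; the helper name `trace` and the list-of-rows representation are our choices):

```python
# Minimal sketch: the trace of a square matrix, tr(A) = sum of the
# principal-diagonal elements a_kk (matrix stored as a list of rows).
def trace(A):
    return sum(A[k][k] for k in range(len(A)))

A = [[ 6, 3, 0,  1],
     [-1, 4, 1,  1],
     [-1, 1, 8, -2],
     [ 2, 5, 2, 11]]
print(trace(A))  # 6 + 4 + 8 + 11 = 29
```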
A vector is a matrix containing a single row or a single column. If it is a 1 × n
matrix (a matrix of order 1 × n), it is a row vector:
V = [v1 v2 v3 · · · vn]
If the vector is an m × 1 matrix (order m × 1), it is a column vector:

\[
V = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_m \end{bmatrix}
\]
This concept and the usual geometric notion of a vector have certain similarities,
which is why the elements of a vector are frequently called components. However,
caution is necessary: unlike ordinary three-dimensional space, m (for a column
vector) or n (for a row vector) is not limited to an upper bound of 3.
B.3 SOME SPECIAL MATRICES
An m × n matrix such as the one displayed in Section B.2 is called a null matrix if
every element in the matrix is identically equal to zero. For example, the 3 × 4 matrix

\[
\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\]

is null.
The transpose of an m × n matrix is an n × m matrix with the rows and columns
of the original matrix interchanged. For the 3 × 4 matrix

\[
A = \begin{bmatrix} 4 & 3 & 1 & -2 \\ -2 & 3 & 0 & 1 \\ 1 & -3 & -4 & 2 \end{bmatrix}
\]

the transpose is the 4 × 3 matrix

\[
A^{T} = \begin{bmatrix} 4 & -2 & 1 \\ 3 & 3 & -3 \\ 1 & 0 & -4 \\ -2 & 1 & 2 \end{bmatrix}
\]
Note the use of the superscript T to indicate the transpose and recognize that the
transpose of the transpose is the original matrix:

\[
(A^{T})^{T} = A
\]
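A transpose can be formed by swapping row and column indices; a minimal sketch (our own illustration, with the helper name `transpose` assumed):

```python
# Transpose of an m x n matrix (list of rows): the (i, j) element of A
# becomes the (j, i) element of A^T.
def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[ 4,  3,  1, -2],
     [-2,  3,  0,  1],
     [ 1, -3, -4,  2]]
AT = transpose(A)
print(AT)                   # the 4 x 3 transpose
print(transpose(AT) == A)   # (A^T)^T = A, so this prints True
```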
The nth-order square matrix

\[
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
\]

is said to be diagonal or a diagonal matrix if aij = 0 for all i ≠ j:

\[
A = \begin{bmatrix}
a_{11} & 0 & 0 & \cdots & 0 \\
0 & a_{22} & 0 & \cdots & 0 \\
0 & 0 & a_{33} & \cdots & 0 \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & \cdots & a_{nn}
\end{bmatrix}
\]
If all the diagonal elements are equal (that is, aii = α for all i) and aij = 0 for i ≠ j,
the resulting matrix is said to be a scalar matrix, which is a diagonal matrix with all
principal diagonal elements equal:

\[
A = \begin{bmatrix}
\alpha & 0 & 0 & \cdots & 0 \\
0 & \alpha & 0 & \cdots & 0 \\
0 & 0 & \alpha & \cdots & 0 \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & \cdots & \alpha
\end{bmatrix}
\]
If all α in the scalar matrix are equal to unity (α = 1), the scalar matrix becomes
the identity matrix:
\[
I = \begin{bmatrix}
1 & 0 & 0 & \cdots & 0 \\
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{bmatrix}
\]
B.4 MATRIX EQUALITY
A matrix A = [aij ]m×n will be equal to a matrix B = [bij ]m×n if and only if aij = bij
for all i and j. This essentially states that two matrices will be equal if and only if
they are of the same order and corresponding elements are equal.
B.5 MATRIX ADDITION AND SUBTRACTION
A matrix A = [aij ]m×n may be added to a matrix B = [bij ]m×n to form a matrix
C = [cij ]m×n = [aij + bij ]m×n. This points out that in order to form the sum of two
matrices, the matrices must be of the same order and that the elements of the sum are
determined by adding the corresponding elements of the matrices forming the sum.
Example B.1. If

\[
A = \begin{bmatrix} 4 & 3 & 2 \\ -6 & 1 & 5 \end{bmatrix} \qquad
B = \begin{bmatrix} 3 & -1 & 4 \\ 1 & 0 & 3 \end{bmatrix}
\]

and

\[
C = \begin{bmatrix} 3 & 2 \\ 4 & -2 \end{bmatrix}
\]

find A + B and A + C.

SOLUTION

\[
A + B = \begin{bmatrix} 4 & 3 & 2 \\ -6 & 1 & 5 \end{bmatrix}
+ \begin{bmatrix} 3 & -1 & 4 \\ 1 & 0 & 3 \end{bmatrix}
= \begin{bmatrix} (4+3) & (3-1) & (2+4) \\ (-6+1) & (1+0) & (5+3) \end{bmatrix}
= \begin{bmatrix} 7 & 2 & 6 \\ -5 & 1 & 8 \end{bmatrix}
\]

The sum A + C does not exist because the order of C does not equal the order of A.
Matrix addition is both commutative and associative:

A + B = B + A
A + (B + C) = (A + B) + C

In addition, the sum A + C is equal to the sum B + C if and only if A = B. This is
the cancellation law for addition.
The matrix B = [bij ]m×n may be subtracted from the matrix A = [aij ]m×n to form
the matrix D = [dij ]m×n = [aij − bij ]m×n. This indicates that two matrices of the
same order may be subtracted by forming the difference between the corresponding
elements of the minuend and the subtrahend. Moreover, it is easy to see that if
A + B = C
then
A = C − B
Finally, it may be observed that a square matrix possesses a unique decomposition
into a sum of a subdiagonal, a diagonal and a superdiagonal matrix. For example,
\[
A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 9 & 8 & 7 \end{bmatrix}
= \begin{bmatrix} 0 & 0 & 0 \\ 4 & 0 & 0 \\ 9 & 8 & 0 \end{bmatrix}
+ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 7 \end{bmatrix}
+ \begin{bmatrix} 0 & 2 & 3 \\ 0 & 0 & 6 \\ 0 & 0 & 0 \end{bmatrix}
\]
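The subdiagonal/diagonal/superdiagonal decomposition can be sketched directly from the index conditions i > j, i = j, and i < j (our own illustration; the name `split_triangular` is assumed):

```python
# Split a square matrix into its subdiagonal (i > j), diagonal (i == j),
# and superdiagonal (i < j) parts; their elementwise sum recovers A.
def split_triangular(A):
    n = len(A)
    lower = [[A[i][j] if i > j else 0 for j in range(n)] for i in range(n)]
    diag  = [[A[i][j] if i == j else 0 for j in range(n)] for i in range(n)]
    upper = [[A[i][j] if i < j else 0 for j in range(n)] for i in range(n)]
    return lower, diag, upper

A = [[1, 2, 3], [4, 5, 6], [9, 8, 7]]
L, D, U = split_triangular(A)
total = [[L[i][j] + D[i][j] + U[i][j] for j in range(3)] for i in range(3)]
print(total == A)  # True: the three parts sum back to A
```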
B.6 MATRIX MULTIPLICATION
A matrix may be multiplied by a scalar or by another matrix. If A = [aij ] and α is a
scalar, then
αA = [αaij ]
This shows that multiplication by a scalar is commutative and that multiplication
by a scalar involves the multiplication of each and every element of the matrix by the
scalar. In addition, it is easy to see that
(α + β)A = αA + βA
α(A + B) = αA + αB
and
α(βA) = (αβ)A
Observe that a scalar matrix is equal to the product of the scalar and the identity
matrix. For example,

\[
\begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix}
= 3 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]
A modest effort must be expended to use the terminology multiplication by a scalar
in order to avoid confusion with the process known as scalar multiplication.
The product of a row vector of order 1 × n and a column vector of order n × 1
forms a 1×1 matrix which has no important property that is not possessed by a scalar.
This product is therefore called the scalar or dot product (some sources also use the
terminology inner product). It is called for through the use of a dot placed between
the two matrices in the product; that is, if A and B are n × 1 column vectors,

\[
A \cdot B = A^{T} B = B^{T} A = \gamma
\]

where γ is a scalar obtained from

\[
\gamma = \sum_{k=1}^{n} a_k b_k
\]
If the scalar product of two vectors is identically equal to zero, the vectors are said
to be orthogonal.
Example B.2. If

\[
A = \begin{bmatrix} 2 \\ 4 \\ 3 \\ 1 \\ 2 \end{bmatrix}
\qquad \text{and} \qquad
B = \begin{bmatrix} -5 \\ 4 \\ -3 \\ 8 \\ -2 \end{bmatrix}
\]

what is the dot product A · B?

SOLUTION

\[
A \cdot B = 2(-5) + 4(4) + 3(-3) + 1(8) + 2(-2) = -10 + 16 - 9 + 8 - 4 = 1
\]
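The element-by-element multiply-and-sum of the dot product is a one-liner in Python (a sketch of our own, with the helper name `dot` assumed):

```python
# Scalar (dot) product of two vectors of equal length: sum of a_k * b_k.
def dot(a, b):
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    return sum(x * y for x, y in zip(a, b))

A = [2, 4, 3, 1, 2]
B = [-5, 4, -3, 8, -2]
print(dot(A, B))  # -10 + 16 - 9 + 8 - 4 = 1, as in Example B.2
```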
In Section B.1, a set of linear simultaneous algebraic equations was shown to be
represented by the notation

AX = Y

where A was the n × n coefficient matrix and X and Y were n × 1 column vectors.
In order to obtain the original set of equations from a set where n = 3,

\[
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}
\]

a row-by-column element product and sum operation is clearly evident:

\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= y_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= y_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= y_3
\end{aligned}
\]
and it is observed that each element of Y is obtained by multiplying the corresponding
elements of a row of A by the elements of X and adding the results. Notice that the
foregoing procedure will not be possible if the number of columns of A does not equal the
number of rows of X. In this event there will not always be corresponding elements
to multiply. Moreover, it should be noted that Y contains the same number of rows
as both A and X.
This suggests a general definition for the multiplication of two matrices. If A is
m × n and B is p × q, the product AB = C will exist if n = p, in which case the
matrix C will be m × q with elements given by

\[
c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj},
\qquad i = 1, 2, 3, \ldots, m; \quad j = 1, 2, 3, \ldots, q
\]

When n = p, the matrices A and B are said to be conformable for multiplication.
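The row-by-column rule above translates directly into a triple loop (a minimal sketch of our own; the name `matmul` is assumed):

```python
# Product of an m x n matrix A and an n x q matrix B: c_ij is the sum over
# k of a_ik * b_kj (the row-by-column product-and-sum rule).
def matmul(A, B):
    m, n, q = len(A), len(B), len(B[0])
    if len(A[0]) != n:
        raise ValueError("A and B are not conformable for multiplication")
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(q)]
            for i in range(m)]

A = [[-1, 4, -2, 0],
     [ 4, 3,  2, 1]]
B = [[-1, 1], [3, 2], [-2, 4], [0, 3]]
print(matmul(A, B))  # [[17, -1], [1, 21]], matching Example B.3
```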
Example B.3. If

\[
A = \begin{bmatrix} -1 & 4 & -2 & 0 \\ 4 & 3 & 2 & 1 \end{bmatrix} \qquad
B = \begin{bmatrix} -1 & 1 \\ 3 & 2 \\ -2 & 4 \\ 0 & 3 \end{bmatrix}
\]

and

\[
C = \begin{bmatrix} 2 & 1 \\ -3 & 4 \end{bmatrix}
\]

find AB, BA, and AC.
SOLUTION. The product AB exists because A is 2 × 4 and B is 4 × 2. The result P
will be 2 × 2:

\[
P = AB = \begin{bmatrix} -1 & 4 & -2 & 0 \\ 4 & 3 & 2 & 1 \end{bmatrix}
\begin{bmatrix} -1 & 1 \\ 3 & 2 \\ -2 & 4 \\ 0 & 3 \end{bmatrix}
= \begin{bmatrix} (1+12+4+0) & (-1+8-8+0) \\ (-4+9-4+0) & (4+6+8+3) \end{bmatrix}
= \begin{bmatrix} 17 & -1 \\ 1 & 21 \end{bmatrix}
\]
The product BA also exists:

\[
BA = \begin{bmatrix} -1 & 1 \\ 3 & 2 \\ -2 & 4 \\ 0 & 3 \end{bmatrix}
\begin{bmatrix} -1 & 4 & -2 & 0 \\ 4 & 3 & 2 & 1 \end{bmatrix}
= \begin{bmatrix}
(1+4) & (-4+3) & (2+2) & (0+1) \\
(-3+8) & (12+6) & (-6+4) & (0+2) \\
(2+16) & (-8+12) & (4+8) & (0+4) \\
(0+12) & (0+9) & (0+6) & (0+3)
\end{bmatrix}
= \begin{bmatrix}
5 & -1 & 4 & 1 \\
5 & 18 & -2 & 2 \\
18 & 4 & 12 & 4 \\
12 & 9 & 6 & 3
\end{bmatrix}
\]
Notice that AB ≠ BA, which shows that matrix multiplication, in general, is not
commutative. Notice also that the product AC does not exist because A and C are not
conformable for multiplication (A is 2 × 4 and C is 2 × 2).
Although the commutative law does not hold, the multiplication of matrices is
associative:
(AB)C = A(BC)
and matrix multiplication is distributive with respect to addition:
A(B + C) = AB + AC
assuming that conformability exists for both addition and multiplication.
If the product AB is null, that is, AB = 0, it cannot be concluded that either A or
B is null. Furthermore, if AB = AC or CA = BA, it cannot be concluded that B = C.
This means that, in general, cancellation of matrices is not permissible.
The transpose of a product of matrices is equal to the product of the individual
transposes taken in reverse order:

\[
(AB)^{T} = B^{T} A^{T}
\]
B.7 MATRIX DIVISION AND MATRIX INVERSION
Matrix division is not defined. Instead, use is made of a process called matrix
inversion, which relies on the existence of the identity matrix, which is related to a
square matrix A by

AI = IA = A
Consider the identity element for addition, 0, which has the property that for all
scalars α,

\[
\alpha + 0 = 0 + \alpha = \alpha
\]

and the identity element for multiplication, 1, for which

\[
\alpha 1 = 1\alpha = \alpha
\]

Every nonzero scalar α possesses a reciprocal or multiplicative inverse,
1/α = α^{-1}, which when multiplied by α yields the identity element for scalar
multiplication:

\[
\alpha^{-1}\alpha = \alpha\alpha^{-1} = 1
\]
This reasoning may be extended to the n × n matrix A and the pair of identity
matrices: the n × n identity matrix for multiplication I and the n × n identity matrix
for addition 0 (a null matrix). Thus, as already noted,
AI = IA = A
and
A + 0 = 0 + A = A
If there is an n × n matrix A^{-1} that pre- and postmultiplies A such that

\[
A^{-1} A = A A^{-1} = I
\]

then A^{-1} is an inverse of A with respect to matrix multiplication. The matrix A is
said to be invertible or nonsingular if A^{-1} exists and singular if A^{-1} does not
exist.
For example, the 3 × 3 matrix

\[
A = \begin{bmatrix} 4 & -2 & -1 \\ -2 & 8 & -5 \\ -1 & -5 & 8 \end{bmatrix}
\]

can be shown to possess the inverse

\[
A^{-1} = \begin{bmatrix} 13/32 & 7/32 & 3/16 \\ 7/32 & 31/96 & 11/48 \\ 3/16 & 11/48 & 7/24 \end{bmatrix}
\]
A simple multiplication will produce the identity matrix:

\[
AA^{-1} = \begin{bmatrix} 4 & -2 & -1 \\ -2 & 8 & -5 \\ -1 & -5 & 8 \end{bmatrix}
\begin{bmatrix} 13/32 & 7/32 & 3/16 \\ 7/32 & 31/96 & 11/48 \\ 3/16 & 11/48 & 7/24 \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]
It can also be verified that the identity is produced if the product A^{-1}A is taken.
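The check AA^{-1} = I can be carried out exactly with rational arithmetic (a sketch of our own, not the text's procedure, using the standard library `fractions` module):

```python
from fractions import Fraction as F

# Exact verification that the stated inverse of the 3 x 3 example
# satisfies A A^{-1} = I, using rational arithmetic to avoid roundoff.
A = [[4, -2, -1], [-2, 8, -5], [-1, -5, 8]]
Ainv = [[F(13, 32), F(7, 32),  F(3, 16)],
        [F(7, 32),  F(31, 96), F(11, 48)],
        [F(3, 16),  F(11, 48), F(7, 24)]]

product = [[sum(F(A[i][k]) * Ainv[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
identity = [[F(int(i == j)) for j in range(3)] for i in range(3)]
print(product == identity)  # True
```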
The inverse of a product of matrices is equal to the product of the individual
inverses taken in reverse order:

\[
(AB)^{-1} = B^{-1} A^{-1}
\]
B.8 DETERMINANTS
B.8.1 Definitions and Terminology
A square matrix of order n (an n × n matrix)
\[
A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
\]
possesses a uniquely defined scalar (a single number) which is designated as the
determinant of A or merely the determinant:
det A = |A|
where the order of the determinant is the same as the order of the matrix from
which it derives. Observe that only square matrices possess determinants, the use
of vertical lines and not brackets to designate determinants, and that the elements of
the determinant are identical to the elements of the matrix:
\[
\det A = \begin{vmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{vmatrix}
\]
A determinant of the first order consists of a single element a and has, therefore,
the value det A = a. A determinant of the second order contains four elements in a
2 × 2 square array with the value

\[
\det A = |A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}
\]

A determinant of the third order is described in similar fashion. It is a 3 × 3 square
array containing nine elements:

\[
\det A = |A| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}
\]
One may deduce that a determinant of nth order consists of a square array of n × n
elements, aij, and that the total number of elements in an nth-order determinant is
n². Although this representation of the determinant looks to be purely abstract, the
determinant is a well-defined function of its elements that can be evaluated in
a number of ways. Moreover, the value of the use of determinants in the taking of
matrix inverses and in the solution of simultaneous linear algebraic equations cannot
be overemphasized.
B.8.2 Determinant Evaluation
Consider, for example, the following pair of simultaneous linear algebraic equations,
which are presumed to be linearly independent:
\[
a_{11}x_1 + a_{12}x_2 = b_1 \tag{B.1a}
\]
\[
a_{21}x_1 + a_{22}x_2 = b_2 \tag{B.1b}
\]
and observe that they may also be written in the matrix form AX = B:

\[
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}
\]
In eqs. (B.1), the x's are the unknowns and the a's form the coefficient matrix A.
If det A ≠ 0, the equations are linearly independent, and one method of
solving this second-order system is to multiply eq. (B.1a) by a22 and eq. (B.1b) by
a12:
\[
\begin{aligned}
a_{22}a_{11}x_1 + a_{22}a_{12}x_2 &= a_{22}b_1 \\
a_{12}a_{21}x_1 + a_{12}a_{22}x_2 &= a_{12}b_2
\end{aligned}
\]

A subtraction then yields

\[
(a_{22}a_{11} - a_{12}a_{21})x_1 = a_{22}b_1 - a_{12}b_2
\]

and then x1 is obtained:

\[
x_1 = \frac{a_{22}b_1 - a_{12}b_2}{a_{22}a_{11} - a_{12}a_{21}} \tag{B.2a}
\]

A similar procedure yields x2:

\[
x_2 = \frac{a_{11}b_2 - a_{21}b_1}{a_{22}a_{11} - a_{12}a_{21}} \tag{B.2b}
\]
Observe that the denominators of the equations that yield x1 and x2 can be
represented by the determinant

\[
\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{22}a_{11} - a_{12}a_{21}
\]

and it is easy to see that the numerators of these equations can be represented by

\[
\begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix} = b_1 a_{22} - b_2 a_{12}
\]

and

\[
\begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix} = a_{11} b_2 - a_{21} b_1
\]
This must always hold unless the determinant in the denominators is equal to zero,
which is ruled out because the discussion began originally with the statement that the
two equations to be solved were linearly independent.
Thus one may write the solutions for x1 and x2 in eqs. (B.1) as

\[
x_1 = \frac{\begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix}}
{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}} \tag{B.3a}
\]

and

\[
x_2 = \frac{\begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix}}
{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}} \tag{B.3b}
\]

and this is a demonstration of a method of solution of simultaneous linear algebraic
equations known as Cramer's rule.
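Cramer's rule for the second-order system can be sketched directly from eqs. (B.3) (an illustration of our own; the helper names `det2` and `cramer2` and the test system are assumptions, not from the text):

```python
# Second-order determinant: principal-diagonal product minus
# secondary-diagonal product.
def det2(m):
    return m[0][0] * m[1][1] - m[1][0] * m[0][1]

# Cramer's rule for a 2 x 2 system A x = b (eqs. B.3): each unknown is a
# quotient of determinants; det A must be nonzero.
def cramer2(A, b):
    d = det2(A)
    if d == 0:
        raise ValueError("det A = 0: equations are not linearly independent")
    x1 = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / d
    x2 = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / d
    return x1, x2

# Hypothetical system: 3x1 + 4x2 = 11, 2x1 + 5x2 = 12.
print(cramer2([[3, 4], [2, 5]], [11, 12]))  # (1.0, 2.0)
```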
The foregoing reasoning applies equally well to a set of n simultaneous algebraic
equations. For a set of three equations in three unknowns

\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= b_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= b_3
\end{aligned}
\]
which are assumed to be linearly independent and which may be written in matrix
form as

\[
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}
\]
it can be shown that x1 can be evaluated from

\[
x_1 = \frac{b_1 a_{22}a_{33} + b_3 a_{12}a_{23} + b_2 a_{13}a_{32} - b_3 a_{22}a_{13} - b_1 a_{32}a_{23} - b_2 a_{12}a_{33}}
{a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{31}a_{22}a_{13} - a_{32}a_{23}a_{11} - a_{33}a_{21}a_{12}}
\]

Both the numerator and denominator can be rearranged by employing a little
algebra:

\[
x_1 = \frac{b_1(a_{22}a_{33} - a_{32}a_{23}) - b_2(a_{12}a_{33} - a_{32}a_{13}) + b_3(a_{12}a_{23} - a_{22}a_{13})}
{a_{11}(a_{22}a_{33} - a_{32}a_{23}) - a_{21}(a_{12}a_{33} - a_{13}a_{32}) + a_{31}(a_{12}a_{23} - a_{13}a_{22})} \tag{B.4}
\]
and an inspection of the terms within parentheses shows that the solution for x1 can
not only be written (Cramer's rule) as the quotient of two determinants, but each of the
determinants can be represented in terms of three second-order determinants:
\[
x_1 = \frac{\begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix}}
{\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}}
\]

or

\[
x_1 = \frac{b_1 \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}
- b_2 \begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix}
+ b_3 \begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix}}
{a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}
- a_{21} \begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix}
+ a_{31} \begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix}} \tag{B.5}
\]
This expansion is known as the Laplace expansion or Laplace development.
The method of evaluating second-order determinants is suggested in eqs. (B.2)
and (B.3). The second-order determinant is evaluated as the remainder of the product
resulting from the multiplication of the upper left and lower right elements (the
principal diagonal elements) minus the product of the lower left and the upper
right elements (the secondary diagonal elements). This procedure is demonstrated
in Fig. B.1a.
The third-order determinant may be evaluated by taking the products and then the
sums and differences of the elements shown in Fig. B.1b. This procedure may be
assisted by rewriting the first two columns of the determinant and then proceeding as
indicated in Fig. B.1c. It is important to note that for this purpose, the diagonals of
the third-order determinant are continuous; that is, the last column is followed by the
first column.
Caution is necessary: Fourth- and higher-order determinants may not be evaluated
by following the procedures displayed in Fig. B.1. The Laplace expansion or
pivotal condensation, to be discussed presently, must be employed in these cases.
Example B.4. Evaluate the determinants

\[
|A| = \begin{vmatrix} 3 & 4 \\ 2 & 5 \end{vmatrix}
\qquad \text{and} \qquad
|B| = \begin{vmatrix} 4 & 3 & 2 \\ 0 & 3 & 1 \\ 1 & 2 & 1 \end{vmatrix}
\]

SOLUTION. The second-order determinant is evaluated by the procedure indicated
in Fig. B.1a:

\[
|A| = \begin{vmatrix} 3 & 4 \\ 2 & 5 \end{vmatrix} = 3(5) - 2(4) = 15 - 8 = 7
\]
Figure B.1 (a) Procedure for evaluating a second-order determinant; (b) and (c) equivalent
procedures for evaluating a third-order determinant.
The third-order determinant is evaluated in accordance with Fig. B.1b or c as

\[
|B| = \begin{vmatrix} 4 & 3 & 2 \\ 0 & 3 & 1 \\ 1 & 2 & 1 \end{vmatrix}
= 4(3)(1) + 3(1)(1) + 2(0)(2) - 1(3)(2) - 2(1)(4) - 1(0)(3)
\]
or
\[
|B| = 12 + 3 + 0 - 6 - 8 - 0 = 1
\]

Figure B.2 Checkerboard rule for finding the sign of a cofactor of an nth-order determinant
(a) for n odd and (b) for n even.
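The diagonal rule of Fig. B.1 can be coded directly (a sketch of our own; the name `det3` is assumed, and the routine is valid only for 3 × 3 arrays, since the rule does not extend to higher orders):

```python
# Third-order determinant by the diagonal (Fig. B.1) rule: the three
# "downward" diagonal products minus the three "upward" diagonal products.
# Valid only for 3 x 3 arrays; higher orders need the Laplace expansion
# or pivotal condensation.
def det3(m):
    return (m[0][0] * m[1][1] * m[2][2]
            + m[0][1] * m[1][2] * m[2][0]
            + m[0][2] * m[1][0] * m[2][1]
            - m[2][0] * m[1][1] * m[0][2]
            - m[2][1] * m[1][2] * m[0][0]
            - m[2][2] * m[1][0] * m[0][1])

B = [[4, 3, 2], [0, 3, 1], [1, 2, 1]]
print(det3(B))  # 12 + 3 + 0 - 6 - 8 - 0 = 1, as in Example B.4
```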
B.8.3 Pivotal Condensation
The evaluation of a determinant by the Laplace expansion can be a long, tedious,
and laborious procedure. Assuming that third-order determinants can be evaluated
quickly, a fifth-order determinant containing no zero elements requires the evaluation
of 5 × 4 = 20 third-order determinants. For a sixth-order determinant, this number
becomes 6 × 5 × 4 = 120. In general, the evaluation of an nth-order
determinant can require the evaluation of n!/3! third-order determinants.
Pivotal condensation is a much more efficient process. Take the determinant

\[
\det A = \begin{vmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{vmatrix}
\]
The element a11 is selected as the element in the pivotal position. It is called the pivotal
element or merely the pivot in the following development. The objective is to find a
determinant |B| that is one order less than |A| by operating on |A| in such a manner
as to produce a column of zeros in the column containing the pivot. If a11 = 0, a
row or column interchange can be performed to put a nonzero element in the pivotal
position.
The condensation process that brings an nth-order determinant down to an (n − 1)th-
order determinant is continued until the order is reduced to three or two. Then the
evaluation can be accomplished by the methods provided in the preceding section.
The entire condensation procedure can be handled by the computationally efficient
matrix relationship
\[
|A| = \frac{1}{a_{11}^{\,n-2}} \det\left(
a_{11} \begin{bmatrix}
a_{22} & a_{23} & \cdots & a_{2n} \\
a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & & \vdots \\
a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
- \begin{bmatrix} a_{21} \\ a_{31} \\ \vdots \\ a_{n1} \end{bmatrix}
\begin{bmatrix} a_{12} & a_{13} & \cdots & a_{1n} \end{bmatrix}
\right) \tag{B.6}
\]
Example B.5. Use pivotal condensation to evaluate the determinant

\[
|A| = \begin{vmatrix}
2 & -1 & 1 & 1 & 2 \\
0 & 2 & 3 & 2 & 1 \\
0 & 1 & 2 & 1 & 2 \\
0 & 1 & -1 & -1 & 3 \\
0 & 2 & 1 & 1 & -2
\end{vmatrix}
\]
SOLUTION. By pivotal condensation,

\[
|A| = \frac{1}{2^3} \det\left(
2 \begin{bmatrix} 2 & 3 & 2 & 1 \\ 1 & 2 & 1 & 2 \\ 1 & -1 & -1 & 3 \\ 2 & 1 & 1 & -2 \end{bmatrix}
- \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
\begin{bmatrix} -1 & 1 & 1 & 2 \end{bmatrix}
\right)
\]

or

\[
|A| = \frac{1}{8} \det \begin{bmatrix} 4 & 6 & 4 & 2 \\ 2 & 4 & 2 & 4 \\ 2 & -2 & -2 & 6 \\ 4 & 2 & 2 & -4 \end{bmatrix}
\]

Then

\[
|A| = \frac{1}{(8)(4)^2} \det\left(
4 \begin{bmatrix} 4 & 2 & 4 \\ -2 & -2 & 6 \\ 2 & 2 & -4 \end{bmatrix}
- \begin{bmatrix} 2 \\ 2 \\ 4 \end{bmatrix}
\begin{bmatrix} 6 & 4 & 2 \end{bmatrix}
\right)
\]

or

\[
|A| = \frac{1}{128} \det\left(
\begin{bmatrix} 16 & 8 & 16 \\ -8 & -8 & 24 \\ 8 & 8 & -16 \end{bmatrix}
- \begin{bmatrix} 12 & 8 & 4 \\ 12 & 8 & 4 \\ 24 & 16 & 8 \end{bmatrix}
\right)
= \frac{1}{128} \det \begin{bmatrix} 4 & 0 & 12 \\ -20 & -16 & 20 \\ -16 & -8 & -24 \end{bmatrix}
\]

The third-order determinant is easily evaluated:

\[
|A| = \frac{1}{128}(1536 + 0 + 1920 - 3072 - 0 + 640) = \frac{1024}{128} = 8
\]
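The condensation of eq. (B.6) can be applied recursively; the following is a sketch of our own (helper name `det_condense` assumed), using exact rational arithmetic and assuming, as in the text, that the pivot a11 is nonzero at every stage:

```python
from fractions import Fraction as F

# Determinant by pivotal condensation (eq. B.6): replace the nth-order
# determinant by an (n-1)th-order one with entries a11*a_ij - a_i1*a_1j,
# then divide by a11^(n-2). Repeat until a 2 x 2 determinant remains.
def det_condense(M):
    n = len(M)
    A = [[F(x) for x in row] for row in M]
    if n == 1:
        return A[0][0]
    if n == 2:
        return A[0][0] * A[1][1] - A[1][0] * A[0][1]
    pivot = A[0][0]  # assumed nonzero; otherwise interchange rows/columns first
    B = [[pivot * A[i][j] - A[i][0] * A[0][j] for j in range(1, n)]
         for i in range(1, n)]
    return det_condense(B) / pivot ** (n - 2)

A = [[2, -1, 1, 1, 2],
     [0, 2, 3, 2, 1],
     [0, 1, 2, 1, 2],
     [0, 1, -1, -1, 3],
     [0, 2, 1, 1, -2]]
print(det_condense(A))  # 8, as in Example B.5
```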
B.8.4 Additional Properties
Several rules pertaining to the simplification and manipulation of determinants are
presented below without formal proof.
• Interchanging any row (or column) of a determinant with its immediately
adjacent row (or column) alters the sign of the determinant.
• The multiplication of any single row (column) of a determinant by a scalar
constant is equivalent to the multiplication of the entire determinant by the
scalar. Observe that this differs from the multiplication of a matrix by a scalar;
the multiplication of a matrix by a scalar results in the multiplication of each and
every element of the matrix by the scalar.
• If every element in an nth-order determinant is multiplied by the same scalar, α,
the value of the determinant is multiplied by α^n.
• If any two rows (columns) of a determinant are identical, the value of the
determinant is zero and the matrix from which the determinant derives is said to
be singular.
• If any row (or column) of a determinant contains nothing but zeros, the value of
the determinant is zero.
• If any two rows (columns) of a determinant are proportional, the determinant
is equal to zero. In this case, the two rows (columns) are said to be linearly
dependent.
• If the elements of any row (column) of a determinant are added to or subtracted
from the corresponding elements of another row (column), the value of the
determinant is unchanged.
• If the elements of any row (column) of a determinant are multiplied by a constant
and then added to or subtracted from the corresponding elements of another row
(column), the value of the determinant is unchanged.
• The value of the determinant of a diagonal matrix is equal to the product of the
terms on the diagonal.
• The value of the determinant of a matrix is equal to the value of the determinant
of the transpose of the matrix.
• The determinant of the product of two matrices is equal to the product of the
determinants of the two matrices.
• If the determinant of the product of two square matrices is zero, then at least one
of the matrices is singular, that is, the value of its determinant is equal to zero.
B.9 MINORS AND COFACTORS
Consider the nth-order determinant
\[
\det A = \begin{vmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{vmatrix} \tag{B.7}
\]
which will be used for proposing two useful quantities.
The (n − 1)th-order minor of an nth-order determinant |A| is the determinant
formed by deleting one row and one column from |A|. The minor, designated by
|M|ij, is the determinant formed by deleting the ith row and the jth column from |A|.
The cofactor, designated as Aij without vertical rules and with a double subscript,
is the signed (n − 1)th-order minor formed from the nth-order determinant. If the
minor has been formed by deleting the ith row and the jth column from |A|, then

\[
A_{ij} = (-1)^{i+j} |M|_{ij} \tag{B.8}
\]
The sign of the cofactor can be determined from eq. (B.8) or from the checkerboard
rule summarized in Fig. B.2.
Example B.6. Consider the fourth-order determinant

\[
|A| = \begin{vmatrix}
1 & 3 & -1 & 2 \\
4 & 1 & 1 & 3 \\
3 & 1 & -2 & 1 \\
1 & 3 & 2 & 5
\end{vmatrix}
\]
What is the minor and cofactor formed by deleting the third row and fourth column?
SOLUTION

\[
|M|_{34} = \begin{vmatrix} 1 & 3 & -1 \\ 4 & 1 & 1 \\ 1 & 3 & 2 \end{vmatrix}
= 2 + 3 - 12 + 1 - 3 - 24 = -33
\]

The cofactor is the signed minor. By the checkerboard rule of Fig. B.2 or by
eq. (B.8),

\[
A_{34} = (-1)^{3+4}(-33) = -(-33) = 33
\]
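The minor-and-cofactor construction can be sketched as follows (our own illustration; the helper names `minor` and `det3` are assumed, and the diagonal rule is used only because the minor here is 3 × 3):

```python
# Matrix left after deleting row i and column j (0-based indices).
def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

# Third-order determinant by the diagonal rule.
def det3(m):
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0]
            + m[0][2]*m[1][0]*m[2][1] - m[2][0]*m[1][1]*m[0][2]
            - m[2][1]*m[1][2]*m[0][0] - m[2][2]*m[1][0]*m[0][1])

A = [[1, 3, -1, 2],
     [4, 1, 1, 3],
     [3, 1, -2, 1],
     [1, 3, 2, 5]]
# Delete the third row and fourth column (0-based indices 2 and 3).
M34 = det3(minor(A, 2, 3))
A34 = (-1) ** (3 + 4) * M34   # cofactor sign (-1)^(i+j) with 1-based i=3, j=4
print(M34, A34)  # -33 33, as in Example B.6
```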
B.10 COFACTOR MATRIX
A square nth-order matrix
\[
A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
\]
possesses a cofactor matrix with elements indicated by capital letters with double
subscripts:

\[
A^{c} = \begin{bmatrix}
A_{11} & A_{12} & A_{13} & \cdots & A_{1n} \\
A_{21} & A_{22} & A_{23} & \cdots & A_{2n} \\
A_{31} & A_{32} & A_{33} & \cdots & A_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
A_{n1} & A_{n2} & A_{n3} & \cdots & A_{nn}
\end{bmatrix}
\]
Example B.7. Determine the cofactor matrix for the third-order symmetrical matrix

\[
\begin{bmatrix} 3 & -2 & 0 \\ -2 & 4 & -1 \\ 0 & -1 & 6 \end{bmatrix}
\]
SOLUTION. The nine cofactors with signs determined by eq. (B.8) or from the
checkerboard rule in Fig. B.2 are formed from the nine possible second-order minors:

\[
\begin{aligned}
A_{11} &= +|M|_{11} = \begin{vmatrix} 4 & -1 \\ -1 & 6 \end{vmatrix} = 24 - 1 = 23 \\
A_{12} &= -|M|_{12} = -\begin{vmatrix} -2 & -1 \\ 0 & 6 \end{vmatrix} = -(-12) = 12 \\
A_{13} &= +|M|_{13} = \begin{vmatrix} -2 & 4 \\ 0 & -1 \end{vmatrix} = 2 \\
A_{21} &= -|M|_{21} = -\begin{vmatrix} -2 & 0 \\ -1 & 6 \end{vmatrix} = -(-12) = 12 \\
A_{22} &= +|M|_{22} = \begin{vmatrix} 3 & 0 \\ 0 & 6 \end{vmatrix} = 18 \\
A_{23} &= -|M|_{23} = -\begin{vmatrix} 3 & -2 \\ 0 & -1 \end{vmatrix} = -(-3) = 3 \\
A_{31} &= +|M|_{31} = \begin{vmatrix} -2 & 0 \\ 4 & -1 \end{vmatrix} = 2 \\
A_{32} &= -|M|_{32} = -\begin{vmatrix} 3 & 0 \\ -2 & -1 \end{vmatrix} = -(-3) = 3
\end{aligned}
\]
\[
A_{33} = +|M|_{33} = \begin{vmatrix} 3 & -2 \\ -2 & 4 \end{vmatrix} = 12 - 4 = 8
\]

Thus

\[
A^{c} = \begin{bmatrix} 23 & 12 & 2 \\ 12 & 18 & 3 \\ 2 & 3 & 8 \end{bmatrix}
\]
and this confirms that symmetrical matrices possess symmetrical cofactor matrices.
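The full cofactor matrix of a 3 × 3 matrix can be generated by looping eq. (B.8) over every (i, j) position (our own sketch; the helper names are assumptions):

```python
# Second-order determinant.
def det2(m):
    return m[0][0] * m[1][1] - m[1][0] * m[0][1]

# Matrix left after deleting row i and column j (0-based indices).
def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

# Cofactor matrix of a 3 x 3 matrix: A_ij = (-1)^(i+j) |M|_ij (eq. B.8).
def cofactor_matrix3(A):
    return [[(-1) ** (i + j) * det2(minor(A, i, j)) for j in range(3)]
            for i in range(3)]

A = [[3, -2, 0], [-2, 4, -1], [0, -1, 6]]
print(cofactor_matrix3(A))  # [[23, 12, 2], [12, 18, 3], [2, 3, 8]]
```

Note that the result is symmetric, confirming the closing remark of Example B.7.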
B.11 LAPLACE EXPANSION
In the denominator of eq. (B.5), the third-order determinant of a matrix A was shown to be equal to a combination of three second-order determinants:
\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}
= a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}
- a_{21}\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix}
+ a_{31}\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} \qquad (B.9)
Notice that each of the second-order determinants is a second-order minor of A.
This means that three cofactors exist, and hence eq. (B.9) gives a rule for the evaluation
of a third-order determinant which can be extended to an nth-order determinant. For
the ith row,
|A| = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} |M|_{ij}

or

|A| = \sum_{j=1}^{n} a_{ij} A_{ij} \qquad (B.10a)
and for the jth column,
|A| = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} |M|_{ij}

or

|A| = \sum_{i=1}^{n} a_{ij} A_{ij} \qquad (B.10b)
Equations (B.10) describe a procedure known as the Laplace development or Laplace
expansion.
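Equation (B.10a) translates almost verbatim into a recursive routine. The sketch below always expands along the first row and is meant to illustrate the expansion, not to be efficient — the operation count grows factorially with the order.

```python
def det_laplace(A):
    """Determinant of a square list-of-lists matrix by Laplace expansion along row 1."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor |M|_1j: delete row 1 and column j + 1
        M = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_laplace(M)
    return total

A = [[3, -2, 0],
     [-2, 4, -1],
     [0, -1, 6]]
print(det_laplace(A))  # 45
```

Expanding along a different row or column changes the arithmetic but not the result; choosing the row or column with the most zeros minimizes the work.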
Example B.8. Evaluate the determinant
|A| = \begin{vmatrix} 1 & 2 & 3 & 4 \\ 1 & 4 & 3 & 2 \\ 2 & 0 & 3 & 1 \\ 1 & 0 & 2 & 3 \end{vmatrix}
SOLUTION. Expand using the second column to reduce the labor (two zeros occur
in this column):

|A| = a_{12}A_{12} + a_{22}A_{22}
The cofactors derive from the appropriate minors with their sign determined from
eq. (B.8) or from the checkerboard rule illustrated in Fig. B.2.
A_{12} = -|M|_{12} = -\begin{vmatrix} 1 & 3 & 2 \\ 2 & 3 & 1 \\ 1 & 2 & 3 \end{vmatrix} = -(9 + 3 + 8 - 6 - 2 - 18) = -(20 - 26) = 6
and
A_{22} = +|M|_{22} = \begin{vmatrix} 1 & 3 & 4 \\ 2 & 3 & 1 \\ 1 & 2 & 3 \end{vmatrix} = 9 + 3 + 16 - 12 - 2 - 18 = 28 - 32 = -4
The value of the determinant is

|A| = a_{12}A_{12} + a_{22}A_{22} = 2(6) + 4(-4) = 12 - 16 = -4
It should be noted that if the elements of a row or column of a determinant are
multiplied by cofactors of the corresponding elements of a different row or column,
the resulting sum of these products is zero:
\sum_{i=1}^{n} a_{ij} A_{ik} = 0 \qquad (j \neq k) \qquad (B.11a)

and

\sum_{j=1}^{n} a_{ij} A_{kj} = 0 \qquad (i \neq k) \qquad (B.11b)
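This "alien cofactor" property is easy to confirm numerically. The sketch below (NumPy; `cofactor` is our hypothetical helper) pairs the elements of row 1 of the Example B.8 matrix with the cofactors of row 3, and then with their own cofactors:

```python
import numpy as np

def cofactor(A, i, j):
    """Cofactor A_ij = (-1)**(i+j) times the minor (1-based indices)."""
    M = np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)
    return (-1) ** (i + j) * np.linalg.det(M)

A = np.array([[1, 2, 3, 4],
              [1, 4, 3, 2],
              [2, 0, 3, 1],
              [1, 0, 2, 3]], dtype=float)
n = A.shape[0]

# Row 1 elements against the cofactors of row 3: the "alien" sum vanishes
alien = sum(A[0, j - 1] * cofactor(A, 3, j) for j in range(1, n + 1))
# Row 1 elements against their own cofactors: the determinant itself
proper = sum(A[0, j - 1] * cofactor(A, 1, j) for j in range(1, n + 1))
print(int(round(alien)), int(round(proper)))  # 0 -4
```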
B.12 MATRIX INVERSION
An nth-order set of simultaneous linear algebraic equations in n unknowns, x1, x2,
x3, . . ., xn, can be represented conveniently by the matrix equation
AX = Y (B.12)
where A, as indicated in Section B.2, is a square matrix of coefficients having elements
aij and where X and Y are n×1 column vectors with elements xi and yi, respectively.
Because division of matrices is not permitted, one method for the solution of matrix
equations, such as the one shown in eq. (B.12), is called matrix inversion.
If eq. (B.12) is premultiplied by an n × n square matrix B so that
BAX = BY
a solution for the unknowns X will evolve if the product BA is equal to the identity
matrix I:
BAX = IX = BY
or
X = BY (B.13)
If

BA = AB = I

the matrix B is said to be the inverse of A:

B = A^{-1} \qquad (B.14a)

and, of course, the inverse of the inverse is the matrix itself:

A = B^{-1} \qquad (B.14b)

or

(A^{-1})^{-1} = A
It may be recalled that in general, matrix multiplication is not commutative.
The multiplication of a matrix by its inverse is one specific case where matrix
multiplication is commutative:
AA^{-1} = A^{-1}A = I
B.12.1 Properties of the Inverse
The inverse of a product of two matrices is the product of the inverses taken in
reverse order. This is easily proved: consider the product AB and postmultiply it by
B^{-1}A^{-1}. Because matrix multiplication is associative, the parentheses may be
rearranged and the definition of the matrix inverse applied directly:

AB(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I
In addition, the inverse of the transpose of a matrix is equal to the transpose of its
inverse:
(A^T)^{-1} = (A^{-1})^T
Moreover, negative powers of a matrix are related to its inverse:

A^{-n} = (A^{-1})^n
and the determinant of the product of a matrix and its inverse must be equal to unity:
\det(AA^{-1}) = \det I = 1
If a matrix does not possess an inverse, it is said to be singular, but if a matrix
does possess an inverse, the inverse is unique.
The inverse of a product of matrices is equal to the product of the inverses taken
in reverse order:
(AB)^{-1} = B^{-1}A^{-1}
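Each of these properties can be spot-checked with NumPy on a pair of nonsingular matrices (here the coefficient matrices used earlier in the appendix):

```python
import numpy as np

A = np.array([[3., -2., 0.], [-2., 4., -1.], [0., -1., 6.]])
B = np.array([[6., 4., 1.], [2., 7., -2.], [-4., 1., 8.]])

Ainv = np.linalg.inv(A)

# Inverse of the transpose equals transpose of the inverse
print(np.allclose(np.linalg.inv(A.T), Ainv.T))                     # True
# Inverse of a product: product of inverses in reverse order
print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv))  # True
# Determinant of a matrix times its inverse is unity
print(np.isclose(np.linalg.det(A @ Ainv), 1.0))                    # True
# Multiplication by the inverse commutes
print(np.allclose(A @ Ainv, Ainv @ A))                             # True
```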
B.12.2 Adjoint Matrix
The adjoint matrix, which is sometimes called the adjugate matrix and which here
will be referred to merely as the adjoint, applies only to a square matrix and is the
transpose of the cofactor matrix:

adj A = (A^c)^T \qquad (B.15)
and because symmetrical matrices possess symmetrical cofactor matrices, the adjoint
of a symmetrical matrix is the cofactor matrix itself.
The nth-order matrix

A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
has been observed to possess a cofactor matrix,
A^c = \begin{bmatrix}
A_{11} & A_{12} & A_{13} & \cdots & A_{1n} \\
A_{21} & A_{22} & A_{23} & \cdots & A_{2n} \\
A_{31} & A_{32} & A_{33} & \cdots & A_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
A_{n1} & A_{n2} & A_{n3} & \cdots & A_{nn}
\end{bmatrix}
and this cofactor matrix has an adjoint,
adj A = (A^c)^T = \begin{bmatrix}
A_{11} & A_{21} & A_{31} & \cdots & A_{n1} \\
A_{12} & A_{22} & A_{32} & \cdots & A_{n2} \\
A_{13} & A_{23} & A_{33} & \cdots & A_{n3} \\
\vdots & \vdots & \vdots & & \vdots \\
A_{1n} & A_{2n} & A_{3n} & \cdots & A_{nn}
\end{bmatrix}
B.12.3 One Method for the Determination of the Inverse
Suppose that an n × n matrix A is postmultiplied by its adjoint and that the product
is designated as P:
A(\operatorname{adj} A) = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
\begin{bmatrix}
A_{11} & A_{21} & A_{31} & \cdots & A_{n1} \\
A_{12} & A_{22} & A_{32} & \cdots & A_{n2} \\
A_{13} & A_{23} & A_{33} & \cdots & A_{n3} \\
\vdots & \vdots & \vdots & & \vdots \\
A_{1n} & A_{2n} & A_{3n} & \cdots & A_{nn}
\end{bmatrix} = P
The elements of P may be divided into two categories: those that lie on its principal
diagonal of which p22 is typical and those that do not. For the principal diagonal
element p22,
p22 = a21A21 + a22A22 + a23A23 + · · · + a2nA2n
and by eq. (B.10a), it is seen that
p22 = |A|
For the off-diagonal element of which p13 is typical,
p13 = a11A31 + a12A32 + a13A33 + · · · + a1nA3n
and by eq. (B.11a), it is seen that
p13 = 0
Thus the product of A and its adjoint is
A(adj A) =







|A| 0 0 · · · 0
0 |A| 0 · · · 0
0 0 |A| · · · 0
· · · · · · · · ·
0 0 0 · · · |A|







= |A|I
If this is put into the form

A \, \frac{\operatorname{adj} A}{\det A} = I
and compared with

AA^{-1} = I

it becomes evident that the inverse of the matrix A is equal to its adjoint divided by
its determinant:

A^{-1} = \frac{\operatorname{adj} A}{\det A} \qquad (B.16)
Observe that if det A = 0, the inverse of A cannot exist and A is therefore singular.
Thus the necessary and sufficient condition for the matrix A to be singular is det A = 0.
Example B.9. Determine the inverse of the third-order symmetrical matrix
A = \begin{bmatrix} 3 & -2 & 0 \\ -2 & 4 & -1 \\ 0 & -1 & 6 \end{bmatrix}
SOLUTION. In Example B.7 it was shown that the given matrix possesses a cofactor
matrix:
A^c = \begin{bmatrix} 23 & 12 & 2 \\ 12 & 18 & 3 \\ 2 & 3 & 8 \end{bmatrix}
and the reader may verify that the given matrix has a determinant det A = 45.
The given matrix is symmetrical, as is the cofactor matrix. The adjoint (the
transpose of the cofactor matrix) is also symmetrical and is equal to the cofactor
matrix. Thus, by eq. (B.16), the inverse is
A^{-1} = \begin{bmatrix} 23/45 & 4/15 & 2/45 \\ 4/15 & 2/5 & 1/15 \\ 2/45 & 1/15 & 8/45 \end{bmatrix}
which is also observed to be symmetrical.
It is important to note that symmetrical matrices possess symmetrical transposes,
symmetrical cofactor matrices, symmetrical adjoints, and symmetrical inverses.
The evaluation of the inverse can always be concluded with a check on its validity.
In the example just concluded,
AA^{-1} = \begin{bmatrix} 3 & -2 & 0 \\ -2 & 4 & -1 \\ 0 & -1 & 6 \end{bmatrix}
\begin{bmatrix} 23/45 & 4/15 & 2/45 \\ 4/15 & 2/5 & 1/15 \\ 2/45 & 1/15 & 8/45 \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
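Equation (B.16) implements directly: build the cofactor matrix, transpose it to get the adjoint, and divide by the determinant. A minimal NumPy sketch, adequate for small matrices (factorization-based routines such as `numpy.linalg.inv` are preferred in practice):

```python
import numpy as np

def inverse_via_adjoint(A):
    """A^{-1} = adj A / det A, eq. (B.16); fails if A is singular (det A = 0)."""
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("matrix is singular: det A = 0, no inverse exists")
    n = A.shape[0]
    C = np.empty((n, n))  # cofactor matrix
    for i in range(n):
        for j in range(n):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T / d  # the adjoint is the transpose of the cofactor matrix

A = np.array([[3., -2., 0.], [-2., 4., -1.], [0., -1., 6.]])
Ainv = inverse_via_adjoint(A)
print(np.round(Ainv * 45).astype(int))   # 45 A^{-1} recovers the adjoint of Example B.9
print(np.allclose(A @ Ainv, np.eye(3)))  # True -- the validity check AA^{-1} = I
```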
Example B.10. Determine the inverse of the nonsymmetrical second-order matrix
A = \begin{bmatrix} 4 & -1 \\ 1 & 6 \end{bmatrix}
SOLUTION. The matrix has a determinant
det A = 24 + 1 = 25
a cofactor matrix
A^c = \begin{bmatrix} 6 & -1 \\ 1 & 4 \end{bmatrix}
and an adjoint
adj A = \begin{bmatrix} 6 & 1 \\ -1 & 4 \end{bmatrix}
Its inverse is
A^{-1} = \frac{\operatorname{adj} A}{\det A} = \begin{bmatrix} 6/25 & 1/25 \\ -1/25 & 4/25 \end{bmatrix}
This can be verified by the reader, and it is observed that the inverse of a second-
order matrix is obtained by swapping the elements that lie on the principal
diagonal, changing the sign of the off-diagonal elements, and then dividing all
elements by the determinant.
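In symbols, the swap rule says that for A = [[a, b], [c, d]] with ad - bc ≠ 0, A^{-1} = (1/(ad - bc)) [[d, -b], [-c, a]]. A tiny sketch:

```python
def inv2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] by the second-order swap rule."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular 2x2 matrix: ad - bc = 0")
    # Swap the diagonal, negate the off-diagonal, divide by the determinant
    return [[d / det, -b / det], [-c / det, a / det]]

print(inv2(4, -1, 1, 6))  # [[0.24, 0.04], [-0.04, 0.16]], i.e. [[6/25, 1/25], [-1/25, 4/25]]
```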
B.13 NOMENCLATURE
Roman Letter Symbols
A matrix, dimensionless
a element of A, dimensionless
B vector or matrix, dimensionless
b element of B, dimensionless
C matrix, dimensionless
c element of C, dimensionless
det determinant, dimensionless
I identity matrix, dimensionless
i element of I, dimensionless
j counter, dimensionless
m number of rows, dimensionless
n number of columns, dimensionless; order of square matrix
P matrix, dimensionless
p element of P, dimensionless
tr trace of a matrix, dimensionless
V vector, dimensionless
v element of V, dimensionless
X vector, dimensionless
x element of X, dimensionless
Y vector, dimensionless
y element of Y, dimensionless
Greek Letter Symbols
α scalar, dimensionless
β scalar, dimensionless
γ scalar, dimensionless
Roman Letter Superscripts
a augmented
c cofactor
T transpose
Symbolic Superscript
−1 inverse

More Related Content

Similar to Appendix B Matrices And Determinants

For the following matrices, determine a cot of basis vectors for the.pdf
For the following matrices, determine a cot of basis vectors for  the.pdfFor the following matrices, determine a cot of basis vectors for  the.pdf
For the following matrices, determine a cot of basis vectors for the.pdfeyebolloptics
 
Chapter 3: Linear Systems and Matrices - Part 3/Slides
Chapter 3: Linear Systems and Matrices - Part 3/SlidesChapter 3: Linear Systems and Matrices - Part 3/Slides
Chapter 3: Linear Systems and Matrices - Part 3/SlidesChaimae Baroudi
 
Determinants - Mathematics
Determinants - MathematicsDeterminants - Mathematics
Determinants - MathematicsDrishti Bhalla
 
1. Linear Algebra for Machine Learning: Linear Systems
1. Linear Algebra for Machine Learning: Linear Systems1. Linear Algebra for Machine Learning: Linear Systems
1. Linear Algebra for Machine Learning: Linear SystemsCeni Babaoglu, PhD
 
Linear Algebra and Matrix
Linear Algebra and MatrixLinear Algebra and Matrix
Linear Algebra and Matrixitutor
 
Introduction To Matrix
Introduction To MatrixIntroduction To Matrix
Introduction To MatrixAnnie Koh
 
0.3.e,ine,det.
0.3.e,ine,det.0.3.e,ine,det.
0.3.e,ine,det.m2699
 
Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...
Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...
Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...Abdullaا Hajy
 
Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...
Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...
Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...Rwan Kamal
 
University of duhok
University of duhokUniversity of duhok
University of duhokRwan Kamal
 
Engg maths k notes(4)
Engg maths k notes(4)Engg maths k notes(4)
Engg maths k notes(4)Ranjay Kumar
 

Similar to Appendix B Matrices And Determinants (20)

For the following matrices, determine a cot of basis vectors for the.pdf
For the following matrices, determine a cot of basis vectors for  the.pdfFor the following matrices, determine a cot of basis vectors for  the.pdf
For the following matrices, determine a cot of basis vectors for the.pdf
 
1560 mathematics for economists
1560 mathematics for economists1560 mathematics for economists
1560 mathematics for economists
 
Matrices
MatricesMatrices
Matrices
 
Matrices
MatricesMatrices
Matrices
 
Chapter 3: Linear Systems and Matrices - Part 3/Slides
Chapter 3: Linear Systems and Matrices - Part 3/SlidesChapter 3: Linear Systems and Matrices - Part 3/Slides
Chapter 3: Linear Systems and Matrices - Part 3/Slides
 
Matrices & Determinants
Matrices & DeterminantsMatrices & Determinants
Matrices & Determinants
 
Determinants - Mathematics
Determinants - MathematicsDeterminants - Mathematics
Determinants - Mathematics
 
1. Linear Algebra for Machine Learning: Linear Systems
1. Linear Algebra for Machine Learning: Linear Systems1. Linear Algebra for Machine Learning: Linear Systems
1. Linear Algebra for Machine Learning: Linear Systems
 
Linear Algebra and Matrix
Linear Algebra and MatrixLinear Algebra and Matrix
Linear Algebra and Matrix
 
Presentation on matrix
Presentation on matrixPresentation on matrix
Presentation on matrix
 
Matrices & Determinants.pdf
Matrices & Determinants.pdfMatrices & Determinants.pdf
Matrices & Determinants.pdf
 
Linear algebra
Linear algebraLinear algebra
Linear algebra
 
Introduction To Matrix
Introduction To MatrixIntroduction To Matrix
Introduction To Matrix
 
Matrix_PPT.pptx
Matrix_PPT.pptxMatrix_PPT.pptx
Matrix_PPT.pptx
 
0.3.e,ine,det.
0.3.e,ine,det.0.3.e,ine,det.
0.3.e,ine,det.
 
Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...
Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...
Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...
 
Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...
Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...
Matrices and its Applications to Solve Some Methods of Systems of Linear Equa...
 
Matrix Algebra seminar ppt
Matrix Algebra seminar pptMatrix Algebra seminar ppt
Matrix Algebra seminar ppt
 
University of duhok
University of duhokUniversity of duhok
University of duhok
 
Engg maths k notes(4)
Engg maths k notes(4)Engg maths k notes(4)
Engg maths k notes(4)
 

More from Angie Miller

Writing Poetry In The Upper Grades Poetry Lessons,
Writing Poetry In The Upper Grades Poetry Lessons,Writing Poetry In The Upper Grades Poetry Lessons,
Writing Poetry In The Upper Grades Poetry Lessons,Angie Miller
 
ReMarkable 2 Is A 10.3-Inch E-Paper Tablet With A Stylus, Starts At
ReMarkable 2 Is A 10.3-Inch E-Paper Tablet With A Stylus, Starts AtReMarkable 2 Is A 10.3-Inch E-Paper Tablet With A Stylus, Starts At
ReMarkable 2 Is A 10.3-Inch E-Paper Tablet With A Stylus, Starts AtAngie Miller
 
Printable Lined Paper For Kids That Are Soft Harper Blog
Printable Lined Paper For Kids That Are Soft Harper BlogPrintable Lined Paper For Kids That Are Soft Harper Blog
Printable Lined Paper For Kids That Are Soft Harper BlogAngie Miller
 
Writing Your Introduction, Transitions, And Conclusion
Writing Your Introduction, Transitions, And ConclusionWriting Your Introduction, Transitions, And Conclusion
Writing Your Introduction, Transitions, And ConclusionAngie Miller
 
Groundhog Day Writing Paper
Groundhog Day Writing PaperGroundhog Day Writing Paper
Groundhog Day Writing PaperAngie Miller
 
5 Writing Tips To Help Overcome Anxiety Youn
5 Writing Tips To Help Overcome Anxiety Youn5 Writing Tips To Help Overcome Anxiety Youn
5 Writing Tips To Help Overcome Anxiety YounAngie Miller
 
How To Write An Essay In 6 Simple Steps ScoolWork
How To Write An Essay In 6 Simple Steps ScoolWorkHow To Write An Essay In 6 Simple Steps ScoolWork
How To Write An Essay In 6 Simple Steps ScoolWorkAngie Miller
 
Scroll Paper - Cliparts.Co
Scroll Paper - Cliparts.CoScroll Paper - Cliparts.Co
Scroll Paper - Cliparts.CoAngie Miller
 
Hnh Nh Bn, S Tay, Vit, Cng Vic, Ang Lm Vic, Sch, Ngi
Hnh Nh Bn, S Tay, Vit, Cng Vic, Ang Lm Vic, Sch, NgiHnh Nh Bn, S Tay, Vit, Cng Vic, Ang Lm Vic, Sch, Ngi
Hnh Nh Bn, S Tay, Vit, Cng Vic, Ang Lm Vic, Sch, NgiAngie Miller
 
Recycling Essay Essay On Re
Recycling Essay Essay On ReRecycling Essay Essay On Re
Recycling Essay Essay On ReAngie Miller
 
Pin On PAPER SHEETS
Pin On PAPER SHEETSPin On PAPER SHEETS
Pin On PAPER SHEETSAngie Miller
 
Pin By Cloe Einam On Referencing Harvard Referencing, Essay, Essa
Pin By Cloe Einam On Referencing Harvard Referencing, Essay, EssaPin By Cloe Einam On Referencing Harvard Referencing, Essay, Essa
Pin By Cloe Einam On Referencing Harvard Referencing, Essay, EssaAngie Miller
 
Pin Von Carmen Perez De La Cruz Auf German-BRIEF,
Pin Von Carmen Perez De La Cruz Auf German-BRIEF,Pin Von Carmen Perez De La Cruz Auf German-BRIEF,
Pin Von Carmen Perez De La Cruz Auf German-BRIEF,Angie Miller
 
Powerful Quotes To Start Essays. QuotesGram
Powerful Quotes To Start Essays. QuotesGramPowerful Quotes To Start Essays. QuotesGram
Powerful Quotes To Start Essays. QuotesGramAngie Miller
 
Can Essay Writing Services Be Trusted - UK Writing Experts Blog
Can Essay Writing Services Be Trusted - UK Writing Experts BlogCan Essay Writing Services Be Trusted - UK Writing Experts Blog
Can Essay Writing Services Be Trusted - UK Writing Experts BlogAngie Miller
 
The SmARTteacher Resource Writing An Essa
The SmARTteacher Resource Writing An EssaThe SmARTteacher Resource Writing An Essa
The SmARTteacher Resource Writing An EssaAngie Miller
 
Order Paper Writing Help 24
Order Paper Writing Help 24Order Paper Writing Help 24
Order Paper Writing Help 24Angie Miller
 
How To Format A College Application Essay
How To Format A College Application EssayHow To Format A College Application Essay
How To Format A College Application EssayAngie Miller
 
Thanksgiving Printable Worksheets Colorful Fall,
Thanksgiving Printable Worksheets Colorful Fall,Thanksgiving Printable Worksheets Colorful Fall,
Thanksgiving Printable Worksheets Colorful Fall,Angie Miller
 
Writing Paper, Notebook Paper, , (2)
Writing Paper, Notebook Paper, ,  (2)Writing Paper, Notebook Paper, ,  (2)
Writing Paper, Notebook Paper, , (2)Angie Miller
 

More from Angie Miller (20)

Writing Poetry In The Upper Grades Poetry Lessons,
Writing Poetry In The Upper Grades Poetry Lessons,Writing Poetry In The Upper Grades Poetry Lessons,
Writing Poetry In The Upper Grades Poetry Lessons,
 
ReMarkable 2 Is A 10.3-Inch E-Paper Tablet With A Stylus, Starts At
ReMarkable 2 Is A 10.3-Inch E-Paper Tablet With A Stylus, Starts AtReMarkable 2 Is A 10.3-Inch E-Paper Tablet With A Stylus, Starts At
ReMarkable 2 Is A 10.3-Inch E-Paper Tablet With A Stylus, Starts At
 
Printable Lined Paper For Kids That Are Soft Harper Blog
Printable Lined Paper For Kids That Are Soft Harper BlogPrintable Lined Paper For Kids That Are Soft Harper Blog
Printable Lined Paper For Kids That Are Soft Harper Blog
 
Writing Your Introduction, Transitions, And Conclusion
Writing Your Introduction, Transitions, And ConclusionWriting Your Introduction, Transitions, And Conclusion
Writing Your Introduction, Transitions, And Conclusion
 
Groundhog Day Writing Paper
Groundhog Day Writing PaperGroundhog Day Writing Paper
Groundhog Day Writing Paper
 
5 Writing Tips To Help Overcome Anxiety Youn
5 Writing Tips To Help Overcome Anxiety Youn5 Writing Tips To Help Overcome Anxiety Youn
5 Writing Tips To Help Overcome Anxiety Youn
 
How To Write An Essay In 6 Simple Steps ScoolWork
How To Write An Essay In 6 Simple Steps ScoolWorkHow To Write An Essay In 6 Simple Steps ScoolWork
How To Write An Essay In 6 Simple Steps ScoolWork
 
Scroll Paper - Cliparts.Co
Scroll Paper - Cliparts.CoScroll Paper - Cliparts.Co
Scroll Paper - Cliparts.Co
 
Hnh Nh Bn, S Tay, Vit, Cng Vic, Ang Lm Vic, Sch, Ngi
Hnh Nh Bn, S Tay, Vit, Cng Vic, Ang Lm Vic, Sch, NgiHnh Nh Bn, S Tay, Vit, Cng Vic, Ang Lm Vic, Sch, Ngi
Hnh Nh Bn, S Tay, Vit, Cng Vic, Ang Lm Vic, Sch, Ngi
 
Recycling Essay Essay On Re
Recycling Essay Essay On ReRecycling Essay Essay On Re
Recycling Essay Essay On Re
 
Pin On PAPER SHEETS
Pin On PAPER SHEETSPin On PAPER SHEETS
Pin On PAPER SHEETS
 
Pin By Cloe Einam On Referencing Harvard Referencing, Essay, Essa
Pin By Cloe Einam On Referencing Harvard Referencing, Essay, EssaPin By Cloe Einam On Referencing Harvard Referencing, Essay, Essa
Pin By Cloe Einam On Referencing Harvard Referencing, Essay, Essa
 
Pin Von Carmen Perez De La Cruz Auf German-BRIEF,
Pin Von Carmen Perez De La Cruz Auf German-BRIEF,Pin Von Carmen Perez De La Cruz Auf German-BRIEF,
Pin Von Carmen Perez De La Cruz Auf German-BRIEF,
 
Powerful Quotes To Start Essays. QuotesGram
Powerful Quotes To Start Essays. QuotesGramPowerful Quotes To Start Essays. QuotesGram
Powerful Quotes To Start Essays. QuotesGram
 
Can Essay Writing Services Be Trusted - UK Writing Experts Blog
Can Essay Writing Services Be Trusted - UK Writing Experts BlogCan Essay Writing Services Be Trusted - UK Writing Experts Blog
Can Essay Writing Services Be Trusted - UK Writing Experts Blog
 
The SmARTteacher Resource Writing An Essa
The SmARTteacher Resource Writing An EssaThe SmARTteacher Resource Writing An Essa
The SmARTteacher Resource Writing An Essa
 
Order Paper Writing Help 24
Order Paper Writing Help 24Order Paper Writing Help 24
Order Paper Writing Help 24
 
How To Format A College Application Essay
How To Format A College Application EssayHow To Format A College Application Essay
How To Format A College Application Essay
 
Thanksgiving Printable Worksheets Colorful Fall,
Thanksgiving Printable Worksheets Colorful Fall,Thanksgiving Printable Worksheets Colorful Fall,
Thanksgiving Printable Worksheets Colorful Fall,
 
Writing Paper, Notebook Paper, , (2)
Writing Paper, Notebook Paper, ,  (2)Writing Paper, Notebook Paper, ,  (2)
Writing Paper, Notebook Paper, , (2)
 

Recently uploaded

Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfMahmoud M. Sallam
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...M56BOOKSTORE PRODUCT/SERVICE
 
Capitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitolTechU
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxEyham Joco
 
History Class XII Ch. 3 Kinship, Caste and Class (1).pptx
History Class XII Ch. 3 Kinship, Caste and Class (1).pptxHistory Class XII Ch. 3 Kinship, Caste and Class (1).pptx
History Class XII Ch. 3 Kinship, Caste and Class (1).pptxsocialsciencegdgrohi
 
Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...jaredbarbolino94
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxEPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxRaymartEstabillo3
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfSumit Tiwari
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxAvyJaneVismanos
 
MARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized GroupMARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized GroupJonathanParaisoCruz
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
Meghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media ComponentMeghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media ComponentInMediaRes1
 

Recently uploaded (20)

Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdf
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
 
Capitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptx
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptx
 
History Class XII Ch. 3 Kinship, Caste and Class (1).pptx
History Class XII Ch. 3 Kinship, Caste and Class (1).pptxHistory Class XII Ch. 3 Kinship, Caste and Class (1).pptx
History Class XII Ch. 3 Kinship, Caste and Class (1).pptx
 
Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxEPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptx
 
MARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized GroupMARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized Group
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
Meghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media ComponentMeghan Sutherland In Media Res Media Component
Meghan Sutherland In Media Res Media Component
 

Appendix B Matrices And Determinants

  • 1. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 [First Page] [1005], (1) Lines: 0 ——— -15.10126pt ——— Normal Page PgEnds: [1005], (1) APPENDIX B MATRICES AND DETERMINANTS B.1 BASIC CONCEPTS A system of n linear algebraic equations in n unknowns x1, x2, x3, . . . , xn such as a11x1 + a12x2 + a13x3 + · · · + a1nxn = y1 a21x1 + a22x2 + a23x3 + · · · + a2nxn = y2 a31x1 + a32x2 + a33x3 + · · · + a3nxn = y3 · · · · · · an1x1 + an2x2 + an3x3 + · · · + annxn = yn can conveniently be represented by the matrix equation        a11 a12 a13 · · · a1n a21 a22 a23 · · · a2n a31 a32 a33 · · · a3n · · · · · · · · · an1 an2 an3 · · · ann               x1 x2 x3 · · · xn        =        y1 y2 y3 · · · yn        or more simply by AX = Y where A is a rectangular matrix (in this case square) having elements aij and where X and Y are column vectors with elements xi and yi, respectively. The foregoing representations imply that n j=1 aij xi = yi i = 1, 2, 3, . . . , n 1005 Extended Surface Heat Transfer. A. D. Kraus, A. Aziz and J. Welty Copyright © 2001 John Wiley Sons, Inc.
The matrix A is called the coefficient matrix. If it is desired to associate the elements of Y with the coefficient matrix A, one may augment A and define an augmented matrix

\[
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} & y_1 \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} & y_2 \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} & y_3 \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} & y_n
\end{bmatrix}
\]

which has n rows and n + 1 columns. This matrix may be written more simply as the augmented matrix

\[ A^a = [A \mid Y] \]

where the superscript means augmented and where the idea of a partitioned matrix is apparent. For example, in the system of linear algebraic equations

\[
\begin{aligned}
6x_1 + 4x_2 + x_3 &= 16 \\
2x_1 + 7x_2 - 2x_3 &= 12 \\
-4x_1 + x_2 + 8x_3 &= -22
\end{aligned}
\]

the matrix

\[ \begin{bmatrix} 6 & 4 & 1 \\ 2 & 7 & -2 \\ -4 & 1 & 8 \end{bmatrix} \]

is called the coefficient matrix A of the system AX = B, and the matrix

\[ \begin{bmatrix} 6 & 4 & 1 & 16 \\ 2 & 7 & -2 & 12 \\ -4 & 1 & 8 & -22 \end{bmatrix} \]

which contains the constant terms in addition to the elements of A, is called the augmented matrix of the system. Moreover, the unknowns and the constant terms form two column vectors X and B. In the representation AX = B, A is said to premultiply X (A is a premultiplier) and X is said to postmultiply A (X is a postmultiplier).
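Forming the augmented matrix is a one-line row concatenation. A small sketch in plain Python (the helper name `augment` is an illustrative choice):

```python
def augment(A, Y):
    """Append the column vector Y to A, giving the n x (n+1) matrix [A|Y]."""
    return [row + [y] for row, y in zip(A, Y)]

A = [[6, 4, 1], [2, 7, -2], [-4, 1, 8]]
B = [16, 12, -22]
print(augment(A, B))
# [[6, 4, 1, 16], [2, 7, -2, 12], [-4, 1, 8, -22]]
```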
B.2 MATRIX AND VECTOR TERMINOLOGY

A matrix of order m × n,

\[
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn}
\end{bmatrix}
\]

is a rectangular ordered array of a total of mn entries arranged in m rows and n columns. The order of this matrix is m × n, which is often written as (m, n). If m = n, the matrix is square of order n × n (or of order n, or of nth order):

\[
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
\]

In both rectangular and square matrices, a_{ij} is called the (i, j)th element of A. If the matrix is square and i = j, the element is said to define and be located on the principal diagonal. The elements a_{n1}, a_{(n-1)2}, a_{(n-2)3}, ..., a_{1n} are located on and constitute the secondary diagonal. All elements where i ≠ j are considered to be off-diagonal: subdiagonal if i > j and superdiagonal if i < j. The sum of the elements on the principal diagonal of A is called the trace of A:

\[ \operatorname{tr}(A) = \sum_{k=1}^{n} a_{kk} \]

For example, the matrix

\[ A = \begin{bmatrix} 6 & 3 & 0 & 1 \\ -1 & 4 & 1 & 1 \\ -1 & 1 & 8 & -2 \\ 2 & 5 & 2 & 11 \end{bmatrix} \]

is square and is of fourth order (4 × 4). The elements 6, 4, 8, and 11 constitute the principal diagonal and the elements 2, 1, 1, and 1 constitute the secondary diagonal. The element 1 is the a_{23} element, which lies at the intersection of the second row and third column. The trace of A is

\[ \operatorname{tr}(A) = 6 + 4 + 8 + 11 = 29 \]
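The diagonal and trace terminology above can be checked mechanically; a sketch in plain Python using the fourth-order matrix from the text:

```python
A = [[ 6, 3, 0,  1],
     [-1, 4, 1,  1],
     [-1, 1, 8, -2],
     [ 2, 5, 2, 11]]

n = len(A)
principal = [A[i][i] for i in range(n)]          # elements a_11, ..., a_44
secondary = [A[n - 1 - k][k] for k in range(n)]  # elements a_41, a_32, a_23, a_14
trace = sum(principal)
print(principal, secondary, trace)  # [6, 4, 8, 11] [2, 1, 1, 1] 29
```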
A vector is a matrix containing a single row or a single column. If it is a 1 × n matrix (a matrix of order 1 × n), it is a row vector:

\[ V = \begin{bmatrix} v_1 & v_2 & v_3 & \cdots & v_n \end{bmatrix} \]

If the vector is an m × 1 matrix (order m × 1), it is a column vector:

\[ V = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_m \end{bmatrix} \]

This concept and the usual one regarding a vector have certain similarities. These similarities are the reason why the elements of a vector are frequently called components. However, caution is necessary because the usual three-dimensional space does not imply that m or n (for column or row vectors, respectively) is limited to an upper bound of 3.

B.3 SOME SPECIAL MATRICES

An m × n matrix such as the one displayed in Section B.2 is called a null matrix if every element in the matrix is identically equal to zero. For example, the 3 × 4 matrix

\[ \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \]

is null. The transpose of an m × n matrix is an n × m matrix with the rows and columns of the original matrix interchanged. For the 3 × 4 matrix

\[ A = \begin{bmatrix} 4 & 3 & 1 & -2 \\ -2 & 3 & 0 & 1 \\ 1 & -3 & -4 & 2 \end{bmatrix} \]

the transpose is the 4 × 3 matrix

\[ A^T = \begin{bmatrix} 4 & -2 & 1 \\ 3 & 3 & -3 \\ 1 & 0 & -4 \\ -2 & 1 & 2 \end{bmatrix} \]

Note the use of the superscript T to indicate the transpose and recognize that the transpose of the transpose is the original matrix:
\[ [A^T]^T = A \]

The nth-order square matrix

\[
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
\]

is said to be diagonal or a diagonal matrix if a_{ij} = 0 for all i ≠ j:

\[ A = \begin{bmatrix} a_{11} & 0 & 0 & \cdots & 0 \\ 0 & a_{22} & 0 & \cdots & 0 \\ 0 & 0 & a_{33} & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & a_{nn} \end{bmatrix} \]

If all the principal diagonal elements are equal (that is, a_{ii} = α and a_{ij} = 0 for i ≠ j), the resulting matrix is said to be a scalar matrix, which is a diagonal matrix with all principal diagonal elements equal:

\[ A = \begin{bmatrix} \alpha & 0 & 0 & \cdots & 0 \\ 0 & \alpha & 0 & \cdots & 0 \\ 0 & 0 & \alpha & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & \alpha \end{bmatrix} \]

If all α in the scalar matrix are equal to unity (α = 1), the scalar matrix becomes the identity matrix:

\[ I = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix} \]

B.4 MATRIX EQUALITY

A matrix A = [a_{ij}]_{m×n} will be equal to a matrix B = [b_{ij}]_{m×n} if and only if a_{ij} = b_{ij} for all i and j. This essentially states that two matrices will be equal if and only if they are of the same order and corresponding elements are equal.
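The transpose operation of Section B.3 can be sketched in a line of plain Python; the example uses the 3 × 4 matrix from the text and also checks the property [A^T]^T = A:

```python
def transpose(A):
    """Interchange rows and columns: the (i, j) element becomes the (j, i) element."""
    return [list(col) for col in zip(*A)]

A = [[4, 3, 1, -2],
     [-2, 3, 0, 1],
     [1, -3, -4, 2]]
AT = transpose(A)
print(AT)                  # the 4 x 3 transpose shown in the text
assert transpose(AT) == A  # the transpose of the transpose is the original matrix
```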
B.5 MATRIX ADDITION AND SUBTRACTION

A matrix A = [a_{ij}]_{m×n} may be added to a matrix B = [b_{ij}]_{m×n} to form a matrix C = [c_{ij}]_{m×n} = [a_{ij} + b_{ij}]_{m×n}. This points out that in order to form the sum of two matrices, the matrices must be of the same order, and that the elements of the sum are determined by adding the corresponding elements of the matrices forming the sum.

Example B.1. If

\[ A = \begin{bmatrix} 4 & 3 & 2 \\ -6 & 1 & 5 \end{bmatrix} \qquad B = \begin{bmatrix} 3 & -1 & 4 \\ 1 & 0 & 3 \end{bmatrix} \qquad \text{and} \qquad C = \begin{bmatrix} 3 & 2 \\ 4 & -2 \end{bmatrix} \]

find A + B and A + C.

SOLUTION

\[ A + B = \begin{bmatrix} (4+3) & (3-1) & (2+4) \\ (-6+1) & (1+0) & (5+3) \end{bmatrix} = \begin{bmatrix} 7 & 2 & 6 \\ -5 & 1 & 8 \end{bmatrix} \]

The sum A + C does not exist because the order of C does not equal the order of A.

Matrix addition is both commutative and associative:

\[ A + B = B + A \qquad A + (B + C) = (A + B) + C \]

In addition, the sum A + C is equal to the sum B + C if and only if A = B. This is the cancellation law for addition.

The matrix B = [b_{ij}]_{m×n} may be subtracted from the matrix A = [a_{ij}]_{m×n} to form the matrix D = [d_{ij}]_{m×n} = [a_{ij} - b_{ij}]_{m×n}. This indicates that two matrices of the same order may be subtracted by forming the difference between the corresponding elements of the minuend and the subtrahend. Moreover, it is easy to see that if

\[ A + B = C \qquad \text{then} \qquad A = C - B \]
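Example B.1, including the conformability requirement, can be sketched in plain Python (the helper name `madd` is an illustrative choice):

```python
def madd(A, B):
    """Element-by-element sum; the matrices must be of the same order."""
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("matrices are not of the same order")
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[4, 3, 2], [-6, 1, 5]]
B = [[3, -1, 4], [1, 0, 3]]
C = [[3, 2], [4, -2]]
print(madd(A, B))  # [[7, 2, 6], [-5, 1, 8]], as in Example B.1
# madd(A, C) raises ValueError: the sum A + C does not exist
```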
Finally, it may be observed that a square matrix possesses a unique decomposition into the sum of a subdiagonal, a diagonal, and a superdiagonal matrix. For example,

\[ A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 9 & 8 & 7 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 4 & 0 & 0 \\ 9 & 8 & 0 \end{bmatrix} + \begin{bmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 7 \end{bmatrix} + \begin{bmatrix} 0 & 2 & 3 \\ 0 & 0 & 6 \\ 0 & 0 & 0 \end{bmatrix} \]

B.6 MATRIX MULTIPLICATION

A matrix may be multiplied by a scalar or by another matrix. If A = [a_{ij}] and α is a scalar, then

\[ \alpha A = [\alpha a_{ij}] \]

This shows that multiplication by a scalar is commutative and that multiplication by a scalar involves the multiplication of each and every element of the matrix by the scalar. In addition, it is easy to see that

\[ (\alpha + \beta)A = \alpha A + \beta A \qquad \alpha(A + B) = \alpha A + \alpha B \qquad \alpha(\beta A) = (\alpha\beta)A \]

Observe that a scalar matrix is equal to the product of the scalar and the identity matrix. For example,

\[ \begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix} = 3 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \]

A modest effort must be expended to use the terminology multiplication by a scalar in order to avoid confusion with the process known as scalar multiplication. The product of a row vector of order 1 × n and a column vector of order n × 1 forms a 1 × 1 matrix, which has no important property that is not possessed by a scalar. This product is therefore called the scalar or dot product (some sources also use the terminology inner product). It is called for through the use of a dot placed between the two matrices in the product; that is, if A and B are row vectors,

\[ A \cdot B = [a_{ij}]_{1×n} \cdot [b_{ij}]_{1×n} = AB^T = BA^T = \gamma \]

where γ is a scalar obtained from

\[ \gamma = \sum_{k=1}^{n} a_k b_k \]
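The scalar product γ = Σ a_k b_k is a one-liner; a sketch in plain Python, using the two five-component vectors of Example B.2 (the orthogonal pair at the end is an illustrative choice, not from the text):

```python
def dot(a, b):
    """Scalar (dot) product: gamma = sum_k a_k * b_k."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same number of components")
    return sum(x * y for x, y in zip(a, b))

print(dot([2, 4, 3, 1, 2], [-5, 4, -3, 8, -2]))  # 1, as in Example B.2
# A pair whose scalar product is zero is orthogonal:
print(dot([1, 1], [1, -1]))  # 0
```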
If the scalar product of two vectors is identically equal to zero, the vectors are said to be orthogonal.

Example B.2. If

\[ A = \begin{bmatrix} 2 \\ 4 \\ 3 \\ 1 \\ 2 \end{bmatrix} \qquad \text{and} \qquad B = \begin{bmatrix} -5 \\ 4 \\ -3 \\ 8 \\ -2 \end{bmatrix} \]

what is the dot product A · B?

SOLUTION

\[ A \cdot B = 2(-5) + 4(4) + 3(-3) + 1(8) + 2(-2) = -10 + 16 - 9 + 8 - 4 = 1 \]

In Section B.1, a set of linear simultaneous algebraic equations was shown to be represented by the notation AX = Y, where A was the n × n coefficient matrix and X and Y were n × 1 column vectors. In order to obtain the original set of equations from a set where n = 3,

\[ \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} \]

a row-by-column element product-and-sum operation is clearly evident:

\[ \begin{aligned} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= y_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= y_2 \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= y_3 \end{aligned} \]

and it is observed that each element of Y is obtained by multiplying the corresponding elements of a row of A by the elements of X and adding the results. Notice that the foregoing procedure will not be possible if the number of columns of A does not equal the
number of rows of X. In this event there will not always be corresponding elements to multiply. Moreover, it should be noted that Y contains the same number of rows as both A and X. This suggests a general definition for the multiplication of two matrices. If A is m × n and B is p × q, AB = C will exist if n = p, in which case the matrix C will be m × q with elements given by

\[ [c_{ij}]_{m×q} = \sum_{k=1}^{n=p} a_{ik} b_{kj} \qquad i = 1, 2, 3, \ldots, m; \quad j = 1, 2, 3, \ldots, q \]

When n = p, the matrices A and B are said to be conformable for multiplication.

Example B.3. If

\[ A = \begin{bmatrix} -1 & 4 & -2 & 0 \\ 4 & 3 & 2 & 1 \end{bmatrix} \qquad B = \begin{bmatrix} -1 & 1 \\ 3 & 2 \\ -2 & 4 \\ 0 & 3 \end{bmatrix} \qquad \text{and} \qquad C = \begin{bmatrix} 2 & 1 \\ -3 & 4 \end{bmatrix} \]

find AB, BA, and AC.

SOLUTION. The product AB exists because A is 2 × 4 and B is 4 × 2. The result P will be 2 × 2:

\[ P = AB = \begin{bmatrix} (1+12+4+0) & (-1+8-8+0) \\ (-4+9-4+0) & (4+6+8+3) \end{bmatrix} = \begin{bmatrix} 17 & -1 \\ 1 & 21 \end{bmatrix} \]

The product BA also exists:

\[ BA = \begin{bmatrix} -1 & 1 \\ 3 & 2 \\ -2 & 4 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} -1 & 4 & -2 & 0 \\ 4 & 3 & 2 & 1 \end{bmatrix} \]
\[ = \begin{bmatrix} (1+4) & (-4+3) & (2+2) & (0+1) \\ (-3+8) & (12+6) & (-6+4) & (0+2) \\ (2+16) & (-8+12) & (4+8) & (0+4) \\ (0+12) & (0+9) & (0+6) & (0+3) \end{bmatrix} = \begin{bmatrix} 5 & -1 & 4 & 1 \\ 5 & 18 & -2 & 2 \\ 18 & 4 & 12 & 4 \\ 12 & 9 & 6 & 3 \end{bmatrix} \]

Notice that AB ≠ BA, which shows that matrix multiplication, in general, is not commutative. Notice also that the product AC will not exist because A and C are not conformable for multiplication (A is 2 × 4 and C is 2 × 2).

Although the commutative law does not hold, the multiplication of matrices is associative:

\[ (AB)C = A(BC) \]

and matrix multiplication is distributive with respect to addition:

\[ A(B + C) = AB + AC \]

assuming that conformability exists for both addition and multiplication.

If the product AB is null, that is, AB = 0, it cannot be concluded that either A or B is null. Furthermore, if AB = AC or CA = BA, it cannot be concluded that B = C. This means that, in general, cancellation of matrices is not permissible.

The transpose of a product of matrices is equal to the product of the individual transposes taken in reverse order:

\[ (AB)^T = B^T A^T \]
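The row-by-column rule, with its conformability check, can be sketched in plain Python and verified against Example B.3 (the helper name `matmul` is an illustrative choice):

```python
def matmul(A, B):
    """Product of an m x n and a p x q matrix; requires conformability (n = p)."""
    if len(A[0]) != len(B):
        raise ValueError("matrices are not conformable for multiplication")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[-1, 4, -2, 0], [4, 3, 2, 1]]
B = [[-1, 1], [3, 2], [-2, 4], [0, 3]]
print(matmul(A, B))     # [[17, -1], [1, 21]] -- the 2 x 2 product AB of Example B.3
print(matmul(B, A)[0])  # [5, -1, 4, 1]      -- first row of the 4 x 4 product BA
```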
B.7 MATRIX DIVISION AND MATRIX INVERSION

Matrix division is not defined. Instead, use is made of a process called matrix inversion, which relies on the existence of the identity matrix, which is related to a square matrix A by

\[ AI = IA = A \]

Consider the identity for addition, 0, which has the property that for all scalars α,

\[ \alpha + 0 = 0 + \alpha = \alpha \]

and an identity element for multiplication, 1, so that

\[ \alpha 1 = 1\alpha = \alpha \]

The scalar most certainly possesses a reciprocal or multiplicative inverse, 1/α, which when multiplied by α yields the identity element for scalar multiplication:

\[ (1/\alpha)\alpha = \alpha^{-1}\alpha = 1 \]

This reasoning may be extended to the n × n matrix A and the pair of identity matrices: the n × n identity matrix for multiplication I and the n × n identity matrix for addition 0 (a null matrix). Thus, as already noted,

\[ AI = IA = A \qquad \text{and} \qquad A + 0 = 0 + A = A \]

If there is an n × n matrix A^{-1} that pre- and postmultiplies A such that

\[ A^{-1}A = AA^{-1} = I \]

then A^{-1} is an inverse of A with respect to matrix multiplication. The matrix A is said to be invertible or nonsingular if A^{-1} exists and singular if A^{-1} does not exist. For example, the 3 × 3 matrix

\[ A = \begin{bmatrix} 4 & -2 & -1 \\ -2 & 8 & -5 \\ -1 & -5 & 8 \end{bmatrix} \]

can be shown to possess the inverse

\[ A^{-1} = \begin{bmatrix} 13/32 & 7/32 & 3/16 \\ 7/32 & 31/96 & 11/48 \\ 3/16 & 11/48 & 7/24 \end{bmatrix} \]

A simple multiplication will produce the identity matrix:

\[ AA^{-1} = \begin{bmatrix} 4 & -2 & -1 \\ -2 & 8 & -5 \\ -1 & -5 & 8 \end{bmatrix} \begin{bmatrix} 13/32 & 7/32 & 3/16 \\ 7/32 & 31/96 & 11/48 \\ 3/16 & 11/48 & 7/24 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \]

It can also be verified that the identity is produced if the product A^{-1}A is taken.

The inverse of a product of matrices is equal to the product of the individual inverses taken in reverse order:

\[ (AB)^{-1} = B^{-1}A^{-1} \]

B.8 DETERMINANTS

B.8.1 Definitions and Terminology

A square matrix of order n (an n × n matrix)
\[
A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
\]

possesses a uniquely defined scalar (a single number) which is designated as the determinant of A or merely the determinant:

\[ \det A = |A| \]

where the order of the determinant is the same as the order of the matrix from which it derives. Observe that only square matrices possess determinants, that vertical lines and not brackets are used to designate determinants, and that the elements of the determinant are identical to the elements of the matrix:

\[
\det A = \begin{vmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{vmatrix}
\]

A determinant of the first order consists of a single element a and has, therefore, the value det A = a. A determinant of the second order contains four elements in a 2 × 2 square array with the value

\[ \det A = |A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \]

A determinant of the third order is described in similar fashion. It is a 3 × 3 square array containing nine elements:

\[ \det A = |A| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \]

One may deduce that a determinant of nth order consists of a square array of n × n elements a_{ij}, and that the total number of elements in an nth-order determinant is n². Although this representation of the determinant looks to be purely abstract, the determinant can be proven to be a very rational function which can be evaluated in a number of ways. Moreover, the value of the use of determinants in the taking of matrix inverses and in the solution of simultaneous linear algebraic equations cannot be overemphasized.
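The 3 × 3 inverse quoted in Section B.7 can be checked with exact rational arithmetic rather than floating point. A sketch using Python's standard `fractions` module (the helper `matmul` is an illustrative choice):

```python
from fractions import Fraction as F

def matmul(A, B):
    """Row-by-column product of two conformable matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A    = [[4, -2, -1], [-2, 8, -5], [-1, -5, 8]]
Ainv = [[F(13, 32), F(7, 32),  F(3, 16)],
        [F(7, 32),  F(31, 96), F(11, 48)],
        [F(3, 16),  F(11, 48), F(7, 24)]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# AA^-1 = A^-1 A = I, exactly.
assert matmul(A, Ainv) == I and matmul(Ainv, A) == I
```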
B.8.2 Determinant Evaluation

Consider, for example, the following pair of simultaneous linear algebraic equations, which are presumed to be linearly independent:

\[ a_{11}x_1 + a_{12}x_2 = b_1 \tag{B.1a} \]
\[ a_{21}x_1 + a_{22}x_2 = b_2 \tag{B.1b} \]

and observe that they may also be written in the matrix form AX = B:

\[ \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} \]

In eqs. (B.1), the x's are the unknowns and the a's form the coefficient matrix A. If det A ≠ 0, the equations are linearly independent, and one method of solving this second-order system is to multiply eq. (B.1a) by a_{22} and eq. (B.1b) by a_{12}:

\[ \begin{aligned} a_{22}a_{11}x_1 + a_{22}a_{12}x_2 &= a_{22}b_1 \\ a_{12}a_{21}x_1 + a_{12}a_{22}x_2 &= a_{12}b_2 \end{aligned} \]

A subtraction then yields

\[ (a_{22}a_{11} - a_{12}a_{21})x_1 = a_{22}b_1 - a_{12}b_2 \]

and then x_1 is obtained:

\[ x_1 = \frac{a_{22}b_1 - a_{12}b_2}{a_{22}a_{11} - a_{12}a_{21}} \tag{B.2a} \]

A similar procedure yields x_2:

\[ x_2 = \frac{a_{11}b_2 - a_{21}b_1}{a_{22}a_{11} - a_{12}a_{21}} \tag{B.2b} \]

Observe that the denominators of the equations that yield x_1 and x_2 can be represented by the determinant

\[ \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{22}a_{11} - a_{12}a_{21} \]

and it is easy to see that the numerators of these equations can be represented by

\[ \begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix} = b_1a_{22} - b_2a_{12} \qquad \text{and} \qquad \begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix} = a_{11}b_2 - a_{21}b_1 \]
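The second-order solution above is the smallest instance of Cramer's rule; a sketch in plain Python (the sample system x1 + x2 = 3, x1 − x2 = 1 is an illustrative choice, not from the text):

```python
def det2(a, b, c, d):
    """Value of the second-order determinant | a b ; c d |."""
    return a * d - c * b

def cramer2(a11, a12, a21, a22, b1, b2):
    """Solve the 2 x 2 system of eqs. (B.1) via eqs. (B.3)."""
    D = det2(a11, a12, a21, a22)
    if D == 0:
        raise ValueError("the equations are not linearly independent")
    x1 = det2(b1, a12, b2, a22) / D   # numerator: b replaces the first column
    x2 = det2(a11, b1, a21, b2) / D   # numerator: b replaces the second column
    return x1, x2

# x1 + x2 = 3 and x1 - x2 = 1 give x1 = 2, x2 = 1.
print(cramer2(1, 1, 1, -1, 3, 1))  # (2.0, 1.0)
```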
This must always hold unless the determinant in the denominators is equal to zero, which is ruled out because the discussion began originally with the statement that the two equations to be solved were linearly independent. Thus one may write the solutions for x_1 and x_2 in eqs. (B.1) as

\[ x_1 = \frac{\begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix}}{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}} \tag{B.3a} \]

and

\[ x_2 = \frac{\begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix}}{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}} \tag{B.3b} \]

and this is a demonstration of a method of solution of simultaneous linear algebraic equations known as Cramer's rule.

The foregoing reasoning applies equally well to a set of n simultaneous algebraic equations. For a set of three equations in three unknowns,

\[ \begin{aligned} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= b_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= b_2 \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= b_3 \end{aligned} \]

which are assumed to be linearly independent and which may be written in matrix form as

\[ \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} \]

it can be shown that x_1 can be evaluated from

\[ x_1 = \frac{b_1a_{22}a_{33} + b_3a_{12}a_{23} + b_2a_{13}a_{32} - b_3a_{22}a_{13} - b_1a_{32}a_{23} - b_2a_{12}a_{33}}{a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{31}a_{22}a_{13} - a_{32}a_{23}a_{11} - a_{33}a_{21}a_{12}} \]

Both the numerator and denominator can be rearranged by employing a little algebra:

\[ x_1 = \frac{b_1(a_{22}a_{33} - a_{32}a_{23}) - b_2(a_{12}a_{33} - a_{32}a_{13}) + b_3(a_{12}a_{23} - a_{22}a_{13})}{a_{11}(a_{22}a_{33} - a_{32}a_{23}) - a_{21}(a_{12}a_{33} - a_{13}a_{32}) + a_{31}(a_{12}a_{23} - a_{13}a_{22})} \tag{B.4} \]

and an inspection of the terms within parentheses shows that the solution for x_1 not only can be written (Cramer's rule) as the quotient of two determinants, but each of the determinants can be represented in terms of three second-order determinants:
\[ x_1 = \frac{\begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix}}{\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}} \]

or

\[ x_1 = \frac{b_1\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - b_2\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} + b_3\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix}}{a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{21}\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} + a_{31}\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix}} \tag{B.5} \]

This expansion is known as the Laplace expansion or Laplace development.

The method of evaluating second-order determinants is suggested in eqs. (B.2) and (B.3). The second-order determinant is evaluated as the product of the upper left and lower right elements (the principal diagonal elements) minus the product of the lower left and upper right elements (the secondary diagonal elements). This procedure is demonstrated in Fig. B.1a. The third-order determinant may be evaluated by taking the products and then the sums and differences of the elements shown in Fig. B.1b. This procedure may be assisted by rewriting the first two columns of the determinant and then proceeding as indicated in Fig. B.1c. It is important to note that for this purpose, the diagonals of the third-order determinant are continuous; that is, the last column is followed by the first column.

Caution is necessary: Fourth- and higher-order determinants may not be evaluated by following the procedures displayed in Fig. B.1. The Laplace expansion or pivotal condensation, to be discussed presently, must be employed in these cases.

Example B.4. Evaluate the determinants

\[ |A| = \begin{vmatrix} 3 & 4 \\ 2 & 5 \end{vmatrix} \qquad \text{and} \qquad |B| = \begin{vmatrix} 4 & 3 & 2 \\ 0 & 3 & 1 \\ 1 & 2 & 1 \end{vmatrix} \]

SOLUTION. The second-order determinant is evaluated by the procedure indicated in Fig. B.1a:

\[ |A| = \begin{vmatrix} 3 & 4 \\ 2 & 5 \end{vmatrix} = 3(5) - 2(4) = 15 - 8 = 7 \]
[Figure B.1: (a) Procedure for evaluating a second-order determinant; (b) and (c) equivalent procedures for evaluating a third-order determinant.]

The third-order determinant is evaluated in accordance with Fig. B.1b or c as

\[ |B| = \begin{vmatrix} 4 & 3 & 2 \\ 0 & 3 & 1 \\ 1 & 2 & 1 \end{vmatrix} = 4(3)(1) + 3(1)(1) + 2(0)(2) - 1(3)(2) - 2(1)(4) - 1(0)(3) \]

or
[Figure B.2: Checkerboard rule for finding the sign of a cofactor of an nth-order determinant (a) for n odd and (b) for n even.]

\[ |B| = 12 + 3 + 0 - 6 - 8 - 0 = 1 \]

B.8.3 Pivotal Condensation

The evaluation of a determinant by the Laplace expansion can be a long, tedious, and laborious procedure. Assuming that third-order determinants can be evaluated quickly, a fifth-order determinant containing no zero elements requires the evaluation of 5 × 4 = 20 third-order determinants. For a sixth-order determinant, this number becomes 6 × 5 × 4 = 120. In general, the evaluation of an nth-order determinant can require the evaluation of n(n − 1)(n − 2) ⋯ 4 = n!/3! third-order determinants. Pivotal condensation is a much more efficient process.

Take the determinant

\[
\det A = \begin{vmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{vmatrix}
\]

The element a_{11} is selected as the element in the pivotal position. It is called the pivotal element or merely the pivot in the following development. The objective is to find a determinant |B| that is one order less than |A| by operating on |A| in such a manner as to produce a column of zeros in the column containing the pivot. If a_{11} = 0, a row or column interchange can be performed to put a nonzero element in the pivotal position.

The condensation process that brings an nth-order determinant down to an (n − 1)th-order determinant is continued until the order is reduced to three or two. Then the evaluation can be accomplished by the methods provided in the preceding section. The entire condensation procedure can be handled by the computationally efficient matrix relationship

\[
|A| = \frac{1}{a_{11}^{\,n-2}} \det\left( a_{11}\begin{bmatrix} a_{22} & a_{23} & \cdots & a_{2n} \\ a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & & \vdots \\ a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix} - \begin{bmatrix} a_{21} \\ a_{31} \\ \vdots \\ a_{n1} \end{bmatrix} \begin{bmatrix} a_{12} & a_{13} & \cdots & a_{1n} \end{bmatrix} \right) \tag{B.6}
\]
Example B.5. Use pivotal condensation to evaluate the determinant

\[ |A| = \begin{vmatrix} 2 & -1 & 1 & 1 & 2 \\ 0 & 2 & 3 & 2 & 1 \\ 0 & 1 & 2 & 1 & 2 \\ 0 & 1 & -1 & -1 & 3 \\ 0 & 2 & 1 & 1 & -2 \end{vmatrix} \]

SOLUTION. By pivotal condensation,

\[ |A| = \frac{1}{2^3} \det\left( 2\begin{bmatrix} 2 & 3 & 2 & 1 \\ 1 & 2 & 1 & 2 \\ 1 & -1 & -1 & 3 \\ 2 & 1 & 1 & -2 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} -1 & 1 & 1 & 2 \end{bmatrix} \right) \]

or

\[ |A| = \frac{1}{8} \det \begin{bmatrix} 4 & 6 & 4 & 2 \\ 2 & 4 & 2 & 4 \\ 2 & -2 & -2 & 6 \\ 4 & 2 & 2 & -4 \end{bmatrix} \]

Then

\[ |A| = \frac{1}{(8)(4)^2} \det\left( 4\begin{bmatrix} 4 & 2 & 4 \\ -2 & -2 & 6 \\ 2 & 2 & -4 \end{bmatrix} - \begin{bmatrix} 2 \\ 2 \\ 4 \end{bmatrix} \begin{bmatrix} 6 & 4 & 2 \end{bmatrix} \right) \]

or

\[ |A| = \frac{1}{128} \det\left( \begin{bmatrix} 16 & 8 & 16 \\ -8 & -8 & 24 \\ 8 & 8 & -16 \end{bmatrix} - \begin{bmatrix} 12 & 8 & 4 \\ 12 & 8 & 4 \\ 24 & 16 & 8 \end{bmatrix} \right) = \frac{1}{128} \det \begin{bmatrix} 4 & 0 & 12 \\ -20 & -16 & 20 \\ -16 & -8 & -24 \end{bmatrix} \]

The third-order determinant is easily evaluated:

\[ |A| = \frac{1}{128}(1536 + 0 + 1920 - 3072 - 0 + 640) = \frac{1024}{128} = 8 \]
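The condensation relationship of eq. (B.6) can be applied recursively; a sketch in plain Python that reproduces Example B.5 (this version assumes each pivot is nonzero and does not implement the row interchange; exact fractions absorb the 1/a11^(n−2) factor):

```python
from fractions import Fraction as F

def det_condense(A):
    """Pivotal condensation, eq. (B.6): each pass reduces the order by one."""
    n = len(A)
    if n == 1:
        return A[0][0]
    if n == 2:
        return A[0][0] * A[1][1] - A[1][0] * A[0][1]
    p = A[0][0]  # the pivot, assumed nonzero in this sketch
    # B_ij = a11 * a_ij - a_i1 * a_1j over rows/columns 2..n
    B = [[p * A[i][j] - A[i][0] * A[0][j] for j in range(1, n)]
         for i in range(1, n)]
    return det_condense(B) / F(p) ** (n - 2)

A = [[2, -1, 1, 1, 2],
     [0, 2, 3, 2, 1],
     [0, 1, 2, 1, 2],
     [0, 1, -1, -1, 3],
     [0, 2, 1, 1, -2]]
print(det_condense(A))  # 8, as in Example B.5
```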
B.8.4 Additional Properties

Several rules pertaining to the simplification and manipulation of determinants are presented below without formal proof.

• Interchanging any row (or column) of a determinant with its immediately adjacent row (or column) alters the sign of the determinant.
• The multiplication of any single row (column) of a determinant by a scalar constant is equivalent to the multiplication of the entire determinant by the scalar. Observe that this differs from the multiplication of a matrix by a scalar; the multiplication of a matrix by a scalar results in the multiplication of each and every element of the matrix by the scalar.
• If every element in an nth-order determinant is multiplied by the same scalar, α, the value of the determinant is multiplied by α^n.
• If any two rows (columns) of a determinant are identical, the value of the determinant is zero and the matrix from which the determinant derives is said to be singular.
• If any row (or column) of a determinant contains nothing but zeros, the value of the determinant is zero.
• If any two rows (columns) of a determinant are proportional, the determinant is equal to zero. In this case, the two rows (columns) are said to be linearly dependent.
• If the elements of any row (column) of a determinant are added to or subtracted from the corresponding elements of another row (column), the value of the determinant is unchanged.
• If the elements of any row (column) of a determinant are multiplied by a constant and then added to or subtracted from the corresponding elements of another row (column), the value of the determinant is unchanged.
• The value of the determinant of a diagonal matrix is equal to the product of the terms on the diagonal.
• The value of the determinant of a matrix is equal to the value of the determinant of the transpose of the matrix.
• The determinant of the product of two matrices is equal to the product of the determinants of the two matrices.
• If the determinant of the product of two square matrices is zero, then at least one of the matrices is singular; that is, the value of its determinant is equal to zero.

B.9 MINORS AND COFACTORS

Consider the nth-order determinant
\[
\det A = \begin{vmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{vmatrix} \tag{B.7}
\]

which will be used for proposing two useful quantities. The (n − 1)th-order minor of an nth-order determinant |A| is the determinant formed by deleting one row and one column from |A|. The minor, designated by |M|_{ij}, is the determinant formed by deleting the ith row and the jth column from |A|.

The cofactor, designated as A_{ij} without vertical rules and with a double subscript, is the signed (n − 1)th-order minor formed from the nth-order determinant. If the minor has been formed by deleting the ith row and the jth column from |A|, then

\[ A_{ij} = (-1)^{i+j} |M|_{ij} \tag{B.8} \]

The sign of the cofactor can be determined from eq. (B.8) or from the checkerboard rule summarized in Fig. B.2.

Example B.6. Consider the fourth-order determinant

\[ |A| = \det \begin{bmatrix} 1 & 3 & -1 & 2 \\ 4 & 1 & 1 & 3 \\ 3 & 1 & -2 & 1 \\ 1 & 3 & 2 & 5 \end{bmatrix} \]

What is the minor and cofactor formed by deleting the third row and fourth column?

SOLUTION

\[ |M|_{34} = \det \begin{bmatrix} 1 & 3 & -1 \\ 4 & 1 & 1 \\ 1 & 3 & 2 \end{bmatrix} = 2 + 3 - 12 + 1 - 3 - 24 = -33 \]

The cofactor is the signed minor. By the checkerboard rule of Fig. B.2 or by eq. (B.8),

\[ A_{34} = (-1)^{3+4}(-33) = -(-33) = 33 \]

B.10 COFACTOR MATRIX

A square nth-order matrix
\[
A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}
\]

possesses a cofactor matrix with elements indicated by capital letters with double subscripts:

\[
A^c = \begin{bmatrix}
A_{11} & A_{12} & A_{13} & \cdots & A_{1n} \\
A_{21} & A_{22} & A_{23} & \cdots & A_{2n} \\
A_{31} & A_{32} & A_{33} & \cdots & A_{3n} \\
\vdots & \vdots & \vdots & & \vdots \\
A_{n1} & A_{n2} & A_{n3} & \cdots & A_{nn}
\end{bmatrix}
\]

Example B.7. Determine the cofactor matrix for the third-order symmetrical matrix

\[ \begin{bmatrix} 3 & -2 & 0 \\ -2 & 4 & -1 \\ 0 & -1 & 6 \end{bmatrix} \]

SOLUTION. The nine cofactors with signs determined by eq. (B.8) or from the checkerboard rule in Fig. B.2 are formed from the nine possible second-order minors:

\[ A_{11} = +|M|_{11} = \begin{vmatrix} 4 & -1 \\ -1 & 6 \end{vmatrix} = 24 - 1 = 23 \]
\[ A_{12} = -|M|_{12} = -\begin{vmatrix} -2 & -1 \\ 0 & 6 \end{vmatrix} = -(-12) = 12 \]
\[ A_{13} = +|M|_{13} = \begin{vmatrix} -2 & 4 \\ 0 & -1 \end{vmatrix} = 2 \]
\[ A_{21} = -|M|_{21} = -\begin{vmatrix} -2 & 0 \\ -1 & 6 \end{vmatrix} = -(-12) = 12 \]
\[ A_{22} = +|M|_{22} = \begin{vmatrix} 3 & 0 \\ 0 & 6 \end{vmatrix} = 18 \]
\[ A_{23} = -|M|_{23} = -\begin{vmatrix} 3 & -2 \\ 0 & -1 \end{vmatrix} = -(-3) = 3 \]
\[ A_{31} = +|M|_{31} = \begin{vmatrix} -2 & 0 \\ 4 & -1 \end{vmatrix} = 2 \]
\[ A_{32} = -|M|_{32} = -\begin{vmatrix} 3 & 0 \\ -2 & -1 \end{vmatrix} = -(-3) = 3 \]
\[ A_{33} = +|M|_{33} = \begin{vmatrix} 3 & -2 \\ -2 & 4 \end{vmatrix} = 12 - 4 = 8 \]

Thus

\[ A^c = \begin{bmatrix} 23 & 12 & 2 \\ 12 & 18 & 3 \\ 2 & 3 & 8 \end{bmatrix} \]

and this confirms that symmetrical matrices possess symmetrical cofactor matrices.

B.11 LAPLACE EXPANSION

In the denominator of eq. (B.5), the third-order determinant of a matrix A was shown to be equal to some function of three second-order determinants:

\[ \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{21}\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} + a_{31}\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} \tag{B.9} \]

Notice that each of the second-order determinants is a second-order minor of A. This means that three cofactors exist, and hence eq. (B.9) gives a rule for the evaluation of a third-order determinant which can be extended to an nth-order determinant. For the ith row,

\[ |A| = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} |M|_{ij} \qquad \text{or} \qquad |A| = \sum_{j=1}^{n} a_{ij} A_{ij} \tag{B.10a} \]

and for the jth column,

\[ |A| = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} |M|_{ij} \qquad \text{or} \qquad |A| = \sum_{i=1}^{n} a_{ij} A_{ij} \tag{B.10b} \]

Equations (B.10) describe a procedure known as the Laplace development or Laplace expansion.
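Equation (B.10a) applied recursively evaluates a determinant of any order; a sketch in plain Python, expanding along the first row and checked against Example B.8 below:

```python
def det(A):
    """Laplace expansion, eq. (B.10a), along the first row (i = 1)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1, column j+1
        total += (-1) ** j * A[0][j] * det(minor)         # sign (-1)^(i+j) with i = 1
    return total

A = [[1, 2, 3, 4],
     [1, 4, 3, 2],
     [2, 0, 3, 1],
     [1, 0, 2, 3]]
print(det(A))  # -4, agreeing with Example B.8
```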
Example B.8. Evaluate the determinant

\[ |A| = \begin{vmatrix} 1 & 2 & 3 & 4 \\ 1 & 4 & 3 & 2 \\ 2 & 0 & 3 & 1 \\ 1 & 0 & 2 & 3 \end{vmatrix} \]

SOLUTION. Expand using the second column to reduce the labor (two zeros occur in this column):

\[ |A| = a_{12}A_{12} + a_{22}A_{22} \]

The cofactors derive from the appropriate minors with their sign determined from eq. (B.8) or from the checkerboard rule illustrated in Fig. B.2:

\[ A_{12} = -|M|_{12} = -\begin{vmatrix} 1 & 3 & 2 \\ 2 & 3 & 1 \\ 1 & 2 & 3 \end{vmatrix} = -(9 + 3 + 8 - 6 - 2 - 18) = -(20 - 26) = 6 \]

and

\[ A_{22} = +|M|_{22} = \begin{vmatrix} 1 & 3 & 4 \\ 2 & 3 & 1 \\ 1 & 2 & 3 \end{vmatrix} = 9 + 3 + 16 - 12 - 2 - 18 = 28 - 32 = -4 \]

The value of the determinant is

\[ |A| = a_{12}A_{12} + a_{22}A_{22} = 2(6) + 4(-4) = 12 - 16 = -4 \]

It should be noted that if the elements of a row or column of a determinant are multiplied by the cofactors of the corresponding elements of a different row or column, the resulting sum of these products is zero:

\[ \sum_{j=1}^{n} a_{ij} A_{kj} = 0 \qquad (i \ne k) \tag{B.11a} \]

and

\[ \sum_{i=1}^{n} a_{ij} A_{ik} = 0 \qquad (j \ne k) \tag{B.11b} \]

B.12 MATRIX INVERSION

An nth-order set of simultaneous linear algebraic equations in n unknowns, x_1, x_2, x_3, ..., x_n, can be represented conveniently by the matrix equation

\[ AX = Y \tag{B.12} \]
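The cofactor arithmetic of Example B.7 can be automated; the sketch below (plain Python; the helper names are illustrative choices) builds the full cofactor matrix and also forms its transpose, which Section B.12.2 introduces as the adjoint:

```python
def det2(m):
    """Second-order determinant of a 2 x 2 list of rows."""
    return m[0][0] * m[1][1] - m[1][0] * m[0][1]

def cofactor_matrix(A):
    """Cofactor matrix of a 3 x 3 matrix: A_ij = (-1)^(i+j) |M|_ij."""
    n = len(A)
    def minor(i, j):
        return det2([row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i])
    return [[(-1) ** (i + j) * minor(i, j) for j in range(n)] for i in range(n)]

def adjoint(A):
    """Transpose of the cofactor matrix."""
    return [list(col) for col in zip(*cofactor_matrix(A))]

A = [[3, -2, 0], [-2, 4, -1], [0, -1, 6]]
Ac = cofactor_matrix(A)
print(Ac)                # [[23, 12, 2], [12, 18, 3], [2, 3, 8]], as in Example B.7
print(adjoint(A) == Ac)  # True: a symmetrical matrix has a symmetrical cofactor matrix
```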
where A, as indicated in Section B.2, is a square matrix of coefficients having elements a_{ij} and where X and Y are n × 1 column vectors with elements x_i and y_i, respectively. Because division of matrices is not defined, one method for the solution of matrix equations such as eq. (B.12) is called matrix inversion. If eq. (B.12) is premultiplied by an n × n square matrix B so that

BAX = BY

a solution for the unknowns X will evolve if the product BA is equal to the identity matrix I:

BAX = IX = BY

or

X = BY \qquad (B.13)

If BA = AB = I, the matrix B is said to be the inverse of A:

B = A^{-1} \qquad (B.14a)

and, of course, the inverse of the inverse is the matrix itself:

A = B^{-1} \qquad (B.14b)

or

(A^{-1})^{-1} = A

It may be recalled that, in general, matrix multiplication is not commutative. The multiplication of a matrix by its inverse is one specific case where matrix multiplication is commutative:

AA^{-1} = A^{-1}A = I

B.12.1 Properties of the Inverse

The inverse of a product of two matrices is the product of the inverses taken in reverse order. This is easily proved. Consider the product AB and postmultiply it by B^{-1}A^{-1}. Because matrix multiplication is associative, the parentheses may be rearranged, and straightforward application of the definition of the matrix inverse then gives

AB(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I

In addition, the inverse of the transpose of a matrix is equal to the transpose of its inverse:
(A^T)^{-1} = (A^{-1})^T

Negative powers of a matrix are related to its inverse:

A^{-n} = (A^{-1})^n

and the determinant of the product of a matrix and its inverse must be equal to unity:

\det(AA^{-1}) = \det I = 1

If a matrix does not possess an inverse, it is said to be singular; if a matrix does possess an inverse, the inverse is unique. The inverse of a product of matrices is equal to the product of the inverses taken in reverse order:

(AB)^{-1} = B^{-1}A^{-1}

B.12.2 Adjoint Matrix

The adjoint matrix, which is sometimes called the adjugate matrix and which here will be referred to merely as the adjoint, applies only to a square matrix and is the transpose of the cofactor matrix:

\operatorname{adj} A = (A^c)^T \qquad (B.15)

Because symmetrical matrices possess symmetrical cofactor matrices, the adjoint of a symmetrical matrix is the cofactor matrix itself. The nth-order matrix

A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix}

has been observed to possess a cofactor matrix,

A^c = \begin{bmatrix} A_{11} & A_{12} & A_{13} & \cdots & A_{1n} \\ A_{21} & A_{22} & A_{23} & \cdots & A_{2n} \\ A_{31} & A_{32} & A_{33} & \cdots & A_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ A_{n1} & A_{n2} & A_{n3} & \cdots & A_{nn} \end{bmatrix}

and this cofactor matrix has an adjoint,
\operatorname{adj} A = (A^c)^T = \begin{bmatrix} A_{11} & A_{21} & A_{31} & \cdots & A_{n1} \\ A_{12} & A_{22} & A_{32} & \cdots & A_{n2} \\ A_{13} & A_{23} & A_{33} & \cdots & A_{n3} \\ \vdots & \vdots & \vdots & & \vdots \\ A_{1n} & A_{2n} & A_{3n} & \cdots & A_{nn} \end{bmatrix}

B.12.3 One Method for the Determination of the Inverse

Suppose that an n × n matrix A is postmultiplied by its adjoint and that the product is designated as P:

A(\operatorname{adj} A) = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} A_{11} & A_{21} & A_{31} & \cdots & A_{n1} \\ A_{12} & A_{22} & A_{32} & \cdots & A_{n2} \\ A_{13} & A_{23} & A_{33} & \cdots & A_{n3} \\ \vdots & \vdots & \vdots & & \vdots \\ A_{1n} & A_{2n} & A_{3n} & \cdots & A_{nn} \end{bmatrix} = P

The elements of P may be divided into two categories: those that lie on the principal diagonal, of which p_{22} is typical, and those that do not. For the principal diagonal element p_{22},

p_{22} = a_{21}A_{21} + a_{22}A_{22} + a_{23}A_{23} + \cdots + a_{2n}A_{2n}

and by eq. (B.10a), it is seen that

p_{22} = |A|

For an off-diagonal element, of which p_{13} is typical,

p_{13} = a_{11}A_{31} + a_{12}A_{32} + a_{13}A_{33} + \cdots + a_{1n}A_{3n}

and by eq. (B.11a), it is seen that

p_{13} = 0

Thus the product of A and its adjoint is

A(\operatorname{adj} A) = \begin{bmatrix} |A| & 0 & 0 & \cdots & 0 \\ 0 & |A| & 0 & \cdots & 0 \\ 0 & 0 & |A| & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & |A| \end{bmatrix} = |A|\, I

If this is put into the form

A \, \frac{\operatorname{adj} A}{\det A} = I
and compared with

AA^{-1} = I

it becomes evident that the inverse of the matrix A is equal to its adjoint divided by its determinant:

A^{-1} = \frac{\operatorname{adj} A}{\det A} \qquad (B.16)

Observe that if det A = 0, the inverse of A cannot exist and A is therefore singular. Thus the necessary condition for the matrix A to be singular is det A = 0.

Example B.9. Determine the inverse of the third-order symmetrical matrix

A = \begin{bmatrix} 3 & -2 & 0 \\ -2 & 4 & -1 \\ 0 & -1 & 6 \end{bmatrix}

SOLUTION. In Example B.7 it was shown that the given matrix possesses the cofactor matrix

A^c = \begin{bmatrix} 23 & 12 & 2 \\ 12 & 18 & 3 \\ 2 & 3 & 8 \end{bmatrix}

and the reader may verify that the given matrix has the determinant det A = 45. The given matrix is symmetrical, as is the cofactor matrix. The adjoint (the transpose of the cofactor matrix) is also symmetrical and is equal to the cofactor matrix. Thus, by eq. (B.16), the inverse is

A^{-1} = \begin{bmatrix} 23/45 & 4/15 & 2/45 \\ 4/15 & 2/5 & 1/15 \\ 2/45 & 1/15 & 8/45 \end{bmatrix}

which is also observed to be symmetrical. It is important to note that symmetrical matrices possess symmetrical transposes, symmetrical cofactor matrices, symmetrical adjoints, and symmetrical inverses.

The evaluation of the inverse can always be concluded with a check on its validity. In the example just concluded,

AA^{-1} = \begin{bmatrix} 3 & -2 & 0 \\ -2 & 4 & -1 \\ 0 & -1 & 6 \end{bmatrix} \begin{bmatrix} 23/45 & 4/15 & 2/45 \\ 4/15 & 2/5 & 1/15 \\ 2/45 & 1/15 & 8/45 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
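The chain from cofactors to adjoint to inverse used in Example B.9 can be sketched in Python. This is an illustrative fragment with assumed helper names (`det`, `adjoint`, `inverse`), not part of the text; exact rational arithmetic via `fractions` reproduces the entries of A^{-1} without roundoff.

```python
from fractions import Fraction

def det(M):
    # determinant by first-row Laplace expansion, eq. (B.10a)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def adjoint(M):
    # adj M = (M^c)^T, eq. (B.15): transpose of the cofactor matrix
    n = len(M)
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j + 1:]
                                   for k, r in enumerate(M) if k != i])
            for j in range(n)] for i in range(n)]
    return [list(col) for col in zip(*cof)]

def inverse(M):
    # eq. (B.16): M^{-1} = adj M / det M (det M must be nonzero)
    d = det(M)
    return [[Fraction(a, d) for a in row] for row in adjoint(M)]

A = [[3, -2, 0], [-2, 4, -1], [0, -1, 6]]   # the matrix of Example B.9
Ainv = inverse(A)
print(Ainv[0])   # first row: 23/45, 4/15, 2/45

# closing check, as in the text: AA^{-1} must be the identity
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
print(prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # True
```

For a symmetrical matrix such as this one, `adjoint(A)` returns the cofactor matrix unchanged, which is exactly the property noted in the solution above.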
Example B.10. Determine the inverse of the nonsymmetrical second-order matrix

A = \begin{bmatrix} 4 & -1 \\ 1 & 6 \end{bmatrix}

SOLUTION. The matrix has the determinant

\det A = 24 + 1 = 25

the cofactor matrix

A^c = \begin{bmatrix} 6 & -1 \\ 1 & 4 \end{bmatrix}

and the adjoint

\operatorname{adj} A = \begin{bmatrix} 6 & 1 \\ -1 & 4 \end{bmatrix}

Its inverse is

A^{-1} = \frac{\operatorname{adj} A}{\det A} = \begin{bmatrix} 6/25 & 1/25 \\ -1/25 & 4/25 \end{bmatrix}

This can be verified by the reader. Observe that the inverse of a second-order matrix is obtained by swapping the elements that lie on the principal diagonal, changing the sign of the off-diagonal elements, and then dividing all elements by the determinant.

B.13 NOMENCLATURE

Roman Letter Symbols
A     matrix, dimensionless
a     element of A, dimensionless
B     vector or matrix, dimensionless
b     element of B, dimensionless
C     matrix, dimensionless
c     element of C, dimensionless
det   determinant, dimensionless
I     identity matrix, dimensionless
i     element of I, dimensionless
j     counter, dimensionless
m     number of rows, dimensionless
n     number of columns, dimensionless; order of square matrix
P     matrix, dimensionless
p     element of P, dimensionless
tr    trace of a matrix, dimensionless
V     vector, dimensionless
v     element of V, dimensionless
X     vector, dimensionless
x     element of X, dimensionless
Y     vector, dimensionless
y     element of Y, dimensionless

Greek Letter Symbols
α     scalar, dimensionless
β     scalar, dimensionless
γ     scalar, dimensionless

Roman Letter Superscripts
a     augmented
c     cofactor
T     transpose

Symbolic Superscript
−1    inverse
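As a closing illustration, the second-order rule noted in Example B.10 (swap the principal diagonal, negate the off-diagonal elements, divide by the determinant) can be written as a small Python helper. The name `inv2` is hypothetical, and the fragment is a sketch rather than part of the text.

```python
from fractions import Fraction

def inv2(A):
    # second-order rule from Example B.10: swap the principal-diagonal
    # elements, change the sign of the off-diagonal elements, and divide
    # every element by det A
    (a, b), (c, d) = A
    det = a * d - b * c   # must be nonzero, or A is singular
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

Ainv = inv2([[4, -1], [1, 6]])   # the matrix of Example B.10
print(Ainv)   # rows: 6/25, 1/25 and -1/25, 4/25
```

Because `Fraction` keeps the entries exact, the result matches the inverse obtained from eq. (B.16) term for term.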