Matrix and Determinants
BY DR. CHITRITA DASGUPTA
Tentative syllabus
▶ Module 1: Calculus (Differentiation): Rolle’s Theorem, Mean
Value Theorems, Taylor’s and Maclaurin’s Theorems with
Remainders; Taylor's Series, Series for Exponential,
Trigonometric and Logarithm Functions; Indeterminate forms
and L' Hospital's Rule; Maxima and Minima; Evolutes and
Involutes.
▶ Module 2: Calculus (Integration): Evaluation of Definite and
Improper Integrals; Beta and Gamma Functions and their
properties; Applications of Definite Integrals to evaluate
surface areas and volumes of revolutions.
Tentative syllabus
▶ Module 3: Multivariable Calculus (Differentiation): Limit,
Continuity and Partial Derivatives; Homogeneous Functions,
Euler’s Theorem of first and second order (Statement only);
Change of variables, Composite function, Derivative of
implicit functions, Total Derivative; Jacobian; Maxima,
Minima and Saddle points; Method of Lagrange multipliers;
Gradient, Directional Derivatives, Tangent Plane and Normal
Line, Curl and Divergence.
Tentative syllabus
▶ Module 4: Matrices and Determinants: Matrices, Addition
and Scalar Multiplication, Matrix Multiplication; Symmetric
and Skew -symmetric Matrices; Hermitian and Skew -
Hermitian Matrices; Determinants, Cramer’s Rule; Inverse
of a Matrix; Orthogonal Matrices; Gauss -Jordan Method
to find the inverse of a matrix; Linear Systems of
Equations, Rank of a Matrix. Eigenvalues and
Eigenvectors; Eigenvalues of some special matrices;
Cayley -Hamilton Theorem; Similarity Matrix,
Diagonalization of matrices.
Tentative syllabus
▶ Module 5: Sequences and Series: Basic ideas on Sequence; Concept of Monotonic and Bounded sequence; Convergence and Divergence of Sequence; Algebra of Sequences (Statement only). Basic idea of an Infinite Series; Notion of Convergence and Divergence; Series of Positive Terms - Convergence of infinite G.P. series and p-series (Statement only); Tests of Convergence [Statement only] – Comparison Test, Integral Test, D'Alembert's Ratio Test, Raabe's Test and Cauchy's Root test. Alternating Series - Leibnitz's test [Statement only], Absolute and Conditional Convergence.
Module-4: Matrices and Determinants
▶ Definition: A matrix is a representation of a rectangular array of 𝑚𝑛 elements 𝑎𝑖𝑗 (∈ ℝ) arranged into 𝑚 rows and 𝑛 columns.
▶ Representation of an 𝑚 × 𝑛 matrix: 𝐴 = (𝑎𝑖𝑗)𝑚,𝑛 or 𝐴 = [𝑎𝑖𝑗]𝑚,𝑛.
▶ Row Matrix: The matrix with one row (𝑚 = 1). Example: [1 20 0 5], [1 5].
▶ Column Matrix: The matrix with one column (𝑛 = 1). Example: the columns with entries (0, 2, 0, 5) and (2, 100).
▶ Zero Matrix: The matrix with all zero entries (𝑎𝑖𝑗 = 0 for all 𝑖, 𝑗). Example: [0 ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ 0].
▶ Square Matrix: The matrix with the same number of rows and columns (𝑚 = 𝑛). Example: [1 0; 5 2], with order 2.
▶ Principal diagonal: The diagonal from the top left corner of the square matrix to the bottom right corner. The elements on the principal diagonal are called the diagonal elements.
Example: [𝟏 0; 5 𝟐], [𝟏 3 5; 0 𝟎 5; 1 5 𝟏𝟎𝟎] (diagonal elements in bold).
Different types of matrices:
 Equal matrices: 𝐴 (= (𝑎𝑖𝑗)) and 𝐵 (= (𝑏𝑖𝑗)) are called equal if
i) the sizes (no. of rows and columns) of the two matrices A and B are equal, and
ii) 𝑎𝑖𝑗 = 𝑏𝑖𝑗 for each 𝑖 and 𝑗.
 Diagonal matrices: 𝑎𝑖𝑗 = 0 for 𝑖 ≠ 𝑗. Example: [𝟏 𝟎; 𝟎 𝟐], [𝟏 𝟎 𝟎; 𝟎 𝟎 𝟎; 𝟎 𝟎 𝟏𝟎𝟎].
▶ Identity matrix: 𝑎𝑖𝑗 = 1 for 𝑖 = 𝑗 and 𝑎𝑖𝑗 = 0 for 𝑖 ≠ 𝑗.
Example: [1 0 0; 0 1 0; 0 0 1], [1 0; 0 1].
 Triangular matrix: Upper (Lower) triangular matrix: 𝑎𝑖𝑗 = 0 for 𝑖 > 𝑗 (for 𝑖 < 𝑗).
Example: [1 50 1; 𝟎 1 0; 𝟎 𝟎 1] (upper triangular), [1 𝟎; 5 1] (lower triangular).
Operations on matrices:
▶ Matrix addition: Addition of two matrices is possible only when the orders of the two matrices are the same. Let 𝐶 = 𝐴 + 𝐵, where 𝐴 = (𝑎𝑖𝑗)𝑚,𝑛 and 𝐵 = (𝑏𝑖𝑗)𝑚,𝑛; then the (𝑖, 𝑗)-th element of 𝐶 is 𝑐𝑖𝑗 = 𝑎𝑖𝑗 + 𝑏𝑖𝑗 for all 𝑖, 𝑗.
Ex- 𝐴 = [1 5; 1 0], 𝐵 = [2 0; 0 8]; 𝐶 = 𝐴 + 𝐵 = [3 5; 1 8].
▶ Scalar multiplication with a matrix: Let 𝐴 = (𝑎𝑖𝑗)𝑚,𝑛 be a matrix and 𝐵 = 𝑐𝐴. Then 𝑏𝑖𝑗 = 𝑐𝑎𝑖𝑗 for all 𝑖, 𝑗.
Ex- 6𝐴 = [6 30; 6 0].
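The addition and scalar-multiplication rules above can be sketched in Python with plain lists of rows; the helper names `mat_add` and `scalar_mul` are my own, not from the slides:

```python
def mat_add(A, B):
    """c_ij = a_ij + b_ij; both matrices must have the same order."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "orders must match"
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(c, A):
    """b_ij = c * a_ij."""
    return [[c * a for a in row] for row in A]

# The slide's examples:
A = [[1, 5], [1, 0]]
B = [[2, 0], [0, 8]]
print(mat_add(A, B))      # [[3, 5], [1, 8]]
print(scalar_mul(6, A))   # [[6, 30], [6, 0]]
```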
Operations on matrices:
▶ Matrix multiplication: Matrix multiplication is possible only if the number of columns of the first matrix is equal to the number of rows of the second matrix. Let 𝐶 = 𝐴𝐵, where 𝐴 = (𝑎𝑖𝑗)𝑚,𝑛 and 𝐵 = (𝑏𝑖𝑗)𝑛,𝑝; then the (𝑖, 𝑗)-th element of 𝐶 is 𝑐𝑖𝑗 = Σ_{𝑘=1}^{𝑛} 𝑎𝑖𝑘 𝑏𝑘𝑗.
▶ Note: Here the order of C will be m × p.
▶ Example: let 𝐴 = [1 2; 2 0] and 𝐵 = [1 1; 0 2]; then 𝐴𝐵 = [1 5; 2 2].
 Remark 1: Two matrices A and B are commutative if 𝐴𝐵 = 𝐵𝐴. In general, matrix multiplication is not commutative.
 Remark 2: Matrix multiplication is associative: 𝑨(𝑩𝑪) = (𝑨𝑩)𝑪.
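A minimal sketch of the rule 𝑐𝑖𝑗 = Σ𝑘 𝑎𝑖𝑘𝑏𝑘𝑗, reusing the slide's 2×2 example; the helper name is my own:

```python
def mat_mul(A, B):
    """c_ij = sum_k a_ik * b_kj; needs cols(A) == rows(B)."""
    n = len(B)
    assert len(A[0]) == n, "cols of A must equal rows of B"
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[1, 2], [2, 0]]
B = [[1, 1], [0, 2]]
print(mat_mul(A, B))  # [[1, 5], [2, 2]]
print(mat_mul(B, A))  # [[3, 2], [4, 0]] -- so AB != BA (Remark 1)
```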
 Transpose of a matrix: Let 𝐴 = (𝑎𝑖𝑗)𝑚,𝑛 be a matrix; the transpose of A is denoted 𝐴𝑡 or 𝐴𝑇. If 𝐵 = 𝐴𝑡, then 𝑏𝑖𝑗 = 𝑎𝑗𝑖 for all 𝑖, 𝑗.
Ex- 𝐴 = [1 5; 1 6; 0 8], 𝐴𝑡 = [1 1 0; 5 6 8].
 Symmetric matrix: The matrix 𝐴 is said to be symmetric if 𝐴 = 𝐴𝑡. Therefore, if 𝐴 is symmetric then 𝑎𝑖𝑗 = 𝑎𝑗𝑖.
Example: [1 3; 3 1], [1 0; 0 1], [0 0; 0 0].
 Skew-symmetric matrix: The matrix 𝐴 is said to be skew-symmetric if 𝐴 = −𝐴𝑡. Therefore, if 𝐴 is skew-symmetric then 𝑎𝑖𝑗 = −𝑎𝑗𝑖.
Example: [0 −3; 3 0], [0 −20; 20 0].
 Every square matrix A can be expressed uniquely as the sum of a symmetric matrix (𝐴+𝐴𝑡)/2 and a skew-symmetric matrix (𝐴−𝐴𝑡)/2.
Problem: Express 𝑨 = [1 50 1; 20 1 8; 0 5 1] as a sum of a symmetric and a skew-symmetric matrix.
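The decomposition 𝐴 = (𝐴+𝐴𝑡)/2 + (𝐴−𝐴𝑡)/2 can be checked on the problem matrix; a sketch with exact fractions (helper names are mine):

```python
from fractions import Fraction

def transpose(A):
    return [list(row) for row in zip(*A)]

def sym_skew_parts(A):
    """Return S = (A + A^t)/2 and K = (A - A^t)/2."""
    At = transpose(A)
    half = Fraction(1, 2)
    n = len(A)
    S = [[half * (A[i][j] + At[i][j]) for j in range(n)] for i in range(n)]
    K = [[half * (A[i][j] - At[i][j]) for j in range(n)] for i in range(n)]
    return S, K

A = [[1, 50, 1], [20, 1, 8], [0, 5, 1]]
S, K = sym_skew_parts(A)
# S is symmetric, K is skew-symmetric, and S + K reproduces A.
```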
 All positive integral powers of a symmetric matrix A
are symmetric (Ex- 𝑨, 𝑨𝟐, 𝑨𝟑, …).
 All positive odd integral powers of a skew-symmetric matrix are skew-symmetric (Ex- 𝐀, 𝑨𝟑, 𝑨𝟓, …) and all positive even integral powers of a skew-symmetric matrix are symmetric (Ex- 𝑨𝟐, 𝑨𝟒, …).
 The matrix 𝑩𝒕𝑨𝑩 is symmetric or skew-symmetric
according as
A is symmetric or skew-symmetric.
 Orthogonal matrix: A square matrix A is said to be orthogonal if 𝐴𝑡 × 𝐴 = 𝐴 × 𝐴𝑡 = 𝐼 (unit matrix). Orthogonal matrices are non-singular.
Ex: [−1 0; 0 1].
 Idempotent matrix: A square matrix A is called idempotent if 𝐴² = 𝐴.
Ex: [1 0; 0 0], [2 −2 −4; −1 3 4; 1 −2 −3].
 Hermitian and Skew-Hermitian Matrices: A complex square matrix A is said to be Hermitian if A = (𝐴̄)𝑡 and Skew-Hermitian if A = −(𝐴̄)𝑡, where 𝐴̄ denotes the conjugate of A.
Ex- [3 3+𝑖; 3−𝑖 2] (Hermitian matrix), [5𝑖 3+𝑖; −3+𝑖 0] (Skew-Hermitian matrix).
 Unitary Matrix: A complex n × n matrix A is said to be unitary if (𝐴̄)𝑡 × 𝐴 = 𝐴 × (𝐴̄)𝑡 = 𝐼 (the identity matrix of order n).
Ex- (1/√2) [1 1; 𝑖 −𝑖].
Problems:
 Show that [3 7+4𝑖 −2−5𝑖; 7−4𝑖 −2 3+𝑖; −2+5𝑖 3−𝑖 4] is a Hermitian matrix.
 Show that the matrix [𝛼+𝑖𝛾 𝛽+𝑖𝛿; −𝛽+𝑖𝛿 𝛼−𝑖𝛾] is a unitary matrix if 𝛼² + 𝛽² + 𝛾² + 𝛿² = 1.
 Express A = [2𝑖 −2+𝑖 1−𝑖; 2+𝑖 −𝑖 3𝑖; −1−𝑖 3𝑖 0] as 𝑃 + 𝑖𝑄, where P is real and skew-symmetric and Q is real and symmetric.
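A quick numerical check of the first and third problems, assuming the standard definitions (a Hermitian matrix equals its conjugate transpose; P = Re A, Q = Im A) and the matrix entries as I read them from the slide; helper names are mine:

```python
def conj_transpose(M):
    """(i, j) entry of the result is the conjugate of the (j, i) entry of M."""
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

H = [[3, 7 + 4j, -2 - 5j],
     [7 - 4j, -2, 3 + 1j],
     [-2 + 5j, 3 - 1j, 4]]
print(H == conj_transpose(H))  # True -> Hermitian

A = [[2j, -2 + 1j, 1 - 1j],
     [2 + 1j, -1j, 3j],
     [-1 - 1j, 3j, 0]]
P = [[z.real for z in row] for row in A]   # should be real and skew-symmetric
Q = [[z.imag for z in row] for row in A]   # should be real and symmetric
```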
Determinant
Minors and Cofactors: The minor of an element of an 𝑛 × 𝑛 determinant is the (𝑛−1) × (𝑛−1) determinant obtained after suppressing the row and column containing the element.
Let Δ = |𝑎1 𝑏1 𝑐1; 𝑎2 𝑏2 𝑐2; 𝑎3 𝑏3 𝑐3|. Minor of 𝑎1 = |𝑏2 𝑐2; 𝑏3 𝑐3|; minor of 𝑎2 = |𝑏1 𝑐1; 𝑏3 𝑐3|.
Cofactor of an element = (−1)^(𝑖+𝑗) × (minor of that element), where 𝑖 & 𝑗 are the row and column where that element is placed.
Ex: Δ = |1 0 5; 4 7 5; 10 8 1|. Minor of 10 = |0 5; 7 5|. Cofactor of 10 = (−1)^(3+1) × |0 5; 7 5|.
Now, Δ = |𝑎1 𝑏1 𝑐1; 𝑎2 𝑏2 𝑐2; 𝑎3 𝑏3 𝑐3| = 𝑎1 |𝑏2 𝑐2; 𝑏3 𝑐3| + (−1)^(1+2) 𝑏1 |𝑎2 𝑐2; 𝑎3 𝑐3| + 𝑐1 |𝑎2 𝑏2; 𝑎3 𝑏3| = 𝑎1 × 𝐴1 + 𝑏1 × 𝐵1 + 𝑐1 × 𝐶1, where 𝐴1, 𝐵1 and 𝐶1 are the cofactors of 𝑎1, 𝑏1 and 𝑐1.
Note: If all the elements of a row (or a column) are multiplied by their own cofactors and then added, the result is the value of the determinant.
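The cofactor expansion along the first row can be written as a short recursive routine (helper names are mine), applied here to the example determinant from the previous slide:

```python
def minor(M, i, j):
    """Delete row i and column j (0-based indices)."""
    return [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Cofactor expansion along the first row, as on the slide."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

print(det([[1, 0, 5], [4, 7, 5], [10, 8, 1]]))  # -223
print(det([[3, 5], [9, 1]]))                    # -42
```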
Product of two determinants: The product of two determinants Δ1 and Δ2 of equal order is another determinant of the same order. The (𝑖, 𝑗)-th element of the product determinant Δ1 × Δ2 = sum of the products of the elements of the i-th row (or column) of Δ1 and the corresponding elements of the j-th row (or column) of Δ2.
Properties of Determinant:
 The value of a determinant remains unchanged if its rows are changed to the corresponding columns and vice versa.
|1 5 8; 0 0 6; 0 2 8| = |1 0 0; 5 0 2; 8 6 8|
 If any two rows (or any two columns) of a determinant are interchanged, the value of the new determinant becomes the negative of its original value.
|1 5 8; 0 0 6; 0 2 8| = −|1 5 8; 0 2 8; 0 0 6| = −|8 5 1; 6 0 0; 8 2 0|
 If the corresponding elements of any two rows (or any two columns) of a determinant are either identical or proportional, then the determinant is 0.
|1 5 8; 0 5 9; 1 5 8| = 0 = |1 5 8; 0 5 9; 3 15 24|
 If to any row (or column) of a determinant k (k ≠ 0) times any other row (or column) is added, then the value of the determinant is not changed, i.e., the value of the determinant remains unaltered.
|𝑎11 𝑎12 𝑎13; 𝑎21 𝑎22 𝑎23; 𝑎31 𝑎32 𝑎33| = |𝑎11 𝑎12 𝑎13; 𝑎21+𝑘𝑎31 𝑎22+𝑘𝑎32 𝑎23+𝑘𝑎33; 𝑎31 𝑎32 𝑎33|
 A determinant whose first column splits as a sum splits into a sum of determinants:
|𝑎11+𝑥1 𝑎12 𝑎13; 𝑎21+𝑥2 𝑎22 𝑎23; 𝑎31+𝑥3 𝑎32 𝑎33| = |𝑎11 𝑎12 𝑎13; 𝑎21 𝑎22 𝑎23; 𝑎31 𝑎32 𝑎33| + |𝑥1 𝑎12 𝑎13; 𝑥2 𝑎22 𝑎23; 𝑥3 𝑎32 𝑎33|.
 A determinant with all zero entries vanishes: |0 ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ 0| = 0.
 If a determinant vanishes after putting x = a in all the
elements of any row (or column), then (x – a) is a
factor of the determinant.
 Algebraic complement of a minor:
The algebraic complement of 𝑀 = |𝑎11 𝑎12; 𝑎21 𝑎22| is (−1)^(1+2+1+2) |𝑎33 𝑎34; 𝑎43 𝑎44|, where the elements of 𝑀 are taken from the first and second rows and the first and second columns.
Laplace's method of expansion of a determinant: In a fourth order determinant, if any two rows and two columns are selected, then the determinant can be expressed as the sum of the products of all minors of order 2 from those two selected rows and columns and their respective algebraic complements.
Singular matrices: det 𝑨 = 𝟎; Non-singular matrices: det 𝑨 ≠ 𝟎.
Adjoint of a matrix: If 𝑨 = [𝑎11 𝑎12 𝑎13; 𝑎21 𝑎22 𝑎23; 𝑎31 𝑎32 𝑎33], then 𝐴𝑑𝑗 𝐴 = [𝐴11 𝐴21 𝐴31; 𝐴12 𝐴22 𝐴32; 𝐴13 𝐴23 𝐴33] = the transpose of the cofactor matrix [𝐴11 𝐴12 𝐴13; 𝐴21 𝐴22 𝐴23; 𝐴31 𝐴32 𝐴33], where 𝐴𝑖𝑗 = cofactor of 𝑎𝑖𝑗 in det A.
Ex- Adjoint of [1 1 3; 1 3 −3; −2 −4 −4] is [−24 −8 −12; 10 2 6; 2 2 2].
Inverse of a matrix: If A is a non-singular matrix of order n (i.e., det A ≠ 0) and B is another matrix of the same order as A such that AB = BA = Iₙ, then B is called the inverse of A and is denoted by 𝐴⁻¹. 𝐴⁻¹ = 𝑨𝒅𝒋 𝑨 / 𝐝𝐞𝐭 𝑨.
Ex- For the matrix above, det 𝑨 = −8, so [1 1 3; 1 3 −3; −2 −4 −4]⁻¹ = −(1/8) [−24 −8 −12; 10 2 6; 2 2 2].
 Elementary row operations on matrices:
1. Interchange of the ith and jth rows of A – 𝑅𝑖𝑗.
2. Multiplication of the ith row of A by a non-zero real k – 𝑘𝑅𝑖.
3. Replacement of the ith row by the ith row plus a nonzero constant c times the jth row – 𝑅𝑖′ = 𝑅𝑖 + 𝑐𝑅𝑗.
Example: [1 2; 1 5; 1 0] → (𝑅13) [1 0; 1 5; 1 2] → (3𝑅2) [1 0; 3 15; 1 2] → (𝑅1′ = 𝑅1 + 3𝑅3) [4 6; 3 15; 1 2].
Note: The matrix B is said to be row-equivalent to A if B can be obtained from A by a finite number of elementary row operations.
Gauss-Jordan method to find 𝑨⁻¹: If (𝑨𝒏,𝒏 ⋮ 𝑰𝒏) is reduced by elementary row operations to (𝑰𝒏 ⋮ 𝑩𝒏,𝒏), then 𝑩 = 𝑨⁻¹.
Problem: Find 𝑨⁻¹ for 𝑨 = [3 5; 9 1] using the Gauss-Jordan method.
Solution:
[3 5 ⋮ 1 0; 9 1 ⋮ 0 1]
→ ((1/3)𝑅1) [1 5/3 ⋮ 1/3 0; 9 1 ⋮ 0 1]
→ (𝑅2 − 9𝑅1) [1 5/3 ⋮ 1/3 0; 0 −14 ⋮ −3 1]
→ (−(1/14)𝑅2) [1 5/3 ⋮ 1/3 0; 0 1 ⋮ 3/14 −1/14]
→ (𝑅1 − (5/3)𝑅2) [1 0 ⋮ −1/42 5/42; 0 1 ⋮ 3/14 −1/14]
So 𝑨⁻¹ = [−1/42 5/42; 3/14 −1/14].
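The Gauss-Jordan steps above generalize to the following sketch, which row-reduces [A | I] with exact arithmetic (the function name is mine):

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Row-reduce [A | I] to [I | A^{-1}] with exact fractions."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)  # pick a pivot row
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]                 # scale the pivot to 1
        for r in range(n):
            if r != col and M[r][col] != 0:              # clear the rest of the column
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

Ainv = gauss_jordan_inverse([[3, 5], [9, 1]])
# [[-1/42, 5/42], [3/14, -1/14]], matching the slide.
```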
Properties of matrices:
 The inverse of a matrix, if it exists, is unique.
 For a non-singular matrix 𝐴, (𝐴⁻¹)⁻¹ = 𝐴.
 For two non-singular matrices 𝐴 & 𝐵, (𝐴𝐵)⁻¹ = 𝐵⁻¹𝐴⁻¹.
 For an orthogonal matrix 𝐴, det 𝐴 = ±1 and hence it is non-singular.
 For a non-singular matrix 𝐴, 𝐴·𝐴′ is a symmetric matrix.
 For a non-singular matrix 𝐴, 𝐴𝐵 = 𝐴𝐶 ⇒ 𝐵 = 𝐶.
 Divisors of zero exist in matrix algebra, i.e., 𝐴𝐵 = 𝑂 does not always imply either 𝐴 = 𝑂 or 𝐵 = 𝑂.
 Linear systems of equations: We consider the problem of finding n unknown variables 𝑥1, …, 𝑥𝑛 from the following system of m equations:
𝐴11𝑥1 + ⋯ + 𝐴1𝑛𝑥𝑛 = 𝑦1, …, 𝐴𝑚1𝑥1 + ⋯ + 𝐴𝑚𝑛𝑥𝑛 = 𝑦𝑚,
where 𝑦1, 𝑦2, …, 𝑦𝑚 and 𝐴𝑖𝑗, 1 ≤ i ≤ m, 1 ≤ j ≤ n, are given elements. The above system is known as m linear equations in n unknowns.
 Row Operations: The system of equations can be written in the following form: 𝐴𝑋 = 𝑌, where 𝐴 = [𝐴11 ⋯ 𝐴1𝑛; ⋮ ⋱ ⋮; 𝐴𝑚1 ⋯ 𝐴𝑚𝑛], 𝑋 = [𝑥1; ⋮; 𝑥𝑛] and 𝑌 = [𝑦1; ⋮; 𝑦𝑚]. The above matrix A is called the coefficient matrix.
 Example: 3𝑥 + 2𝑦 + 𝑧 = 1, 𝑥 + 𝑦 = 5 can be represented by 𝐴𝑋 = 𝑌, where 𝐴 = [3 2 1; 1 1 0], 𝑋 = [𝑥; 𝑦; 𝑧] and 𝑌 = [1; 5].
Cramer's Rule
This rule can be applied only when the coefficient matrix is a square matrix and non-singular. It is explained by considering the following system of equations:
𝑎11𝑥1 + 𝑎12𝑥2 + 𝑎13𝑥3 = 𝑏1,
𝑎21𝑥1 + 𝑎22𝑥2 + 𝑎23𝑥3 = 𝑏2,
𝑎31𝑥1 + 𝑎32𝑥2 + 𝑎33𝑥3 = 𝑏3,
i.e., 𝐴𝑋 = 𝐵, where the coefficient matrix A = [𝑎11 𝑎12 𝑎13; 𝑎21 𝑎22 𝑎23; 𝑎31 𝑎32 𝑎33] is nonsingular.
Then Δ = |𝑎11 𝑎12 𝑎13; 𝑎21 𝑎22 𝑎23; 𝑎31 𝑎32 𝑎33| ≠ 0. Let Δ1 = |𝑏1 𝑎12 𝑎13; 𝑏2 𝑎22 𝑎23; 𝑏3 𝑎32 𝑎33|, Δ2 = |𝑎11 𝑏1 𝑎13; 𝑎21 𝑏2 𝑎23; 𝑎31 𝑏3 𝑎33|, Δ3 = |𝑎11 𝑎12 𝑏1; 𝑎21 𝑎22 𝑏2; 𝑎31 𝑎32 𝑏3|. Then 𝑥1 = Δ1/Δ, 𝑥2 = Δ2/Δ, 𝑥3 = Δ3/Δ.
Problem: Solve the system x + y = 2, x − y + z = 4, x + y − z = 6 by Cramer's rule.
Solution: The system is AX = B, where 𝐴 = [1 1 0; 1 −1 1; 1 1 −1], X = [𝑥; 𝑦; 𝑧], 𝐵 = [2; 4; 6].
Δ = det 𝐴 = |1 1 0; 1 −1 1; 1 1 −1| = 2, Δ1 = |2 1 0; 4 −1 1; 6 1 −1| = 10, Δ2 = |1 2 0; 1 4 1; 1 6 −1| = −6, Δ3 = |1 1 2; 1 −1 4; 1 1 6| = −8.
So 𝑥 = Δ1/Δ = 5, 𝑦 = Δ2/Δ = −3, 𝑧 = Δ3/Δ = −4.
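Cramer's rule for a 3×3 system, checked against the worked problem (helper names are mine):

```python
from fractions import Fraction

def _det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, B):
    """x_i = Delta_i / Delta, where Delta_i replaces column i of A with B."""
    delta = _det3(A)
    assert delta != 0, "Cramer's rule needs a non-singular coefficient matrix"
    xs = []
    for i in range(3):
        Ai = [row[:i] + [b] + row[i+1:] for row, b in zip(A, B)]
        xs.append(Fraction(_det3(Ai), delta))
    return xs

A = [[1, 1, 0], [1, -1, 1], [1, 1, -1]]
B = [2, 4, 6]
print(cramer3(A, B))  # x = 5, y = -3, z = -4
```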
Rank of a matrix: The positive integer r is said to be the rank of a matrix A if there exists at least one minor of A of order r that does not vanish and every minor of order (r + 1), (r + 2), … is 0. Equivalently, the matrix A has r independent rows/columns. The rank of a matrix A is denoted r(A).
Properties: If A is a
i) non-singular (singular) matrix (including the identity matrix) of order n, then r(A) = n (< n);
ii) null matrix of any order, then r(A) = 0;
iii) the rank of a matrix does not change under elementary transformations, i.e., row/column transformations.
 Determination of the rank of a matrix: The rank of a matrix can be determined
i) using the definition of rank,
ii) by elementary row transformations,
iii) by reducing to a triangular matrix.
 Echelon form: A matrix is in echelon form when it satisfies the following conditions.
The first non-zero element in each row, called the leading entry, is 1.
Each leading entry is in a column to the right of the leading entry in the previous row.
Rows with all zero elements, if any, are below rows having a non-zero element.
Example: [0 1 0 3 5 2; 0 0 0 1 0 5; 0 0 0 0 0 0] is in echelon form, but [0 1 0 3 5 2; 0 0 0 5 0 5; 0 0 0 0 0 0], [0 1 0 3 5 2; 1 0 0 1 0 5; 0 0 0 0 0 0], [0 1 0 3 5 2; 0 0 0 0 0 0; 0 0 0 0 0 1] are not in echelon form.
 Rank of an echelon matrix = no. of nonzero rows.
Ex- The rank of [0 1 0 3 5 2; 0 0 0 1 0 5; 0 0 0 0 0 0] is 2.
Problem: Find the rank of 𝐴 = [5 0 8; 0 5 6; 5 5 14] (by reducing it to echelon form).
Solution:
𝐴 = [5 0 8; 0 5 6; 5 5 14]
→ ((1/5)𝑅1) [1 0 8/5; 0 5 6; 5 5 14]
→ (𝑅3 − 5𝑅1) [1 0 8/5; 0 5 6; 0 5 6]
→ ((1/5)𝑅2) [1 0 8/5; 0 1 6/5; 0 5 6]
→ (𝑅3 − 5𝑅2) [1 0 8/5; 0 1 6/5; 0 0 0]
Rank A = 2 (no. of nonzero rows in the echelon form).
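Rank by row reduction can be sketched as follows (exact fractions; the function name is mine), reproducing the result for the matrix above:

```python
from fractions import Fraction

def rank(A):
    """Reduce to echelon form and count the nonzero rows."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0  # index of the next pivot row
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][col] for x in M[r]]        # leading entry becomes 1
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                M[i] = [x - M[i][col] * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

print(rank([[5, 0, 8], [0, 5, 6], [5, 5, 14]]))  # 2
```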
Solving a system of linear equations using rank:
𝑨𝑿 = 𝑩, where 𝑨 = [𝑎11 𝑎12 𝑎13; 𝑎21 𝑎22 𝑎23; 𝑎31 𝑎32 𝑎33] = coefficient matrix, 𝑿 = [𝑥1; 𝑥2; 𝑥3], 𝑩 = [𝑏1; 𝑏2; 𝑏3], and
𝑨̄ = [𝑎11 𝑎12 𝑎13 𝑏1; 𝑎21 𝑎22 𝑎23 𝑏2; 𝑎31 𝑎32 𝑎33 𝑏3] = augmented matrix.
Condition:
𝑟(𝐴) ≠ 𝑟(𝑨̄): no solution.
𝑟(𝐴) = 𝑟(𝑨̄) = no. of columns in 𝐴: unique solution.
𝑟(𝐴) = 𝑟(𝑨̄) < no. of columns in 𝐴: infinitely many solutions.
Ex- 2x + 3y + 5z = 9, 7x + 3y − 2z = 8, 2x + 3y + 𝜆z = 𝜇.
𝑨̄ = [2 3 5 9; 7 3 −2 8; 2 3 𝜆 𝜇] → (elementary row operations) [1 0 −7/5 −1/5; 0 1 13/5 47/15; 0 0 𝜆−5 𝜇−9]
Case 1: If 𝜆 = 5, 𝜇 ≠ 9: 𝑟(𝐴) = 2 ≠ 𝑟(𝑨̄) = 3 (no solution).
Case 2: If 𝜆 ≠ 5: 𝑟(𝐴) = 3 = 𝑟(𝑨̄) = no. of columns in A (unique solution).
Case 3: If 𝜆 = 5, 𝜇 = 9: 𝑟(𝐴) = 𝑟(𝑨̄) = 2 < 3 (infinitely many solutions).
Solution for Case 2 (𝜆 ≠ 5, unique solution): the reduced matrix is equivalent to the system x − (7/5)z = −1/5, y + (13/5)z = 47/15, (𝜆 − 5)z = 𝜇 − 9, so
z = (𝜇 − 9)/(𝜆 − 5), y = 47/15 − (13/5)·(𝜇 − 9)/(𝜆 − 5), x = −1/5 + (7/5)·(𝜇 − 9)/(𝜆 − 5).
Solution for Case 3 (𝜆 = 5, 𝜇 = 9, infinitely many solutions): the system reduces to x − (7/5)z = −1/5, y + (13/5)z = 47/15. Let z = a, an arbitrary constant. Then y = 47/15 − (13/5)a and x = −1/5 + (7/5)a.
Problem: Test the consistency and solve, if possible:
i) 2x − 3y + 7z = 5, 3x + y − 3z = 13, 2x + 19y − 47z = 32
ii) x + 2y + z = 3, 2x + 3y + 2z = 5, 3x − 5y + 5z = 2, 3x + 9y − z = 4
iii) 2x + 6y + 11 = 0, 6x + 20y − 6z + 3 = 0, 6y − 18z + 1 = 0
iv) 3x + 3y + 2z = 1, x + 2y = 4, 10y + 3z = −2, 2x − 3y − z = 5
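The rank conditions can be turned into a small consistency checker; a sketch (helper names are mine) tested on the 𝜆, 𝜇 example above:

```python
from fractions import Fraction

def rank(A):
    """Rank via exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def classify(A, B):
    """Compare r(A) with r(A|B) and the number of unknowns."""
    aug = [row + [b] for row, b in zip(A, B)]
    rA, rAug, n = rank(A), rank(aug), len(A[0])
    if rA != rAug:
        return "no solution"
    return "unique solution" if rA == n else "infinite solutions"

def system(lam):
    return [[2, 3, 5], [7, 3, -2], [2, 3, lam]]

print(classify(system(5), [9, 8, 10]))  # lambda = 5, mu != 9 -> "no solution"
print(classify(system(6), [9, 8, 9]))   # lambda != 5       -> "unique solution"
print(classify(system(5), [9, 8, 9]))   # lambda = 5, mu = 9 -> "infinite solutions"
```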
Eigen Value of a Matrix
 Characteristic equation: Let A be an n×n matrix over a field F. Then det(𝐴 − 𝜆𝐼𝑛) is said to be the characteristic polynomial of A and is denoted by ᴪ𝐴(𝜆). The equation ᴪ𝐴(𝜆) = 0 is said to be the characteristic equation of A.
 Eigen value of a matrix: A root of the characteristic equation of a square matrix A is said to be an eigen value (or a characteristic value) of A.
Ex- 𝐴 = [2 1 1; 1 2 1; 1 1 2]. ᴪ𝐴(𝜆) = 0 gives |2−𝜆 1 1; 1 2−𝜆 1; 1 1 2−𝜆| = 0, i.e., 𝜆³ − 6𝜆² + 9𝜆 − 4 = 0, with roots 1, 1, 4: the eigen values of A.
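The eigen values and the characteristic polynomial coefficients of the example can be checked with NumPy (assuming it is available):

```python
import numpy as np

A = np.array([[2, 1, 1], [1, 2, 1], [1, 1, 2]])
vals = np.linalg.eigvalsh(A)  # ascending eigenvalues of a symmetric matrix
print(vals)                   # approximately [1, 1, 4]
coeffs = np.poly(A)           # characteristic polynomial coefficients
print(coeffs)                 # approximately [1, -6, 9, -4], i.e. lambda^3 - 6 lambda^2 + 9 lambda - 4
```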
Let 𝐴 = (𝑎𝑖𝑗). Then ᴪ𝐴(𝜆) = |𝑎11−𝜆 ⋯ 𝑎1𝑛; ⋮ ⋱ ⋮; 𝑎𝑛1 ⋯ 𝑎𝑛𝑛−𝜆| = 𝑐0𝜆^𝑛 + 𝑐1𝜆^(𝑛−1) + ⋯ + 𝑐𝑛, where 𝑐0 = (−1)^𝑛 and 𝑐𝑟 = (−1)^(𝑛−𝑟) × [sum of the principal minors of A of order r].
Ex- 𝑐1 = (−1)^(𝑛−1)(𝑎11 + 𝑎22 + ⋯ + 𝑎𝑛𝑛) = (−1)^(𝑛−1) trace(𝑨), and 𝑐𝑛 = det 𝑨.
 The degree of the characteristic equation is the same as the order of the matrix A.
 Cayley-Hamilton theorem: Every square matrix satisfies its own characteristic equation.
 If 𝐴 = [2 1 1; 1 2 1; 1 1 2], the characteristic equation is 𝜆³ − 6𝜆² + 9𝜆 − 4 = 0. By the Cayley-Hamilton theorem, 𝐴³ − 6𝐴² + 9𝐴 − 4𝐼 = 𝑂.
 A root of ᴪ𝐴(𝑥) = 0 of multiplicity r is said to have algebraic multiplicity r as an eigen value.
 Ex- A.M. of 1 is 2, A.M. of 4 is 1.
 Eigen Vector: Let A be an n×n matrix over a field F. A non-null vector X is said to be an eigen vector or a characteristic vector of A if there exists a scalar λ belonging to F such that 𝐴𝑋 = 𝜆𝑋 holds.
Theorem: If x is an eigen value of a non-singular matrix A, then 𝑥⁻¹ is an eigen value of 𝐴⁻¹.
Theorem: If A and P are both n×n matrices and P is non-singular, then A and 𝑃⁻¹𝐴𝑃 have the same eigen values.
 Ex- For 𝐴 = [2 1 1; 1 2 1; 1 1 2], let 𝑋1 = [𝑥1; 𝑦1; 𝑧1] be an eigen vector corresponding to 𝜆 = 1. Then 𝐴𝑋1 = 1·𝑋1, i.e., [2 1 1; 1 2 1; 1 1 2][𝑥1; 𝑦1; 𝑧1] = [𝑥1; 𝑦1; 𝑧1], which gives 𝑥1 + 𝑦1 + 𝑧1 = 0.
If 𝑦1 = 𝑎 and 𝑧1 = 𝑏, then 𝑥1 = −𝑎 − 𝑏.
The eigen vectors corresponding to 𝜆 = 1 are (−𝑎−𝑏, 𝑎, 𝑏) = 𝑎(−1, 1, 0) + 𝑏(−1, 0, 1), where a, b are constants, not both zero.
Geometric multiplicity: The no. of independent eigen vectors corresponding to an eigen value is its G.M.
Ex- The G.M. of 𝜆 = 1 is 2; the independent eigen vectors corresponding to 𝜆 = 1 are (−1, 1, 0) and (−1, 0, 1).
Similarly, the eigen vectors corresponding to 𝜆 = 4 are 𝑘(1, 1, 1), where k is a nonzero constant.
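A NumPy check (assuming NumPy is available) of the Cayley-Hamilton identity and of the eigen vectors derived above:

```python
import numpy as np

A = np.array([[2, 1, 1], [1, 2, 1], [1, 1, 2]])

# Cayley-Hamilton: A^3 - 6A^2 + 9A - 4I should be the zero matrix.
CH = (np.linalg.matrix_power(A, 3) - 6 * np.linalg.matrix_power(A, 2)
      + 9 * A - 4 * np.eye(3))
print(np.allclose(CH, 0))  # True

# A X = lambda X for the eigen vectors found above.
for lam, x in [(1, [-1, 1, 0]), (1, [-1, 0, 1]), (4, [1, 1, 1])]:
    print(np.allclose(A @ np.array(x), lam * np.array(x)))  # True each time
```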
Theorem: Let A be an n×n matrix over a field F. To an eigen vector of A there corresponds a unique eigen value of A.
Theorem: Let A be an n×n matrix over a field F and let λ be an eigen value belonging to F. To each such eigen value of A there corresponds at least one eigen vector.
 Remarks: 1 ≤ geometric multiplicity ≤ algebraic multiplicity.
 An eigen value λ is said to be regular if the geometric multiplicity of λ is equal to its algebraic multiplicity.
 Ex- [2 1 1; 1 2 1; 1 1 2] is regular, as A.M. = G.M. = 2 for 𝜆 = 1 and A.M. = G.M. = 1 for 𝜆 = 4.
 If 𝐴𝑛,𝑛 has n distinct eigen values, then A is regular and so is diagonalizable.
[1 0 0; 0 6 0; 0 0 8] - eigen values 1, 6, 8. [1 5 60; 0 6 5; 0 0 80] - eigen values 1, 6, 80.
 Theorem: Eigen values of a real symmetric matrix are all real.
 Theorem: Eigen values of a real skew-symmetric matrix are purely imaginary or zero.
 Theorem: Eigen values of a real orthogonal matrix have unit modulus.
 Theorem: Eigen values of a diagonal or triangular matrix are its diagonal entries.
 Similar Matrices: An n×n matrix A is said to be similar to an n×n matrix B if there exists a non-singular n×n matrix P such that 𝑩 = 𝑷⁻¹𝑨𝑷. If A is similar to B then B is similar to A, and the two matrices A and B are said to be similar. (Two similar matrices have the same eigenvalues.)
 Diagonalization: Let us consider the set of all n×n matrices over a field F. An n×n matrix A is said to be diagonalizable if A is similar to an n×n diagonal matrix. If A is similar to a diagonal matrix 𝑫 = 𝒅𝒊𝒂𝒈(𝝀𝟏, 𝝀𝟐, …, 𝝀𝒏), then 𝜆1, 𝜆2, …, 𝜆𝑛 are the eigen values of A.
 Theorem: Let A be an n×n matrix over a field F with eigen values 𝑑1, 𝑑2, …, 𝑑𝑛 ∈ 𝐹, the 𝑑𝑖 not necessarily all distinct. Let D = diag(𝑑1, 𝑑2, …, 𝑑𝑛) and let P be the n×n matrix whose columns are eigen vectors of A corresponding to 𝑑1, 𝑑2, …, 𝑑𝑛; then 𝐴𝑃 = 𝑃𝐷.
 Note: if the column vectors of P be linearly independent
then P becomes non-singular and in that case P-1AP = D,
i.e., A is diagonalizable. Consequently, we obtain a
necessary and sufficient condition for diagonalisability of an
nxn matrix:
 Theorem: An nxn matrix A over a field F is diagonalizable if
and only if there exist n eigen vectors of A which are
linearly independent.
 Theorem: Let A be an n×n matrix over a field F. If the eigen values of A are all distinct and belong to F, then A is diagonalizable.
 Theorem: An nxn matrix A is diagonalizable if and only if all
its eigen values are regular.
▶ Since [2 1 1; 1 2 1; 1 1 2] is regular, it is diagonalizable.
▶ 𝐷 = [𝜆1 0 0; 0 𝜆2 0; 0 0 𝜆3] = [1 0 0; 0 1 0; 0 0 4].
▶ 𝑃 = [−1 −1 1; 1 0 1; 0 1 1] [1st column = eigen vector corresponding to 𝜆1, 2nd column = eigen vector corresponding to 𝜆2, 3rd column = eigen vector corresponding to 𝜆3].
References: B.S. Grewal, STUDY MATERIAL, MATHEMATICS-I (BSCM103), BSH, UEM Kolkata
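The diagonalization can be verified numerically (assuming NumPy is available): with P as above, 𝑃⁻¹𝐴𝑃 should equal D:

```python
import numpy as np

A = np.array([[2, 1, 1], [1, 2, 1], [1, 1, 2]])
P = np.array([[-1, -1, 1],
              [ 1,  0, 1],
              [ 0,  1, 1]])     # columns: eigen vectors for 1, 1, 4
D = np.linalg.inv(P) @ A @ P
print(np.round(D).astype(int))  # diag(1, 1, 4)
```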
Solved problem (a)
Solved problem (b)
Solved problem (c)
Thank you
Contact: chitrita.dasgupta@uem.edu.in
Whatsapp: 8902524242 (9am-5pm)

Mathematics I - BSCM103 -Module 4_copy.pptx

  • 1.
    Matrix and Determinants BY DR.CHITRITA DASGUPTA 1
  • 2.
    Tentative syllabus - ▶ Module1: Calculus (Differentiation): Rolle’s Theorem, Mean Value Theorems, Taylor’s and Maclaurin’s Theorems with Remainders; Taylor's Series, Series for Exponential, Trigonometric and Logarithm Functions; Indeterminate forms and L' Hospital's Rule; Maxima and Minima; Evolutes and Involutes. ▶ Module 2: Calculus (Integration): Evaluation of Definite and Improper Integrals; Beta and Gamma Functions and their properties; Applications of Definite Integrals to evaluate surface areas and volumes of revolutions. 2
  • 3.
    Tentative syllabus ▶ Module 3:Multivariable Calculus (Differentiation): Limit, Continuity and Partial Derivatives; Homogeneous Functions, Euler’s Theorem of first and second order (Statement only); Change of variables, Composite function, Derivative of implicit functions, Total Derivative; Jacobian; Maxima, Minima and Saddle points; Method of Lagrange multipliers; Gradient, Directional Derivatives, Tangent Plane and Normal Line, Curl and Divergence. 3
  • 4.
    Tentative syllabus ▶ Module 4:Matrices and Determinants: Matrices, Addition and Scalar Multiplication, Matrix Multiplication; Symmetric and Skew -symmetric Matrices; Hermitian and Skew - Hermitian Matrices; Determinants, Cramer’s Rule; Inverse of a Matrix; Orthogonal Matrices; Gauss -Jordan Method to find the inverse of a matrix; Linear Systems of Equations, Rank of a Matrix. Eigenvalues and Eigenvectors; Eigenvalues of some special matrices; Cayley -Hamilton Theorem; Similarity Matrix, Diagonalization of matrices. 4
  • 5.
    Tentative syllabus ▶ Module 5: Sequences andSeries: Basic ideas on Sequence; Concept of Monotonic and Bounded sequence; Convergence and Divergence of Sequence; Algebra of Sequences (Statement only). Basic idea of an Infinite Series; Notion of Convergence and Divergence; Series of Positive T erms - Convergence of infinite G.P. series and p-series (Statement only); T ests of Convergence [Statement only] – Comparison Test, Integral Test, D’Alembert’s Ratio Test, Raabe’s Test and Cauchy’s Root test. Alternating Series - Leibnitz’s test [Statement only], Absolute and Conditional Convergence. 5
  • 6.
    Module-4: Matrices and Determinants ▶Definition: Matrix is a representation of a rectangular array of 𝑚𝑛 elements 𝑎𝑖 𝑗 (𝜖 𝑅) into 𝑚 rows and 𝑛 columns. ▶ Representation of 𝑚 × 𝑛 matrix: 𝑎𝑖 𝑗 = 𝑚,𝑛 Or, 𝑎𝑖 𝑗 = 𝑚,𝑛 6
  • 7.
    ▶ Row Matrix:The matrix with one row (𝑚 = 1) Example: 1 20 0 5 , 1 5 . ▶ Column Matrix: The matrix with one column (𝑛 = 1) Example : 0 2 0 5 , 2 100 . ▶ Zero Matrix: The matrix with all zero entries (𝑎𝑖 𝑗 = 0, for all 𝑖, 𝑗). Example : 0 ⋯ 0 ⋮ ⋱ ⋮ 0 ⋯ 0 . 7
  • 8.
    Example: 1 0 5 2 with order 2. ▶ Principlediagonal: The diagonal from the top left corner of the square matrix to the down right corner. The elements in the principal diagonal is called diagonal elements. Example: 𝟏 0 , 5 𝟐 𝟏 3 5 0 𝟎 5 1 5 𝟏𝟎𝟎 8 ▶ Square Matrix: The matrix with same number of rows and columns (𝑚 = 𝑛)
  • 9.
    Different types of matrix: Equal matrices: 𝐴 (= (𝑎𝑖𝑗)) and 𝐵 (= (𝑏𝑖𝑗)) are called equal if i) The size (No. of rows and columns) of the two matrices A and B are equal, ii) 𝑎𝑖 𝑗 = 𝑏𝑖 𝑗 for each 𝑖 and 𝑗.  Diagonal matrices: 𝑎𝑖𝑗 = 0 for 𝑖 ≠ 𝑗. Example : 𝟏 𝟎 , 𝟎 𝟐 𝟏 𝟎 𝟎 𝟎 𝟎 𝟎 𝟎 𝟎 𝟏𝟎𝟎 9
  • 10.
    ▶ Identity matrix:𝑎𝑖 𝑗 𝑎𝑖𝑗 = 0 for 𝑖 ≠ 𝑗. = 1 for 𝑖 = 𝑗. Example : 1 0 0 0 1 0 0 0 1 , 1 0 . 0 1  Triangular matrix: Upper (Lower) triangular matrix: 𝑎𝑖 𝑗 = 0 for 𝑖 > (<) 𝑗. Example : 1 50 1 𝟎 1 0 𝟎 𝟎 , 1 5 𝟎 0 1 0
  • 11.
    Operation on matrices: ▶ Matrixaddition: Addition between two matrices is possible when the order of the two matrices are same. Let 𝐶 = 𝐴 + 𝐵, where 𝐴 = and 𝐵 = 𝑎𝑖𝑗 𝑏𝑖𝑗 𝑚,𝑛 𝑚,𝑛 , then 𝑖𝑡ℎ, 𝑗𝑡ℎ elements of 𝐶, 𝑐𝑖 𝑗 = 𝑎𝑖 𝑗 + 𝑏𝑖 𝑗 for all 𝑖, 𝑗. Ex- 𝐴 = 1 5 1 0 , 𝐵 = 2 0 0 8 ; 𝐶 = 𝐴 + 𝐵 = 3 5 1 8 . 𝑎𝑖 𝑗 𝑚, 𝑛 be a matrix, ▶ Scalar multiplication with matrix: Let 𝐴 = and 𝐵 = 𝑐𝐴. Then 𝑏𝑖 𝑗 =𝑐𝑎𝑖 𝑗 , for 𝑖, 𝑗. Ex- 6𝐴 = 6 30 6 0 . 1 1
  • 12.
    Operation on matrices: ▶ Matrixmultiplication: Matrix multiplication is possible only if the number of column of the first matrix is equal to the number of 𝑎𝑖 𝑗 𝑚, 𝑛 row of the second matrix. Let 𝐶 = 𝐴𝐵 where 𝐴 = and 𝑏𝑖 𝑗 𝑛 , 𝑝 𝐵 = then 𝑖𝑗𝑡ℎ element of 𝐶 is equal to 𝑐𝑖𝑗 = 𝑘= 1 σ 𝑘 =𝑛 σ 𝑎𝑖 𝑘 𝑏𝑘 𝑗 ,. ▶ Note: Here order of C will be m x p. ▶ Example: let 𝐴 = 1 2 2 0 and 𝐵 = 1 1 , then 𝐴𝐵 = 0 2 1 5 . 2 2  Remark 1: Two matrices A and B are commutative if 𝐴𝐵 = 𝐵𝐴. In general matrix multiplication is not commutative.  Remark 2: Matrix multiplication is associative; 𝑨. 𝑩. 𝑪 = 𝑨. 𝑩 . 𝑪 1 2
  • 13.
    𝑎𝑖 𝑗 𝑚, 𝑛 be a matrix, then or𝐴𝑇. If 𝐵 = 𝐴𝑡, then 𝑏𝑖𝑗 =  Transpose of a matrix: Let 𝐴 = transpose of A is denoted as 𝐴𝑡 𝑎𝑖 𝑗 for all 𝑖, 𝑗. Ex - 𝐴 = 1 5 1 6 0 8 � � , 𝐴 = 1 1 0 5 6 8 .  Symmetric matrix: The matrix 𝐴 is said to be symmetric matrix if 𝐴 = 𝐴𝑡 . Therefore, if 𝐴 is symmetric then 𝑎𝑖 𝑗 = 𝑎𝑗𝑖. Example : 1 3 , 1 0 , 0 0 . 3 1 0 1 0 0 1 3
  • 14.
    Example : −3 0 −20 0 03 , 0 20 .  Every square matrix A can be expressed uniquely as a sum 2 2 𝑡 𝑡 of a Symmetric (𝐴+𝐴 ) and a Skew-symmetric matrix (𝐴−𝐴 ). Problem: Express 𝑨 = 𝟏 𝟓𝟎 𝟏 𝟐𝟎 𝟏 𝟖 as a sum of symmetric 𝟎 𝟓 𝟏 14  Skew symmetric matrix: The matrix 𝐴 is said to be skew symmetric matrix if 𝐴 = −𝐴𝑡. Therefore, if 𝐴 is skew symmetric then 𝑎𝑖 𝑗 = −𝑎𝑗 𝑖 .
  • 15.
    15  All positiveintegral powers of a symmetric matrix A are symmetric (Ex- 𝑨, 𝑨𝟐, 𝑨𝟑, …).  All positive odd integral powers of a skew-symmetric matrix are skew-symmetric (Ex- 𝐀, 𝑨𝟑, 𝑨𝟓, … ) and all positive even integral powers of a skew-symmetric matrix is symmetric (Ex- 𝑨𝟐, 𝑨𝟒, …).  The matrix 𝑩𝒕𝑨𝑩 is symmetric or skew-symmetric according as A is symmetric or skew-symmetric.
  • 16.
    𝐴 𝑡 × 𝐴 =𝐴 × 𝐴𝑡 =𝐼 (Unit matrix). Orthogonal matrices are non-singular. Ex : −1 0 . 0 1  Idempotent matrix: A square matrix A is called Idempotent if A2 = 𝐴. Ex : 1 0 , 0 0 2 −2 −4 −1 3 4 1 −2 −3 . 16  Orthogonal matrix: A square matrix A is said to be Orthogonal if
  • 17.
    Ex - 1 √ 2 1 1 𝑖 −𝑖 .  Hermitianand Skew Hermitian Matrices: A complex square matrix A is said to be Hermitian if A = 𝐴𝑡and Skew Hermitian if A = −𝐴𝑡 where 𝐴ҧ denotes the conjugate of A. Ex - - Hermitian matrix 3 3 + 𝑖 3 − 𝑖 2 5𝑖 3 + 𝑖 - Skew Hermitian matrix 17  Unitary Matrix: A complex n × n matrix A is said to be Unitary if 𝐴𝑡 × 𝐴 = 𝐴 × 𝐴𝑡 =𝐼 (the identity matrix of order n).
  • 18.
     Show that 3 7 +4𝑖 −2 − 5𝑖 7 − 4𝑖 −2 + 5𝑖 −2 3 + 𝑖 3 − 𝑖 4 is a Hermitian matrix. 𝛼 + 𝑖𝛾 𝛽 + 𝑖𝛿 −𝛽 + 𝑖𝛿 𝛼 − 𝑖𝛾 is a unitary matrix, if  Show that the matrix 𝛼2 + 𝛽2 + 𝛾2 + 𝛿2 = 1.  Express A= 2𝑖 −2 + 𝑖 2 + 𝑖 1 − 𝑖 −𝑖 3𝑖 as 𝑃 + 𝑖𝑄, where P is real and −1 − 𝑖 3𝑖 0 skew-symmetric and Q is real and symmetric matrix. Problems: 1 8
  • 19.
    Determinan t Minors and Cofactors:The Minor of an element of a 𝑛 × 𝑛 determinant is the (𝑛 − 1) × (𝑛 − 1) determinant obtained after suppressing the row and column containing the element. Let, Δ = 𝑎1 𝑏1 𝑐1 𝑎2 𝑏2 𝑎 𝑏 � � 3 3 3 𝑐2 . Minor of 𝑎1 = 2 . Minor of a2 = 𝑏 𝑐 𝑏 𝑐 𝑏3 𝑐3 𝑏3 𝑐3 2 1 1 . −1 𝑖+𝑗 Cofactor of an element = × (Minor of that element), where 𝑖&𝑗 are the row and columnwhere that element is placed. Ex: Δ = 1 0 5 4 7 5 . Minor of 10 = 10 8 1 5 7 4 5 . Cofactor of 10= −1 3+1 × 5 4 7 1 9
  • 20.
    Determinan t Now, Δ = 33 𝑎1 𝑏1 𝑐1 𝑎2 𝑏2 𝑐2 𝑎 𝑏 𝑐 3 = 𝑎1 2 𝑏 𝑐 2 𝑏3 𝑐3 + −1 1+2b1 3 𝑎2 𝑐2 𝑎 𝑐 3 + c1 𝑎2 𝑏2 𝑎3 𝑏3 =𝑎1 × 𝐴1 + 𝑏1 × 𝐵1 + 𝑐1 × 𝐶1, Where 𝐴1, 𝐵1 and 𝐶1 are the cofactor of 𝑎1, b1 and 𝑐1. Note: If all the elements of a row (or a column) be multiplied by their own cofactors and then added, the result is the value of the determinant. 2 0
  • 21.
    Product of twodeterminants: The product of two determinants Δ1 and Δ2 of equal order is another determinant of the same order. The (𝑖, 𝑗) −th element of the product determinant Δ1 ×Δ2 = sum of the products of the elements of the i-th row (or column) of Δ1 and the corresponding elements of the j-th row (or column) of Δ2. 2 1
  • 22.
    Properties of Determinant:  Thevalue of a determinant remains unchanged if its rows are changed to the corresponding columns and vice- versa. 1 5 8 1 0 0 0 0 6 = 5 0 2 0 2 8 8 6 8  If any two rows (or any two columns) of a determinant are interchanged, the value of the new determinant become negative of its original value. 1 5 8 1 5 8 8 5 1 0 0 6 = − 0 2 8 = − 6 0 0 0 2 8 0 0 6 8 2 0 2 2
  • 23.
    1 5 81 5 8 0 5 9 = 0 = 0 5 9 1 5 8 3 15 24 𝑎11 𝑎21 𝑎12 𝑎22 𝑎13 𝑎23 = 𝑎11 𝑎21 + 𝑘𝑎31 𝑎11 𝑎22 + 𝑘𝑎32 𝑎1 3 𝑎23 + 𝑎31 𝑎32 𝑎33 𝑎31 𝑎32 𝑎3 23  If the corresponding elements of any two rows (or any two columns) of a det. are either identical or proportional, then the determinant is 0.  If to any row (or column) of a determinant k (k ≠ 0) times of any other row (or column) be added then the value of the determinant is not changed i.e. the value of the determinant remains unaltered. 𝑘𝑎33
  • 24.
     𝑎11 + 𝑥1 𝑎21 + 𝑥2 𝑎31+ 𝑥3 𝑎12 𝑎2 2 𝑎3 2 𝑎13 𝑎2 3 𝑎3 3 = 𝑎11 𝑎2 1 𝑎3 1 𝑎12 𝑎2 2 𝑎3 2 𝑎13 𝑎2 3 𝑎3 3 𝑥 1 𝑎1 3 + 𝑥2 𝑥 3 𝑎12 𝑎2 2 𝑎3 2 𝑎3 3 𝑎23 .  0 ⋯ 0 ⋮ ⋱ ⋮ 0 ⋯ 0 = 0 24  If a determinant vanishes after putting x = a in all the elements of any row (or column), then (x – a) is a factor of the determinant.
  • 25.
     Algebraic complementof minor: 2 The algebraic complement of 𝑀 = 𝑎11 𝑎2 1 𝑎12 𝑎2 2 i s −1 1+2+1+2 𝑎3 3 � � 3 4 𝑎43 𝑎44 2 , where the elements of 𝑀 are taken from first and second rows and first and second columns. Laplace’s method of expansion of determinant: In a fourth order determinant if any two rows and two columns are selected, then can be expressed as the sum of the products of all minors of order2 from those two selected rows and columns and their respective algebraic complements. 2 5
  • 26.
    Singular matrices: det𝑨 = 𝟎; Non-singular matrices: det 𝑨 ≠ 𝟎. Adjoint of a matrix: I f 𝑨 = 𝑎12 𝑎2 2 𝑎3 2 , Then 𝐴𝑑𝑗 𝐴 = 𝐴1 1 𝐴2 1 𝐴3 1 𝐴1 2 𝐴2 2 𝐴3 2 𝑎11 𝑎2 1 𝑎3 1 𝐴1 3 𝐴2 3 𝐴3 3 � � = 𝑎13 𝑎2 3 𝑎3 3 𝐴11 𝐴12 𝐴13 𝐴2 1 𝐴2 2 𝐴2 3 𝐴3 1 𝐴3 2 𝐴3 3 , where 𝐴𝑖 𝑗 = Cofactor of 𝑎𝑖 𝑗 in det A. Ex- Adjoint of 1 1 3 1 3 −3 −2 −4 −4 i s − 8 −1 2 3 −3 1 −3 −4 −4 − −2 −4 2 6 2 2 2 � � = −24 −8 −12 10 2 6 2 6
  • 27.
    If A bea non-singular matrix of order n (i. e., detA ≠ 0) and B be another matrix of the same order as A such that AB = BA = In then B is called the inverse of A and is denoted by 𝐴−1. 𝐴−1 = 𝑨𝒅𝒋 𝑨 . 𝐝𝐞𝐭 𝑨 1 1 3 − 1 � � −𝟐𝟒 −𝟖 −𝟏𝟐 Ex- 1 3 −3 = − 𝟏 𝟏𝟎 𝟐 𝟔 −2 −4 −4 𝟐 𝟐 𝟐 Inverse of a matrix: 2 7
  • 28.
     Elementary rowoperations on matrices: 1. Interchange of ith and jth rows of A – 𝑅𝑖𝑗, 2.Multiplication of ith row of A by a non-zero real k – 𝑘𝑅𝑖. 3. Replace ith row by adding ith row with scalar multiple of 𝑖 jth row by a nonzero constant c – 𝑅′ = 𝑅𝑖 + 𝑐𝑅𝑗 . Example : 1 2 1 5 1 0 1 0 → 1 5 (𝑅13) 1 2 → 1 0 3 15 1 2 (3𝑅2 ) → 4 6 3 15 1 (𝑅′ = 𝑅1 + 3 × 𝑅3) Note: The matrix B is said to be row-equivalent matrix to A if B can be obtained from A by finite number of elementary row operations. 2 8
  • 29.
 Gauss–Jordan method to find $A^{-1}$: if $(A_{n,n} \,\vdots\, I_n)$ reduces by elementary row operations to $(I_n \,\vdots\, B_{n,n})$, then $B = A^{-1}$.
Problem: Find $A^{-1}$ for $A = \begin{pmatrix} 3 & 5\\ 9 & 1 \end{pmatrix}$ using the Gauss–Jordan method.
Solution:
$$\left(\begin{array}{cc|cc} 3 & 5 & 1 & 0\\ 9 & 1 & 0 & 1 \end{array}\right) \xrightarrow{\frac{1}{3}R_1} \left(\begin{array}{cc|cc} 1 & 5/3 & 1/3 & 0\\ 9 & 1 & 0 & 1 \end{array}\right) \xrightarrow{R_2 - 9R_1} \left(\begin{array}{cc|cc} 1 & 5/3 & 1/3 & 0\\ 0 & -14 & -3 & 1 \end{array}\right)$$
$$\xrightarrow{-\frac{1}{14}R_2} \left(\begin{array}{cc|cc} 1 & 5/3 & 1/3 & 0\\ 0 & 1 & 3/14 & -1/14 \end{array}\right) \xrightarrow{R_1 - \frac{5}{3}R_2} \left(\begin{array}{cc|cc} 1 & 0 & -1/42 & 5/42\\ 0 & 1 & 3/14 & -1/14 \end{array}\right)$$
Hence $A^{-1} = \begin{pmatrix} -1/42 & 5/42\\ 3/14 & -1/14 \end{pmatrix}$.
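The Gauss–Jordan procedure can be written as a short routine. The sketch below (an illustrative implementation with names of my own choosing) row-reduces the augmented matrix [A | I] with exact fractions, using exactly the three elementary row operations listed earlier:

```python
from fractions import Fraction

def gauss_jordan_inverse(a):
    """Row-reduce the augmented matrix [A | I] to [I | A^-1]."""
    n = len(a)
    aug = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # choose a row with a non-zero pivot and swap it into place (R_ij)
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        # scale so the pivot becomes 1 (kR_i)
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # clear the rest of the column (R_i' = R_i + cR_j)
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

inv = gauss_jordan_inverse([[3, 5], [9, 1]])
# inv == [[-1/42, 5/42], [3/14, -1/14]], matching the worked example
```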
  • 30.
 Properties of matrices:
 The inverse of a matrix, if it exists, is unique.
 For a non-singular matrix A, $(A^{-1})^{-1} = A$.
 For two non-singular matrices A and B, $(AB)^{-1} = B^{-1}A^{-1}$.
 For an orthogonal matrix A, det A = ±1, and hence A is non-singular.
 For any matrix A, $AA'$ is a symmetric matrix.
 For a non-singular matrix A, $AB = AC \Rightarrow B = C$.
 Divisors of zero exist in matrix algebra, i.e., $AB = O$ does not always imply $A = O$ or $B = O$.
  • 31.
 Linear systems of equations: We consider the problem of finding n unknowns $x_1, x_2, \ldots, x_n$ from the following system of m equations:
$$A_{11}x_1 + A_{12}x_2 + \cdots + A_{1n}x_n = y_1$$
$$\vdots$$
$$A_{m1}x_1 + A_{m2}x_2 + \cdots + A_{mn}x_n = y_m$$
where $y_1, y_2, \ldots, y_m$ and $A_{ij}$, 1 ≤ i ≤ m, 1 ≤ j ≤ n, are given elements. The above system is known as a system of m linear equations in n unknowns.
  • 32.
 Matrix form: The system of equations can be written in the following form: $AX = Y$, where
$$A = \begin{pmatrix} A_{11} & \cdots & A_{1n}\\ \vdots & \ddots & \vdots\\ A_{m1} & \cdots & A_{mn} \end{pmatrix},\quad X = \begin{pmatrix} x_1\\ \vdots\\ x_n \end{pmatrix},\quad Y = \begin{pmatrix} y_1\\ \vdots\\ y_m \end{pmatrix}.$$
The matrix A is called the coefficient matrix.
 Example: the system $3x + 2y + z = 1$, $x + y = 5$ can be represented by $AX = Y$, where $A = \begin{pmatrix} 3 & 2 & 1\\ 1 & 1 & 0 \end{pmatrix}$, $X = \begin{pmatrix} x\\ y\\ z \end{pmatrix}$ and $Y = \begin{pmatrix} 1\\ 5 \end{pmatrix}$.
  • 33.
 Cramer's Rule: This rule can be applied only when the coefficient matrix is a square matrix and non-singular. It is explained for the following system of equations, $AX = B$, where the coefficient matrix $A = \begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{pmatrix}$ is non-singular, so that
$$\Delta = \begin{vmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{vmatrix} \neq 0.$$
Let
$$\Delta_1 = \begin{vmatrix} b_1 & a_{12} & a_{13}\\ b_2 & a_{22} & a_{23}\\ b_3 & a_{32} & a_{33} \end{vmatrix},\quad \Delta_2 = \begin{vmatrix} a_{11} & b_1 & a_{13}\\ a_{21} & b_2 & a_{23}\\ a_{31} & b_3 & a_{33} \end{vmatrix},$$
  • 34.
and
$$\Delta_3 = \begin{vmatrix} a_{11} & a_{12} & b_1\\ a_{21} & a_{22} & b_2\\ a_{31} & a_{32} & b_3 \end{vmatrix}.$$
Then $x_1 = \dfrac{\Delta_1}{\Delta}$, $x_2 = \dfrac{\Delta_2}{\Delta}$, $x_3 = \dfrac{\Delta_3}{\Delta}$.
Problem: Solve the system x + y = 2, x − y + z = 4, x + y − z = 6 by Cramer's rule.
Solution: The system is AX = B, where $A = \begin{pmatrix} 1 & 1 & 0\\ 1 & -1 & 1\\ 1 & 1 & -1 \end{pmatrix}$, $X = \begin{pmatrix} x\\ y\\ z \end{pmatrix}$, $B = \begin{pmatrix} 2\\ 4\\ 6 \end{pmatrix}$.
$$\Delta = \det A = \begin{vmatrix} 1 & 1 & 0\\ 1 & -1 & 1\\ 1 & 1 & -1 \end{vmatrix} = 2,\quad \Delta_1 = \begin{vmatrix} 2 & 1 & 0\\ 4 & -1 & 1\\ 6 & 1 & -1 \end{vmatrix} = 10,$$
$$\Delta_2 = \begin{vmatrix} 1 & 2 & 0\\ 1 & 4 & 1\\ 1 & 6 & -1 \end{vmatrix} = -6,\quad \Delta_3 = \begin{vmatrix} 1 & 1 & 2\\ 1 & -1 & 4\\ 1 & 1 & 6 \end{vmatrix} = -8.$$
So $x = \dfrac{\Delta_1}{\Delta} = 5$, $y = \dfrac{\Delta_2}{\Delta} = -3$, $z = \dfrac{\Delta_3}{\Delta} = -4$.
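Cramer's rule for a 3×3 system is mechanical enough to express directly in code. The following Python sketch (function names are illustrative) builds each $\Delta_j$ by replacing the jth column of A with B:

```python
from fractions import Fraction

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def cramer3(a, b):
    delta = det3(a)
    if delta == 0:
        raise ValueError("Cramer's rule needs a non-singular coefficient matrix")
    sols = []
    for j in range(3):
        # Delta_j: replace the j-th column of A by the right-hand side B
        aj = [[b[i] if c == j else a[i][c] for c in range(3)] for i in range(3)]
        sols.append(Fraction(det3(aj), delta))
    return sols

# the worked example: x + y = 2, x - y + z = 4, x + y - z = 6
x, y, z = cramer3([[1, 1, 0], [1, -1, 1], [1, 1, -1]], [2, 4, 6])
# x == 5, y == -3, z == -4
```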
  • 35.
 Rank of a matrix: The positive integer r is said to be the rank of a matrix A if there exists at least one minor of A of order r that does not vanish, and every minor of order (r + 1), (r + 2), … is 0. Equivalently, A has r independent rows/columns. The rank of A is denoted r(A).
Properties:
i) If A is a non-singular matrix of order n (including the identity matrix), then r(A) = n; if A is singular of order n, then r(A) < n.
ii) If A is a null matrix of any order, then r(A) = 0.
iii) The rank of a matrix does not change under elementary transformations, i.e., row/column transformations.
  • 36.
 Determination of the rank of a matrix: The rank of a matrix can be determined
i) using the definition of rank,
ii) by elementary row transformations,
iii) by reducing to a triangular matrix.
 Echelon form: A matrix is in echelon form when it satisfies the following conditions:
1. The first non-zero element in each row, called the leading entry, is 1.
2. Each leading entry is in a column to the right of the leading entry in the previous row.
3. Rows with all zero elements, if any, are below rows having a non-zero element.
  • 37.
Example: $\begin{pmatrix} 0 & 1 & 0 & 3 & 5 & 2\\ 0 & 0 & 0 & 1 & 0 & 5\\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$ is in echelon form, but
$$\begin{pmatrix} 0 & 1 & 0 & 3 & 5 & 2\\ 0 & 0 & 0 & 5 & 0 & 5\\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & 1 & 0 & 3 & 5 & 2\\ 1 & 0 & 0 & 1 & 0 & 5\\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & 1 & 0 & 3 & 5 & 2\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
are not in echelon form.
 Rank of an echelon matrix = number of non-zero rows.
Ex: the rank of $\begin{pmatrix} 0 & 1 & 0 & 3 & 5 & 2\\ 0 & 0 & 0 & 1 & 0 & 5\\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$ is 2.
  • 38.
Problem: Find the rank of $A = \begin{pmatrix} 5 & 0 & 8\\ 0 & 5 & 6\\ 5 & 5 & 14 \end{pmatrix}$ by reducing it to echelon form.
Solution:
$$A \xrightarrow{\frac{1}{5}R_1} \begin{pmatrix} 1 & 0 & 8/5\\ 0 & 5 & 6\\ 5 & 5 & 14 \end{pmatrix} \xrightarrow{R_3 - 5R_1} \begin{pmatrix} 1 & 0 & 8/5\\ 0 & 5 & 6\\ 0 & 5 & 6 \end{pmatrix} \xrightarrow{\frac{1}{5}R_2} \begin{pmatrix} 1 & 0 & 8/5\\ 0 & 1 & 6/5\\ 0 & 5 & 6 \end{pmatrix} \xrightarrow{R_3 - 5R_2} \begin{pmatrix} 1 & 0 & 8/5\\ 0 & 1 & 6/5\\ 0 & 0 & 0 \end{pmatrix}$$
Rank A = 2 (the number of non-zero rows in the echelon form).
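The echelon-form reduction above can be mechanized: count the pivots found while row-reducing. A minimal Python sketch with exact fractions (the `rank` helper is my own illustrative name):

```python
from fractions import Fraction

def rank(mat):
    """Rank = number of non-zero rows after reduction to echelon form."""
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]  # leading entry becomes 1
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
        if r == rows:
            break
    return r

# the worked example reduces to two non-zero rows
print(rank([[5, 0, 8], [0, 5, 6], [5, 5, 14]]))
```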
  • 39.
Solving a system of linear equations using rank: $AX = B$, where
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \text{coefficient matrix},\quad X = \begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix},\quad B = \begin{pmatrix} b_1\\ b_2\\ b_3 \end{pmatrix},$$
$$\bar{A} = \begin{pmatrix} a_{11} & a_{12} & a_{13} & b_1\\ a_{21} & a_{22} & a_{23} & b_2\\ a_{31} & a_{32} & a_{33} & b_3 \end{pmatrix} = \text{augmented matrix}.$$
  • 40.
Conditions:
 $r(A) \neq r(\bar{A})$: no solution.
 $r(A) = r(\bar{A}) =$ number of columns of A: unique solution.
 $r(A) = r(\bar{A}) <$ number of columns of A: infinitely many solutions.
Ex: 2x + 3y + 5z = 9, 7x + 3y − 2z = 8, 2x + 3y + λz = μ.
$$\bar{A} = \begin{pmatrix} 2 & 3 & 5 & 9\\ 7 & 3 & -2 & 8\\ 2 & 3 & \lambda & \mu \end{pmatrix} \xrightarrow{\text{elementary row operations}} \begin{pmatrix} 1 & 0 & -7/5 & -1/5\\ 0 & 1 & 13/5 & 47/15\\ 0 & 0 & \lambda - 5 & \mu - 9 \end{pmatrix}$$
Case 1: If λ = 5, μ ≠ 9, then r(A) = 2 ≠ r(Ā) = 3 (no solution).
Case 2: If λ ≠ 5, then r(A) = 3 = r(Ā) = number of columns of A (unique solution).
Case 3: If λ = 5, μ = 9, then r(A) = r(Ā) = 2 < 3 (infinitely many solutions).
  • 41.
Solution for Case 2 (λ ≠ 5; r(A) = 3 = r(Ā) = number of columns of A, unique solution): the reduced matrix
$$\begin{pmatrix} 1 & 0 & -7/5 & -1/5\\ 0 & 1 & 13/5 & 47/15\\ 0 & 0 & \lambda - 5 & \mu - 9 \end{pmatrix}$$
is equivalent to the system $x - \frac{7}{5}z = -\frac{1}{5}$, $y + \frac{13}{5}z = \frac{47}{15}$, $(\lambda - 5)z = \mu - 9$, so
$$z = \frac{\mu - 9}{\lambda - 5},\quad y = \frac{47}{15} - \frac{13}{5}\cdot\frac{\mu - 9}{\lambda - 5},\quad x = -\frac{1}{5} + \frac{7}{5}\cdot\frac{\mu - 9}{\lambda - 5}.$$
Solution for Case 3 (λ = 5, μ = 9; r(A) = r(Ā) = 2 < 3, infinitely many solutions): the last row vanishes and the system reduces to $x - \frac{7}{5}z = -\frac{1}{5}$, $y + \frac{13}{5}z = \frac{47}{15}$. Let z = a, an arbitrary constant. Then $y = \frac{47}{15} - \frac{13}{5}a$ and $x = -\frac{1}{5} + \frac{7}{5}a$.
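The rank test for consistency can be checked programmatically: compute r(A) and r(Ā) and compare against the number of unknowns. A small Python sketch for the λ, μ example (the `rank` and `classify` helpers are illustrative names of my own):

```python
from fractions import Fraction

def rank(mat):
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols, r = len(m), len(m[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

def classify(lam, mu):
    # 2x + 3y + 5z = 9, 7x + 3y - 2z = 8, 2x + 3y + lam*z = mu
    A = [[2, 3, 5], [7, 3, -2], [2, 3, lam]]
    Abar = [row + [b] for row, b in zip(A, [9, 8, mu])]
    ra, rab = rank(A), rank(Abar)
    if ra != rab:
        return "no solution"
    return "unique" if ra == 3 else "infinite"

# classify(2, 7) -> "unique"; classify(5, 10) -> "no solution"; classify(5, 9) -> "infinite"
```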
  • 42.
Problem: Test the consistency and solve, if possible:
i) 2x − 3y + 7z = 5, 3x + y − 3z = 13, 2x + 19y − 47z = 32
ii) x + 2y + z = 3, 2x + 3y + 2z = 5, 3x − 5y + 5z = 2, 3x + 9y − z = 4
iii) 2x + 6y + 11 = 0, 6x + 20y − 6z + 3 = 0, 6y − 18z + 1 = 0
iv) 3x + 3y + 2z = 1, x + 2y = 4, 10y + 3z = −2, 2x − 3y − z = 5
  • 43.
Eigen Values of a Matrix
 Characteristic equation: Let A be an n×n matrix over a field F. Then $\det(A - \lambda I_n)$ is said to be the characteristic polynomial of A and is denoted by $\psi_A(\lambda)$. The equation $\psi_A(\lambda) = 0$ is said to be the characteristic equation of A.
 Eigen value of a matrix: A root of the characteristic equation of a square matrix A is said to be an eigen value (or a characteristic value) of A.
Ex: for $A = \begin{pmatrix} 2 & 1 & 1\\ 1 & 2 & 1\\ 1 & 1 & 2 \end{pmatrix}$, $\psi_A(\lambda) = 0$ gives
$$\begin{vmatrix} 2-\lambda & 1 & 1\\ 1 & 2-\lambda & 1\\ 1 & 1 & 2-\lambda \end{vmatrix} = 0 \;\Rightarrow\; \lambda^3 - 6\lambda^2 + 9\lambda - 4 = 0,$$
whose roots 1, 1, 4 are the eigen values of A.
  • 44.
Let $A = (a_{ij})$. Then
$$\psi_A(\lambda) = \begin{vmatrix} a_{11}-\lambda & \cdots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \cdots & a_{nn}-\lambda \end{vmatrix} = c_0\lambda^n + c_1\lambda^{n-1} + \cdots + c_n,$$
where $c_0 = (-1)^n$ and $c_r = (-1)^{n-r} \times$ [sum of the principal minors of A of order r].
Ex: $c_1 = (-1)^{n-1}(a_{11} + a_{22} + \cdots + a_{nn}) = (-1)^{n-1}\,\mathrm{trace}\,A$, and $c_n = \det A$.
 The degree of the characteristic equation is the same as the order of the matrix.
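The principal-minor formula for the coefficients can be verified on the running example. The sketch below (illustrative helper names) computes $c_0, \ldots, c_n$ and confirms that the eigen values 1 and 4 are roots of the resulting characteristic equation:

```python
from itertools import combinations

def det(m):
    # recursive cofactor expansion along the first row
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(n))

def char_poly_coeffs(a):
    """Coefficients c_0..c_n of psi_A(l) = c_0 l^n + ... + c_n, where
    c_0 = (-1)^n and c_r = (-1)^(n-r) * (sum of principal minors of order r)."""
    n = len(a)
    coeffs = [(-1)**n]
    for r in range(1, n + 1):
        s = sum(det([[a[i][j] for j in idx] for i in idx])
                for idx in combinations(range(n), r))
        coeffs.append((-1)**(n - r) * s)
    return coeffs

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
c = char_poly_coeffs(A)   # [-1, 6, -9, 4], i.e. -(l^3 - 6l^2 + 9l - 4)

# the eigen values 1 and 4 are roots of the characteristic equation
for lam in (1, 4):
    assert sum(ci * lam**(3 - k) for k, ci in enumerate(c)) == 0
```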
  • 45.
 Cayley–Hamilton theorem: Every square matrix satisfies its own characteristic equation.
 Ex: if $A = \begin{pmatrix} 2 & 1 & 1\\ 1 & 2 & 1\\ 1 & 1 & 2 \end{pmatrix}$, the characteristic equation is $\lambda^3 - 6\lambda^2 + 9\lambda - 4 = 0$, so by the Cayley–Hamilton theorem, $A^3 - 6A^2 + 9A - 4I = O$.
 If an eigen value is a root of $\psi_A(x) = 0$ of multiplicity r, then r is said to be the algebraic multiplicity (A.M.) of that eigen value.
 Ex: above, the A.M. of 1 is 2 and the A.M. of 4 is 1.
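The Cayley–Hamilton identity for this example is easy to check directly by substituting A into its own characteristic polynomial (a minimal sketch; `matmul` is an illustrative helper name):

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
A2 = matmul(A, A)
A3 = matmul(A2, A)
I = [[int(i == j) for j in range(3)] for i in range(3)]

# A^3 - 6A^2 + 9A - 4I should be the zero matrix
ch = [[A3[i][j] - 6*A2[i][j] + 9*A[i][j] - 4*I[i][j] for j in range(3)]
      for i in range(3)]
```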
  • 46.
 Eigen vector: Let A be an n×n matrix over a field F. A non-null vector X is said to be an eigen vector (or a characteristic vector) of A if there exists a scalar λ belonging to F such that $AX = \lambda X$ holds.
Theorem: If x is an eigen value of a non-singular matrix A, then $x^{-1}$ is an eigen value of $A^{-1}$.
Theorem: If A and P are both n×n matrices and P is non-singular, then A and $P^{-1}AP$ have the same eigen values.
 Ex: for $A = \begin{pmatrix} 2 & 1 & 1\\ 1 & 2 & 1\\ 1 & 1 & 2 \end{pmatrix}$, let $X_1 = \begin{pmatrix} x_1\\ y_1\\ z_1 \end{pmatrix}$ be an eigen vector corresponding to λ = 1. Then $AX_1 = 1\cdot X_1$.
  • 47.
$$\begin{pmatrix} 2 & 1 & 1\\ 1 & 2 & 1\\ 1 & 1 & 2 \end{pmatrix}\begin{pmatrix} x_1\\ y_1\\ z_1 \end{pmatrix} = \begin{pmatrix} x_1\\ y_1\\ z_1 \end{pmatrix} \;\Rightarrow\; x_1 + y_1 + z_1 = 0.$$
If $y_1 = a$ and $z_1 = b$, then $x_1 = -a - b$, so the eigen vectors corresponding to λ = 1 are $(-a-b,\, a,\, b) = a(-1, 1, 0) + b(-1, 0, 1)$, where a, b are constants, not both zero.
 Geometric multiplicity: The number of independent eigen vectors corresponding to an eigen value is its geometric multiplicity (G.M.).
Ex: the G.M. of λ = 1 is 2; the independent eigen vectors corresponding to λ = 1 are $(-1, 1, 0)$ and $(-1, 0, 1)$. Similarly, the eigen vectors corresponding to λ = 4 are $k(1, 1, 1)$, where k is a non-zero constant.
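Each claimed eigen vector can be checked against the defining relation $AX = \lambda X$ (a minimal sketch; `matvec` is an illustrative helper name):

```python
def matvec(a, v):
    return [sum(a[i][j] * v[j] for j in range(len(v))) for i in range(len(a))]

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]

# two independent eigen vectors for lambda = 1, and one for lambda = 4
checks = [([-1, 1, 0], 1), ([-1, 0, 1], 1), ([1, 1, 1], 4)]
for v, lam in checks:
    assert matvec(A, v) == [lam * x for x in v]
```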
  • 48.
Theorem: Let A be an n×n matrix over a field F. To an eigen vector of A there corresponds a unique eigen value of A.
Theorem: Let A be an n×n matrix over a field F and let λ be an eigen value belonging to F. To each such eigen value of A there corresponds at least one eigen vector.
 Remark: 1 ≤ geometric multiplicity ≤ algebraic multiplicity.
 An eigen value λ is said to be regular if the geometric multiplicity of λ is equal to its algebraic multiplicity.
 Ex: $\begin{pmatrix} 2 & 1 & 1\\ 1 & 2 & 1\\ 1 & 1 & 2 \end{pmatrix}$ is regular, as A.M. = G.M. = 2 for λ = 1 and A.M. = G.M. = 1 for λ = 4.
 If an n×n matrix A has n distinct eigen values, then A is regular, and hence diagonalizable.
  • 49.
 Theorem: The eigen values of a real symmetric matrix are all real.
 Theorem: The eigen values of a real skew-symmetric matrix are purely imaginary or zero.
 Theorem: Every eigen value of a real orthogonal matrix has unit modulus.
 Theorem: The eigen values of a diagonal or triangular matrix are its diagonal entries.
Ex: $\begin{pmatrix} 1 & 0 & 0\\ 0 & 6 & 0\\ 0 & 0 & 8 \end{pmatrix}$ has eigen values 1, 6, 8; $\begin{pmatrix} 1 & 5 & 6\\ 0 & 6 & 5\\ 0 & 0 & 8 \end{pmatrix}$ has eigen values 1, 6, 8.
  • 50.
 Similar matrices: An n×n matrix A is said to be similar to an n×n matrix B if there exists a non-singular n×n matrix P such that $B = P^{-1}AP$. If A is similar to B, then B is similar to A, and the two matrices A and B are said to be similar. (Two similar matrices have the same eigen values.)
 Diagonalization: Consider the set of all n×n matrices over a field F. An n×n matrix A is said to be diagonalizable if A is similar to an n×n diagonal matrix. If A is similar to a diagonal matrix $D = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$, then $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigen values of A.
 Theorem: Let A be an n×n matrix over a field F with eigen values $d_1, d_2, \ldots, d_n \in F$, the $d_i$ not necessarily all distinct. Let $D = \mathrm{diag}(d_1, d_2, \ldots, d_n)$ and let P be the matrix whose ith column is an eigen vector of A corresponding to $d_i$. Then $AP = PD$.
  • 51.
 Note: if the column vectors of P are linearly independent, then P is non-singular, and in that case $P^{-1}AP = D$, i.e., A is diagonalizable. Consequently, we obtain a necessary and sufficient condition for diagonalizability of an n×n matrix:
 Theorem: An n×n matrix A over a field F is diagonalizable if and only if there exist n eigen vectors of A which are linearly independent.
 Theorem: Let A be an n×n matrix over a field F. If the eigen values of A are all distinct and belong to F, then A is diagonalizable.
 Theorem: An n×n matrix A is diagonalizable if and only if all its eigen values are regular.
  • 52.
▶ Since $A = \begin{pmatrix} 2 & 1 & 1\\ 1 & 2 & 1\\ 1 & 1 & 2 \end{pmatrix}$ is regular, it is diagonalizable.
▶ $D = \begin{pmatrix} \lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 4 \end{pmatrix}$.
▶ $P = \begin{pmatrix} -1 & -1 & 1\\ 1 & 0 & 1\\ 0 & 1 & 1 \end{pmatrix}$ [1st column = eigen vector corresponding to $\lambda_1$, 2nd column = eigen vector corresponding to $\lambda_2$, 3rd column = eigen vector corresponding to $\lambda_3$].
References: B.S. Grewal; Study Material, Mathematics-I (BSCM103), BSH, UEM Kolkata
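The diagonalization can be verified without inverting P: since P is non-singular, $P^{-1}AP = D$ is equivalent to $AP = PD$, which only needs matrix multiplication (a minimal sketch; `matmul` is an illustrative helper name):

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
P = [[-1, -1, 1],   # columns: eigen vectors for eigen values 1, 1, 4
     [ 1,  0, 1],
     [ 0,  1, 1]]
D = [[1, 0, 0], [0, 1, 0], [0, 0, 4]]

# AP = PD, which (with P invertible) is equivalent to P^-1 A P = D
assert matmul(A, P) == matmul(P, D)
```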