AN INTRODUCTION TO LINEAR ALGEBRA

BY

L. MIRSKY
LECTURER IN MATHEMATICS IN THE UNIVERSITY OF SHEFFIELD

OXFORD
AT THE CLARENDON PRESS
1955
Oxford University Press, Amen House, London E.C.4
GLASGOW NEW YORK TORONTO MELBOURNE WELLINGTON
BOMBAY CALCUTTA MADRAS KARACHI CAPE TOWN IBADAN
Geoffrey Cumberlege, Publisher to the University

PRINTED IN GREAT BRITAIN
PREFACE


My object in writing this book has been to provide an elementary
and easily readable account of linear algebra. The book is intended
mainly for students pursuing an honours course in mathematics,
but I hope that the exposition is sufficiently simple to make it
equally useful to readers whose principal interests lie in the fields
of physics or technology. The material dealt with here is not
extensive and, broadly speaking, only those topics are discussed
which normally form part of the honours mathematics syllabus in
British universities. Within this compass I have attempted to
present a systematic and rigorous development of the subject.
The account is self-contained, and the reader is not assumed to
have any previous knowledge of linear algebra, although some
slight acquaintance with the elementary theory of determinants
will be found helpful.
   It is not easy to estimate what level of abstractness best suits
a textbook of linear algebra. Since I have aimed, above all, at
simplicity of presentation I have decided on a thoroughly concrete
treatment, at any rate in the initial stages of the discussion. Thus
I operate throughout with real and complex numbers, and I
define a vector as an ordered set of numbers and a matrix as a
rectangular array of numbers. After the first three chapters,
however, a new and more abstract point of view becomes prominent.
Linear manifolds (i.e. abstract vector spaces) are considered, and
the algebra of matrices is then recognized to be the appropriate
tool for investigating the properties of linear operators; in fact,
particular stress is laid on the representation of linear operators by
matrices. In this way the reader is led gradually towards the
fundamental concept of invariant characterization.
   The points of contact between linear algebra and geometry are
numerous, and I have taken every opportunity of bringing them to
the reader's notice. I have not, of course, sought to provide a systematic discussion of the algebraic background of geometry, but have rather concentrated on a few special topics, such as changes of the coordinate system, reduction of quadrics to principal axes, rotations in the plane and in space, and the classification of quadrics under the projective and affine groups.

   The theory of matrices gives rise to many striking inequalities.
The proofs of these are generally very simple, but are widely
scattered throughout the literature and are often not easily
accessible. I have here attempted to collect together, with proofs,
all the better known inequalities of matrix theory. I have also
included a brief sketch of the theory of matrix power series, a topic
of considerable interest and elegance not normally dealt with in
elementary textbooks.
   Numerous exercises are incorporated in the text. They are
designed not so much to test the reader's ingenuity as to direct his
attention to  analogues, generalizations, alternative proofs, and so
on. The reader is recommended to work through these exercises, as the results embodied in them are frequently used in the subsequent discussion. At the end of each chapter there is a series of miscellaneous problems arranged approximately in order of increasing difficulty. Some of these involve only routine calculations, others call for some manipulative skill, and yet others carry the general theory beyond the stage reached in the text. A number of these problems have been taken from recent examination papers in mathematics, and thanks for permission to use them are due to the Delegates of the Clarendon Press, the Syndics of the Cambridge University Press, and the Universities of Bristol, London, Liverpool, Manchester, and Sheffield.
   The number of existing books on linear algebra is large, and it is therefore difficult to make a detailed acknowledgement of sources.
I ought, however, to mention Turnbull and Aitken, An Introduction
to the Theory of Canonical Matrices, and MacDuffee, The Theory
of Matrices, on both of which I have drawn heavily for historical
references.
   I have received much help from a number of friends and
colleagues. Professor A. G. Walker first suggested that I should write a book on linear algebra and his encouragement has been invaluable. Mr. H. Burkill, Mr. A. R. Curtis, Dr. C. S. Davis, Dr. H. K. Farahat, Dr. Christine M. Hamill, Professor H. A. Heilbronn, Professor D. G. Northcott, and Professor A. Oppenheim have all helped me in a variety of ways, by checking parts of the manuscript or advising me on specific points. Mr. J. C. Shepherdson read an early version of the manuscript and his acute comments have enabled me to remove many obscurities and ambiguities; he has, in addition, given me considerable help with Chapters IX and X. The greatest debt I owe is to Dr. G. T. Kneebone and Professor R. Rado, with both of whom, for several years past, I have been
in the habit of discussing problems of linear algebra and their
presentation to students. But for these conversations I should not
have been able to write the book. Dr. Kneebone has also read
and criticized the manuscript at every stage of preparation and
Professor Rado has supplied me with several of the proofs and
problems which appear in the text. Finally, I wish to record my
thanks to the officers of the Clarendon Press for their helpful
co-operation.
CONTENTS


PART I
DETERMINANTS, VECTORS, MATRICES, AND LINEAR EQUATIONS

I. DETERMINANTS
   1.1. Arrangements and the ε-symbol
   1.2. Elementary properties of determinants
   1.3. Multiplication of determinants
   1.4. Expansion theorems
   1.5. Jacobi's theorem
   1.6. Two special theorems on linear equations

II. VECTOR SPACES AND LINEAR MANIFOLDS
   2.1. The algebra of vectors
   2.2. Linear manifolds
   2.3. Linear dependence and bases
   2.4. Vector representation of linear manifolds
   2.5. Inner products and orthonormal bases

III. THE ALGEBRA OF MATRICES
   3.1. Elementary algebra
   3.2. Preliminary notions concerning matrices
   3.3. Addition and multiplication of matrices
   3.4.
   3.5. Adjugate matrices
   3.6. Inverse matrices
   3.7. Rational functions of a square matrix
   3.8. Partitioned matrices

IV. LINEAR OPERATORS
   4.1. Change of basis in a linear manifold
   4.2. Linear operators and their representations
   4.3. Isomorphisms and automorphisms of linear manifolds
   4.4. Further instances of linear operators

V. SYSTEMS OF LINEAR EQUATIONS AND RANK OF MATRICES
   5.1. Preliminary results
   5.2. The rank theorem
   5.3. The general theory of linear equations
   5.4. Systems of homogeneous linear equations
   5.5. Miscellaneous applications
   5.6. Further theorems on rank of matrices

VI. ELEMENTARY OPERATIONS AND THE CONCEPT OF EQUIVALENCE
   6.1. E-operations and E-matrices
   6.2. Equivalent matrices
   6.3. Applications of the preceding theory
   6.4. Congruence transformations
   6.5. The general concept of equivalence
   6.6. Axiomatic characterization of determinants


PART II
FURTHER DEVELOPMENT OF MATRIX THEORY

VII. THE CHARACTERISTIC EQUATION
   7.1. The coefficients of the characteristic polynomial
   7.2. Characteristic polynomials and similarity transformations
   7.3. Characteristic roots of rational functions of matrices
   7.4. The minimum polynomial and the theorem of Cayley and Hamilton
   7.5. Estimates of characteristic roots
   7.6. Characteristic vectors

VIII. ORTHOGONAL AND UNITARY MATRICES
   8.1. Orthogonal matrices
   8.2. Unitary matrices
   8.3. Rotations in the plane
   8.4. Rotations in space

IX. GROUPS
   9.1. The axioms of group theory
   9.2. Matrix groups and operator groups
   9.3. Representation of groups by matrices
   9.4. Groups of singular matrices
   9.5. Invariant spaces and groups of linear transformations

X. CANONICAL FORMS
   10.1. The idea of a canonical form
   10.2. Diagonal canonical forms under the similarity group
   10.3. Diagonal canonical forms under the orthogonal similarity group and the unitary similarity group
   10.4. Triangular canonical forms
   10.5. An intermediate canonical form
   10.6. Simultaneous similarity transformations

XI. MATRIX ANALYSIS
   11.1. Convergent matrix sequences
   11.2. Power series and matrix functions
   11.3. The relation between matrix functions and matrix polynomials
   11.4. Systems of linear differential equations


PART III
QUADRATIC FORMS

XII. BILINEAR, QUADRATIC, AND HERMITIAN FORMS
   12.1. Operators and forms of the bilinear and quadratic types
   12.2. Orthogonal reduction to diagonal form
   12.3. General reduction to diagonal form
   12.4. The problem of equivalence. Rank and signature
   12.5. Classification of quadrics
   12.6. Hermitian forms

XIII. DEFINITE AND INDEFINITE FORMS
   13.1. The value classes
   13.2. Transformations of positive definite forms
   13.3. Determinantal criteria
   13.4. Simultaneous reduction of two quadratic forms
   13.5. The inequalities of Hadamard, Minkowski, Fischer, and Oppenheim

BIBLIOGRAPHY

INDEX
PART I

 DETERMINANTS, VECTORS, MATRICES,
      AND LINEAR EQUATIONS

                                          I

                            DETERMINANTS
THE present book is intended to give a systematic account of the
elementary parts of linear algebra. The technique best suited to
this branch of mathematics is undoubtedly that provided by the
calculus of matrices, to which much of the book is devoted, but we
shall also require to make considerable use of the theory of deter­
minants, partly for theoretical purposes and partly as an aid to
computation. In this opening chapter we shall develop the principal
properties of determinants to the extent to which they are needed
for the treatment of linear algebra.†
   The theory of determinants was, indeed, the first topic in linear algebra to be studied intensively. It was initiated by Leibnitz in 1696, developed further by Bezout, Vandermonde, Cramer, Lagrange, and Laplace, and given the form with which we are now familiar by Cauchy, Jacobi, and Sylvester in the first half of the nineteenth century. The term 'determinant' occurs for the first time in Gauss's Disquisitiones arithmeticae (1801).‡
1.1. Arrangements and the ε-symbol
    In order to define determinants it is necessary to refer to arrange­
ments among a set of numbers, and the theory of determinants
can be based on a few simple results concerning such arrangements.
In the present section we shall therefore derive the requisite
preliminary results.

   1.1.1. We shall denote by $(\lambda_1,\ldots,\lambda_n)$ the ordered set consisting of the integers $\lambda_1,\ldots,\lambda_n$.
  † For a much more detailed discussion of determinants see Kowalewski, Einführung in die Determinantentheorie. Briefer accounts will be found in Burnside and Panton, The Theory of Equations, and in Ferrar, 2, Aitken, 10, and Perron, 12. (Numbers in bold-face type refer to the bibliography at the end.)
  ‡ For historical and bibliographical information see Muir, The Theory of Determinants in the Historical Order of Development.

    DEFINITION 1. 1.1. If (AI"'" An) and (1-'1"'" I-'n) contain the same
 (distinct) integers, but these integers do not necessarily occur in the
same order, then (Al, ,An) and (l-'l>"',l-'n) are said to be ARRANGE­
                       ...
MENTst of each other. In symbols: (Al, .. . An) = d(l-'v . .. o/1,n) or
                                                ,
(1L1"",l-'n) = d(Al,···, An)·
   We shall for the most part be concerned with arrangements of the
first n positive integers. If (v1 ,vn )      d(l •...• n) and (kl,.. . ,kn)
                                      vk J = d(I, ... , n). We have the
                                         •...      =



= d(I .... , n), then clearly (Vk1,""
following result.
   THEOREM 1.1.1. (i) Let $(\nu_1,\ldots,\nu_n)$ vary over all arrangements of $(1,\ldots,n)$, and let $(k_1,\ldots,k_n)$ be a fixed arrangement of $(1,\ldots,n)$. Then $(\nu_{k_1},\ldots,\nu_{k_n})$ varies over all arrangements of $(1,\ldots,n)$.
   (ii) Let $(\nu_1,\ldots,\nu_n)$ vary over all arrangements of $(1,\ldots,n)$, and let $(\mu_1,\ldots,\mu_n)$ be a fixed arrangement of $(1,\ldots,n)$. The arrangement $(\lambda_1,\ldots,\lambda_n)$, defined by the conditions
$$\nu_{\lambda_1} = \mu_1,\ \ldots,\ \nu_{\lambda_n} = \mu_n,$$
then varies over all arrangements of $(1,\ldots,n)$.
   This theorem is almost obvious. To prove (i), suppose that for two different choices of $(\nu_1,\ldots,\nu_n)$, say $(\alpha_1,\ldots,\alpha_n)$ and $(\beta_1,\ldots,\beta_n)$, the arrangement $(\nu_{k_1},\ldots,\nu_{k_n})$ is the same, i.e.
$$(\alpha_{k_1},\ldots,\alpha_{k_n}) = (\beta_{k_1},\ldots,\beta_{k_n}),$$
and so
$$\alpha_{k_1} = \beta_{k_1},\ \ldots,\ \alpha_{k_n} = \beta_{k_n}.$$
These relations are, in fact, the same as
$$\alpha_1 = \beta_1,\ \ldots,\ \alpha_n = \beta_n,$$
although they are stated in a different order. The two arrangements are thus identical, contrary to hypothesis. It therefore follows that, as $(\nu_1,\ldots,\nu_n)$ varies over the $n!$ arrangements of $(1,\ldots,n)$, $(\nu_{k_1},\ldots,\nu_{k_n})$ also varies, without repetition, over arrangements of $(1,\ldots,n)$. Hence $(\nu_{k_1},\ldots,\nu_{k_n})$ varies, in fact, over all the $n!$ arrangements.
   The second part of the theorem is established by the same type of argument. Suppose that for two different choices of $(\nu_1,\ldots,\nu_n)$, say $(\alpha_1,\ldots,\alpha_n)$ and $(\beta_1,\ldots,\beta_n)$, the arrangement $(\lambda_1,\ldots,\lambda_n)$ is the same, i.e.
$$\alpha_{\lambda_1} = \mu_1 = \beta_{\lambda_1},\ \ldots,\ \alpha_{\lambda_n} = \mu_n = \beta_{\lambda_n}.$$
Then $(\alpha_1,\ldots,\alpha_n) = (\beta_1,\ldots,\beta_n)$, contrary to hypothesis, and the assertion follows easily.
  † We avoid the familiar term 'permutation' since this will be used in a somewhat different sense in Chapter IX.




   1.1.2. DEFINITION 1.1.2. For all real values of x the function $\operatorname{sgn} x$ (read: signum x) is defined as
$$\operatorname{sgn} x = \begin{cases} \;\;\,1 & (x > 0)\\ \;\;\,0 & (x = 0)\\ -1 & (x < 0).\end{cases}$$
   EXERCISE 1.1.1. Show that
$$\operatorname{sgn} x \cdot \operatorname{sgn} y = \operatorname{sgn} xy,$$
and deduce that
$$\operatorname{sgn} x_1 \cdot \operatorname{sgn} x_2 \cdots \operatorname{sgn} x_k = \operatorname{sgn}(x_1 x_2 \cdots x_k).$$
   DEFINITION 1.1.3.
   (i) $\displaystyle \epsilon(\lambda_1,\ldots,\lambda_n) = \operatorname{sgn}\prod_{1\le r<s\le n}(\lambda_s-\lambda_r).$†
   (ii) $\displaystyle \epsilon\begin{pmatrix}\lambda_1,\ldots,\lambda_n\\ \mu_1,\ldots,\mu_n\end{pmatrix} = \epsilon(\lambda_1,\ldots,\lambda_n)\cdot\epsilon(\mu_1,\ldots,\mu_n).$
  † Empty products are, as usual, defined to have the value 1. This implies, in particular, that for n = 1 every ε-symbol is equal to 1.
   EXERCISE 1.1.2. Show that if $\lambda_1 < \cdots < \lambda_n$, then $\epsilon(\lambda_1,\ldots,\lambda_n) = 1$. Also show that if any two λ's are equal, then $\epsilon(\lambda_1,\ldots,\lambda_n) = 0$.
   EXERCISE 1.1.3. The interchange of two λ's in $(\lambda_1,\ldots,\lambda_n)$ is called a transposition. Show that, if $(\lambda_1,\ldots,\lambda_n) = \mathrm{d}(1,\ldots,n)$, then it is possible to obtain $(\lambda_1,\ldots,\lambda_n)$ from $(1,\ldots,n)$ by a succession of transpositions. Show, furthermore, that if this process can be carried out by s transpositions, then
$$\epsilon(\lambda_1,\ldots,\lambda_n) = (-1)^s.$$
Deduce that, if the same process can also be carried out by s′ transpositions, then s and s′ are either both even or both odd.
   THEOREM 1.1.2. If $(\lambda_1,\ldots,\lambda_n)$, $(\mu_1,\ldots,\mu_n)$, and $(k_1,\ldots,k_n)$ are arrangements of $(1,\ldots,n)$, then
$$\epsilon\begin{pmatrix}\lambda_1,\ldots,\lambda_n\\ \mu_1,\ldots,\mu_n\end{pmatrix} = \epsilon\begin{pmatrix}\lambda_{k_1},\ldots,\lambda_{k_n}\\ \mu_{k_1},\ldots,\mu_{k_n}\end{pmatrix}.$$‡
  ‡ Definition 1.1.3 implies, of course, that $\epsilon(\lambda_{k_1},\ldots,\lambda_{k_n}) = \operatorname{sgn}\prod_{1\le i<j\le n}(\lambda_{k_j}-\lambda_{k_i})$.
   We may express this identity by saying that if $(\lambda_1,\ldots,\lambda_n)$ and $(\mu_1,\ldots,\mu_n)$ are subjected to the same derangement, then the value of $\epsilon\begin{pmatrix}\lambda_1,\ldots,\lambda_n\\ \mu_1,\ldots,\mu_n\end{pmatrix}$ remains unaltered. To prove this we observe that
$$(\lambda_{k_j}-\lambda_{k_i})(\mu_{k_j}-\mu_{k_i}) = (\lambda_s-\lambda_r)(\mu_s-\mu_r), \tag{1.1.1}$$
where
$$r = \min(k_i, k_j), \qquad s = \max(k_i, k_j). \tag{1.1.2}$$
Now if r, s (such that $1 \le r < s \le n$) are given, then there exist unique integers i, j (such that $1 \le i < j \le n$) satisfying (1.1.2). Thus there is a biunique correspondence (i.e. a one-one correspondence) between the pairs $k_i, k_j$ and the pairs $r, s$. Hence, by (1.1.1),
$$\prod_{1\le i<j\le n}(\lambda_{k_j}-\lambda_{k_i})(\mu_{k_j}-\mu_{k_i}) = \prod_{1\le r<s\le n}(\lambda_s-\lambda_r)(\mu_s-\mu_r).$$
Therefore, by Exercise 1.1.1,
$$\operatorname{sgn}\prod_{1\le i<j\le n}(\lambda_{k_j}-\lambda_{k_i})\cdot\operatorname{sgn}\prod_{1\le i<j\le n}(\mu_{k_j}-\mu_{k_i}) = \operatorname{sgn}\prod_{1\le r<s\le n}(\lambda_s-\lambda_r)\cdot\operatorname{sgn}\prod_{1\le r<s\le n}(\mu_s-\mu_r),$$
i.e.
$$\epsilon\begin{pmatrix}\lambda_{k_1},\ldots,\lambda_{k_n}\\ \mu_{k_1},\ldots,\mu_{k_n}\end{pmatrix} = \epsilon\begin{pmatrix}\lambda_1,\ldots,\lambda_n\\ \mu_1,\ldots,\mu_n\end{pmatrix}.$$
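The ε-symbol of Definition 1.1.3 (i) is easy to evaluate mechanically. The following Python sketch is purely illustrative (the function names sgn and eps are not taken from the text): it forms the product of the differences $\lambda_s-\lambda_r$ and takes its signum, and the assertions check the facts stated in Exercise 1.1.2 and Theorem 1.1.3.

```python
from itertools import permutations

def sgn(x):
    """Signum function of Definition 1.1.2."""
    return (x > 0) - (x < 0)

def eps(lam):
    """epsilon(lambda_1, ..., lambda_n): the signum of the product of
    (lambda_s - lambda_r) taken over all pairs r < s (Definition 1.1.3 (i))."""
    prod = 1
    for r in range(len(lam)):
        for s in range(r + 1, len(lam)):
            prod *= lam[s] - lam[r]
    return sgn(prod)

# eps equals 1 on the natural order and 0 when two entries coincide (Exercise 1.1.2),
# and a single interchange of two entries reverses its sign (Theorem 1.1.3).
assert eps((1, 2, 3, 4)) == 1
assert eps((1, 3, 3, 4)) == 0
assert eps((1, 4, 3, 2)) == -1          # 2 and 4 interchanged
assert all(eps(p) in (-1, 1) for p in permutations(range(1, 5)))
```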


   THEOREM 1.1.3. Let $1 \le r < s \le n$. Then
$$\epsilon(1,\ldots,r-1,\,s,\,r+1,\ldots,s-1,\,r,\,s+1,\ldots,n) = -1.$$
   The expression on the left-hand side is, of course, simply $\epsilon(1,2,\ldots,n)$ with r and s interchanged. Denoting this expression by $\epsilon(\lambda_1,\ldots,\lambda_n)$, we observe that in the product
$$\prod_{1\le i<j\le n}(\lambda_j-\lambda_i)$$
there are precisely $2(s-r-1)+1 = 2s-2r-1$ negative factors, namely,
$$(r+1)-s,\quad (r+2)-s,\quad \ldots,\quad (s-1)-s,$$
$$r-(r+1),\quad r-(r+2),\quad \ldots,\quad r-(s-1),$$
$$r-s.$$
Hence $\epsilon(\lambda_1,\ldots,\lambda_n) = (-1)^{2s-2r-1} = -1$, as asserted.


   The results obtained so far are sufficient for the discussion in § 1.2 and § 1.3. The proof of Laplace's expansion theorem in § 1.4, however, presupposes a further identity.
   THEOREM 1.1.4. If $(r_1,\ldots,r_n) = \mathrm{d}(1,\ldots,n)$, $(s_1,\ldots,s_n) = \mathrm{d}(1,\ldots,n)$, and $1 \le k < n$, then
$$\epsilon\begin{pmatrix}r_1,\ldots,r_n\\ s_1,\ldots,s_n\end{pmatrix} = (-1)^{r_1+\cdots+r_k+s_1+\cdots+s_k}\,\epsilon\begin{pmatrix}r_1,\ldots,r_k\\ s_1,\ldots,s_k\end{pmatrix}\epsilon\begin{pmatrix}r_{k+1},\ldots,r_n\\ s_{k+1},\ldots,s_n\end{pmatrix}.$$

   By Exercise 1.1.1 we have
$$\epsilon(r_1,\ldots,r_n) = \operatorname{sgn}\prod_{1\le i<j\le k}(r_j-r_i)\cdot\operatorname{sgn}\prod_{k+1\le i<j\le n}(r_j-r_i)\cdot\operatorname{sgn}\prod_{\substack{1\le i\le k\\ k+1\le j\le n}}(r_j-r_i)$$
$$= \epsilon(r_1,\ldots,r_k)\cdot\epsilon(r_{k+1},\ldots,r_n)\cdot(-1)^{\nu_1+\cdots+\nu_k}, \tag{1.1.3}$$
where, for $1\le i\le k$, $\nu_i$ denotes the number of numbers among $r_{k+1},\ldots,r_n$ which are smaller than $r_i$.
   Let $r'_1,\ldots,r'_k$ be defined by the relations
$$(r'_1,\ldots,r'_k) = \mathrm{d}(r_1,\ldots,r_k), \qquad r'_1 < \cdots < r'_k,$$
and denote by $\nu'_i$ $(1\le i\le k)$ the number of numbers among $r_{k+1},\ldots,r_n$ which are smaller than $r'_i$. Then
$$\nu'_1 = r'_1-1,\quad \nu'_2 = r'_2-2,\quad \ldots,\quad \nu'_k = r'_k-k,$$
$$\nu_1+\cdots+\nu_k = \nu'_1+\cdots+\nu'_k = r_1+\cdots+r_k - \tfrac{1}{2}k(k+1),$$
and hence, by (1.1.3),
$$\epsilon(r_1,\ldots,r_n) = (-1)^{r_1+\cdots+r_k-\frac{1}{2}k(k+1)}\,\epsilon(r_1,\ldots,r_k)\,\epsilon(r_{k+1},\ldots,r_n).$$
Similarly
$$\epsilon(s_1,\ldots,s_n) = (-1)^{s_1+\cdots+s_k-\frac{1}{2}k(k+1)}\,\epsilon(s_1,\ldots,s_k)\,\epsilon(s_{k+1},\ldots,s_n),$$
and the theorem now follows at once by Definition 1.1.3 (ii).

1.2. Elementary properties of determinants
   1.2.1. We shall now be concerned with the study of certain properties of square arrays of (real or complex) numbers. A typical array is
$$\begin{matrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn}\end{matrix} \tag{1.2.1}$$
   DEFINITION 1.2.1. The $n^2$ numbers $a_{ij}$ $(i,j = 1,\ldots,n)$ are the ELEMENTS of the array (1.2.1). The elements
$$a_{i1},\ a_{i2},\ \ldots,\ a_{in}$$
constitute the i-th ROW, and the elements
$$a_{1j},\ a_{2j},\ \ldots,\ a_{nj}$$
constitute the j-th COLUMN of the array. The elements
$$a_{11},\ a_{22},\ \ldots,\ a_{nn}$$
constitute the DIAGONAL of the array, and are called the DIAGONAL ELEMENTS.
   The double suffix notation used in (1.2.1) is particularly appropriate since the two suffixes of an element specify completely its position in the array. We shall reserve the first suffix for the row and the second for the column, so that $a_{ij}$ denotes the element standing in the ith row and jth column of the array (1.2.1).

   With each square array we associate a certain number known as its determinant.
   DEFINITION 1.2.2. The DETERMINANT of the array (1.2.1) is the number
$$\sum_{(\lambda_1,\ldots,\lambda_n)} \epsilon(\lambda_1,\ldots,\lambda_n)\,a_{1\lambda_1}a_{2\lambda_2}\cdots a_{n\lambda_n}, \tag{1.2.2}$$
where the summation extends over all the $n!$ arrangements $(\lambda_1,\ldots,\lambda_n)$ of $(1,\ldots,n)$.† This determinant is denoted by
$$\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn}\end{vmatrix} \tag{1.2.3}$$
or, more briefly, by $|a_{ij}|_n$.
   Determinants were first written in the form (1.2.3), though without the use of double suffixes, by Cayley in 1841. In practice, we often use a single letter, such as D, to denote a determinant.
   The determinant (1.2.3) associated with the array (1.2.1) is plainly a polynomial, of degree n, in the $n^2$ elements of the array.
   The determinant of the array consisting of the single element $a_{11}$ is, of course, equal to $a_{11}$. Further, we have
$$\begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{vmatrix} = \epsilon(1,2)a_{11}a_{22} + \epsilon(2,1)a_{12}a_{21} = a_{11}a_{22} - a_{12}a_{21};$$
$$\begin{vmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{vmatrix} = \epsilon(1,2,3)a_{11}a_{22}a_{33} + \epsilon(1,3,2)a_{11}a_{23}a_{32} + \epsilon(2,1,3)a_{12}a_{21}a_{33} + \epsilon(2,3,1)a_{12}a_{23}a_{31} + \epsilon(3,1,2)a_{13}a_{21}a_{32} + \epsilon(3,2,1)a_{13}a_{22}a_{31}$$
$$= a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31}.$$

   We observe that each term of the expression (1.2.2) for the determinant $|a_{ij}|_n$ contains one element from each row and one element from each column of the array (1.2.1). Hence, if any array contains a row or a column consisting entirely of zeros, its determinant is equal to 0.
  † The same convention will be observed whenever a symbol such as $(\lambda_1,\ldots,\lambda_n)$ appears under the summation sign.
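Definition 1.2.2 can be transcribed directly into a (highly inefficient, $n!$-term) computation. The sketch below is illustrative only and its names are not taken from the text; it represents $\epsilon(\lambda_1,\ldots,\lambda_n)$ by the parity of the number of inversions, in the sense of Exercise 1.1.3, and reproduces the two- and three-rowed expansions written out above.

```python
from itertools import permutations

def det_by_definition(a):
    """Determinant of the square array a by Definition 1.2.2:
    the sum over all arrangements (lambda_1,...,lambda_n) of
    eps(lambda) * a[0][lambda_1] * ... * a[n-1][lambda_n]."""
    n = len(a)
    total = 0
    for lam in permutations(range(n)):
        # eps(lam) as (-1) to the number of inversions (cf. Exercise 1.1.3).
        inversions = sum(lam[r] > lam[s] for r in range(n) for s in range(r + 1, n))
        term = (-1) ** inversions
        for i in range(n):
            term *= a[i][lam[i]]
        total += term
    return total

# The two- and three-rowed cases agree with the explicit expansions in the text.
assert det_by_definition([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3
a = [[2, 0, 1], [3, -1, 4], [5, 2, -2]]
explicit = (a[0][0]*a[1][1]*a[2][2] - a[0][0]*a[1][2]*a[2][1]
            - a[0][1]*a[1][0]*a[2][2] + a[0][1]*a[1][2]*a[2][0]
            + a[0][2]*a[1][0]*a[2][1] - a[0][2]*a[1][1]*a[2][0])
assert det_by_definition(a) == explicit
```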
   A determinant is a number associated with a square array. However, it is customary to use the term 'determinant' for the array itself as well as for this number. This usage is ambiguous but convenient, and we shall adopt it since it will always be clear from the context whether we refer to the array or to the value of the determinant associated with it. In view of this convention we may speak, for instance, about the elements, rows, and columns of a determinant. The determinant (1.2.3) will be called an n-rowed determinant, or a determinant of order n.
   1.2.2. Definition 1.2.2 suffers from a lack of symmetry between the row suffixes and the column suffixes. For the row suffixes appearing in every term of the sum (1.2.2) are fixed as $1,\ldots,n$, whereas the column suffixes vary from term to term. The following theorem shows, however, that this lack of symmetry is only apparent.
   THEOREM 1.2.1. Let D be the value of the determinant (1.2.3).
   (i) If $(\lambda_1,\ldots,\lambda_n)$ is any fixed arrangement of $(1,\ldots,n)$, then
$$D = \sum_{(\mu_1,\ldots,\mu_n)} \epsilon\begin{pmatrix}\lambda_1,\ldots,\lambda_n\\ \mu_1,\ldots,\mu_n\end{pmatrix} a_{\lambda_1\mu_1}\cdots a_{\lambda_n\mu_n}.$$
   (ii) If $(\mu_1,\ldots,\mu_n)$ is any fixed arrangement of $(1,\ldots,n)$, then
$$D = \sum_{(\lambda_1,\ldots,\lambda_n)} \epsilon\begin{pmatrix}\lambda_1,\ldots,\lambda_n\\ \mu_1,\ldots,\mu_n\end{pmatrix} a_{\lambda_1\mu_1}\cdots a_{\lambda_n\mu_n}.$$
   In view of Definition 1.2.2 we have
$$D = \sum_{(\nu_1,\ldots,\nu_n)} \epsilon(\nu_1,\ldots,\nu_n)\, a_{1\nu_1}\cdots a_{n\nu_n}. \tag{1.2.4}$$
Let the same derangement which changes $(1,\ldots,n)$ into the fixed arrangement $(\lambda_1,\ldots,\lambda_n)$ change $(\nu_1,\ldots,\nu_n)$ into $(\mu_1,\ldots,\mu_n)$. Then
$$a_{1\nu_1}\cdots a_{n\nu_n} = a_{\lambda_1\mu_1}\cdots a_{\lambda_n\mu_n},$$
and, by Theorem 1.1.2 (p. 3),
$$\epsilon(\nu_1,\ldots,\nu_n) = \epsilon\begin{pmatrix}1,\ldots,n\\ \nu_1,\ldots,\nu_n\end{pmatrix} = \epsilon\begin{pmatrix}\lambda_1,\ldots,\lambda_n\\ \mu_1,\ldots,\mu_n\end{pmatrix}.$$
Hence, by Theorem 1.1.1 (i) (p. 2),
$$D = \sum_{(\mu_1,\ldots,\mu_n)} \epsilon\begin{pmatrix}\lambda_1,\ldots,\lambda_n\\ \mu_1,\ldots,\mu_n\end{pmatrix} a_{\lambda_1\mu_1}\cdots a_{\lambda_n\mu_n},$$
and the first part of the theorem is therefore proved.
   To prove the second part we again start from (1.2.4). Let the same derangement which changes $(\nu_1,\ldots,\nu_n)$ into the fixed arrangement $(\mu_1,\ldots,\mu_n)$ change $(1,\ldots,n)$ into $(\lambda_1,\ldots,\lambda_n)$. Then, by Theorem 1.1.2,
$$\epsilon(\nu_1,\ldots,\nu_n) = \epsilon\begin{pmatrix}1,\ldots,n\\ \nu_1,\ldots,\nu_n\end{pmatrix} = \epsilon\begin{pmatrix}\lambda_1,\ldots,\lambda_n\\ \mu_1,\ldots,\mu_n\end{pmatrix},$$
and also
$$a_{1\nu_1}\cdots a_{n\nu_n} = a_{\lambda_1\mu_1}\cdots a_{\lambda_n\mu_n}.$$
Hence, by Theorem 1.1.1 (ii) (p. 2),
$$D = \sum_{(\lambda_1,\ldots,\lambda_n)} \epsilon\begin{pmatrix}\lambda_1,\ldots,\lambda_n\\ \mu_1,\ldots,\mu_n\end{pmatrix} a_{\lambda_1\mu_1}\cdots a_{\lambda_n\mu_n},$$
as asserted.

   THEOREM 1.2.2. The value of a determinant remains unaltered when the rows and columns are interchanged, i.e.
$$|a_{ij}|_n = |a_{ji}|_n.$$
   Write $b_{rs} = a_{sr}$ $(r, s = 1,\ldots,n)$. We have to show that $|a_{ij}|_n = |b_{ij}|_n$. Now, by Theorem 1.2.1 (ii) and Definition 1.2.2,
$$|b_{ij}|_n = \sum_{(\lambda_1,\ldots,\lambda_n)} \epsilon\begin{pmatrix}\lambda_1,\ldots,\lambda_n\\ 1,\ldots,n\end{pmatrix} b_{\lambda_1 1}\cdots b_{\lambda_n n} = \sum_{(\lambda_1,\ldots,\lambda_n)} \epsilon(\lambda_1,\ldots,\lambda_n)\, a_{1\lambda_1}\cdots a_{n\lambda_n} = |a_{ij}|_n,$$
and the theorem is therefore proved.

   EXERCISE 1.2.1. Give a direct verification of Theorem 1.2.2 for 2-rowed and 3-rowed determinants.

  Theorem 1.2.2 shows that there is symmetry between the rows
and columns of a determinant. Hence every statement proved
about the rows of a determinant is equally valid for columns, and
conversely.

   THEOREM 1.2.3. If two rows (or columns) of a determinant D are interchanged, then the resulting determinant has the value $-D$.

   Let $1 \le r < s \le n$, and denote by $D' = |a'_{ij}|_n$ the determinant obtained by interchanging the rth and sth rows in $D = |a_{ij}|_n$. Then
$$a'_{ij} = \begin{cases} a_{sj} & (i = r)\\ a_{ij} & (i \ne r;\ i \ne s)\\ a_{rj} & (i = s).\end{cases}$$
Hence, by Definition 1.2.2,
$$D' = \sum_{(\lambda_1,\ldots,\lambda_n)}\epsilon(\lambda_1,\ldots,\lambda_n)\,a'_{1\lambda_1}\cdots a'_{n\lambda_n} = \sum_{(\lambda_1,\ldots,\lambda_n)}\epsilon(\lambda_1,\ldots,\lambda_n)\,a_{1\lambda_1}\cdots a_{s\lambda_r}\cdots a_{r\lambda_s}\cdots a_{n\lambda_n}.$$
But, by Theorem 1.1.3 (p. 4), $\epsilon(1,\ldots,s,\ldots,r,\ldots,n) = -1$, and so
$$D' = -\sum_{(\lambda_1,\ldots,\lambda_n)}\epsilon\begin{pmatrix}1,\ldots,s,\ldots,r,\ldots,n\\ \lambda_1,\ldots,\lambda_r,\ldots,\lambda_s,\ldots,\lambda_n\end{pmatrix}a_{1\lambda_1}\cdots a_{s\lambda_r}\cdots a_{r\lambda_s}\cdots a_{n\lambda_n}.$$
Hence, by Theorem 1.2.1 (i), $D' = -D$.


   COROLLARY. If two rows (or two columns) of a determinant are identical, then the determinant vanishes.
   Let D be a determinant with two identical rows, and denote by D′ the determinant obtained from D by interchanging these two rows. Then obviously $D' = D$. But, by Theorem 1.2.3, $D' = -D$, and therefore $D = 0$.

   EXERCISE 1.2.2. Let $r_1 < \cdots < r_k$. Show that, if the rows with suffixes $r_1, r_2, \ldots, r_k$ of a determinant D are moved into 1st, 2nd, ..., kth place respectively, while the relative order of the remaining rows stays unchanged, then the resulting determinant is equal to
$$(-1)^{r_1+\cdots+r_k-\frac{1}{2}k(k+1)}\,D.$$

   When every element of a particular row or column of a determinant is multiplied by a constant k, we say that the row or column in question is multiplied by k.

   THEOREM 1.2.4. If a row (or column) of a determinant is multiplied by a constant k, then the value of the determinant is also multiplied
by k.
   Let $D = |a_{ij}|_n$ be a given determinant and let D′ be obtained from it by multiplying the rth row by k. Then
$$D' = \sum_{(\lambda_1,\ldots,\lambda_n)}\epsilon(\lambda_1,\ldots,\lambda_n)\,a_{1\lambda_1}\cdots(k\,a_{r\lambda_r})\cdots a_{n\lambda_n} = k\sum_{(\lambda_1,\ldots,\lambda_n)}\epsilon(\lambda_1,\ldots,\lambda_n)\,a_{1\lambda_1}\cdots a_{n\lambda_n} = kD.$$

   The next theorem provides a method for expressing any determinant as a sum of two determinants.
   THEOREM 1.2.5.
$$\begin{vmatrix} a_{11} & \cdots & a_{1r}+a'_{1r} & \cdots & a_{1n}\\ \vdots & & \vdots & & \vdots\\ a_{n1} & \cdots & a_{nr}+a'_{nr} & \cdots & a_{nn}\end{vmatrix} = \begin{vmatrix} a_{11} & \cdots & a_{1r} & \cdots & a_{1n}\\ \vdots & & \vdots & & \vdots\\ a_{n1} & \cdots & a_{nr} & \cdots & a_{nn}\end{vmatrix} + \begin{vmatrix} a_{11} & \cdots & a'_{1r} & \cdots & a_{1n}\\ \vdots & & \vdots & & \vdots\\ a_{n1} & \cdots & a'_{nr} & \cdots & a_{nn}\end{vmatrix}.$$
   Denoting the determinant on the left-hand side by $|b_{ij}|_n$, we have
$$b_{ij} = a_{ij} \quad (j \ne r), \qquad b_{ir} = a_{ir} + a'_{ir}.$$
Hence, by Theorem 1.2.1 (ii) (p. 7),
$$|b_{ij}|_n = \sum_{(\lambda_1,\ldots,\lambda_n)}\epsilon(\lambda_1,\ldots,\lambda_n)\,b_{\lambda_1 1}\cdots b_{\lambda_r r}\cdots b_{\lambda_n n} = \sum_{(\lambda_1,\ldots,\lambda_n)}\epsilon(\lambda_1,\ldots,\lambda_n)\,a_{\lambda_1 1}\cdots(a_{\lambda_r r}+a'_{\lambda_r r})\cdots a_{\lambda_n n}$$
$$= \sum_{(\lambda_1,\ldots,\lambda_n)}\epsilon(\lambda_1,\ldots,\lambda_n)\,a_{\lambda_1 1}\cdots a_{\lambda_r r}\cdots a_{\lambda_n n} + \sum_{(\lambda_1,\ldots,\lambda_n)}\epsilon(\lambda_1,\ldots,\lambda_n)\,a_{\lambda_1 1}\cdots a'_{\lambda_r r}\cdots a_{\lambda_n n},$$
and these two sums are precisely the determinants on the right-hand side.
  EXERCISE 1.2.3. State the analogous result for rows.

   A useful corollary to Theorem 1.2.5 can now be easily proved by induction. It enables us to express a determinant, each of whose elements is the sum of h terms, as the sum of $h^n$ determinants.
   COROLLARY.
$$\begin{vmatrix}\, \sum_{k=1}^{h} a^{(k)}_{11} & \cdots & \sum_{k=1}^{h} a^{(k)}_{1n}\\ \vdots & & \vdots\\ \sum_{k=1}^{h} a^{(k)}_{n1} & \cdots & \sum_{k=1}^{h} a^{(k)}_{nn}\,\end{vmatrix} = \sum_{k_1=1}^{h}\cdots\sum_{k_n=1}^{h}\begin{vmatrix}\, a^{(k_1)}_{11} & \cdots & a^{(k_n)}_{1n}\\ \vdots & & \vdots\\ a^{(k_1)}_{n1} & \cdots & a^{(k_n)}_{nn}\,\end{vmatrix}.$$

   THEOREM 1.2.6. The value of a determinant remains unchanged if to any row (or column) is added any multiple of another row (or column).
   By saying that the sth row of a determinant is added to the rth row we mean, of course, that every element of the sth row is added to the corresponding element of the rth row. Similar terminology is used for columns.
   Let $D = |a_{ij}|_n$, and suppose that D′ denotes the determinant obtained when k times the sth row is added to the rth row in D. Assuming that $r < s$ we have
$$D' = \begin{vmatrix} a_{11} & \cdots & a_{1n}\\ \vdots & & \vdots\\ a_{r1}+ka_{s1} & \cdots & a_{rn}+ka_{sn}\\ \vdots & & \vdots\\ a_{s1} & \cdots & a_{sn}\\ \vdots & & \vdots\\ a_{n1} & \cdots & a_{nn}\end{vmatrix}.$$
Hence, by Theorem 1.2.5 (as applied to rows),
$$D' = \begin{vmatrix} a_{11} & \cdots & a_{1n}\\ \vdots & & \vdots\\ a_{r1} & \cdots & a_{rn}\\ \vdots & & \vdots\\ a_{s1} & \cdots & a_{sn}\\ \vdots & & \vdots\\ a_{n1} & \cdots & a_{nn}\end{vmatrix} + \begin{vmatrix} a_{11} & \cdots & a_{1n}\\ \vdots & & \vdots\\ ka_{s1} & \cdots & ka_{sn}\\ \vdots & & \vdots\\ a_{s1} & \cdots & a_{sn}\\ \vdots & & \vdots\\ a_{n1} & \cdots & a_{nn}\end{vmatrix},$$
and so, by Theorem 1.2.4 and the corollary to Theorem 1.2.3,
$$D' = D + k\begin{vmatrix} a_{11} & \cdots & a_{1n}\\ \vdots & & \vdots\\ a_{s1} & \cdots & a_{sn}\\ \vdots & & \vdots\\ a_{s1} & \cdots & a_{sn}\\ \vdots & & \vdots\\ a_{n1} & \cdots & a_{nn}\end{vmatrix} = D.$$
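The behaviour of a determinant under the operations of Theorems 1.2.2 to 1.2.6 is easy to check numerically. The following sketch is merely illustrative and relies on the numpy library's own determinant routine rather than on the definitions of the text; the random test array is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(-5, 6, size=(4, 4)).astype(float)
d = np.linalg.det(a)

# Theorem 1.2.2: rows and columns may be interchanged.
assert np.isclose(np.linalg.det(a.T), d)

# Theorem 1.2.3: interchanging two rows multiplies the value by -1.
b = a.copy()
b[[0, 2]] = b[[2, 0]]
assert np.isclose(np.linalg.det(b), -d)

# Theorem 1.2.4: multiplying a row by k multiplies the value by k.
c = a.copy()
c[1] *= 3.0
assert np.isclose(np.linalg.det(c), 3.0 * d)

# Theorem 1.2.6: adding k times one row to another leaves the value unchanged.
e = a.copy()
e[3] += 2.5 * e[1]
assert np.isclose(np.linalg.det(e), d)
```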


1.3. Multiplication of determinants
   We shall next prove that it is always possible to express the product of two determinants of the same order n as a determinant of order n.
   THEOREM 1.3.1. (Multiplication theorem for determinants)
   Let $A = |a_{ij}|_n$ and $B = |b_{ij}|_n$ be given determinants, and write $C = |c_{ij}|_n$, where
$$c_{rs} = \sum_{i=1}^{n} a_{ri}\,b_{is} \quad (r, s = 1,\ldots,n).$$
Then
$$AB = C. \tag{1.3.1}$$
   We have
$$C = \sum_{(\lambda_1,\ldots,\lambda_n)} \epsilon(\lambda_1,\ldots,\lambda_n)\, c_{1\lambda_1}\cdots c_{n\lambda_n} = \sum_{(\lambda_1,\ldots,\lambda_n)} \epsilon(\lambda_1,\ldots,\lambda_n)\Bigl(\sum_{\mu_1=1}^{n} a_{1\mu_1} b_{\mu_1\lambda_1}\Bigr)\cdots\Bigl(\sum_{\mu_n=1}^{n} a_{n\mu_n} b_{\mu_n\lambda_n}\Bigr)$$
$$= \sum_{\mu_1=1}^{n}\cdots\sum_{\mu_n=1}^{n} a_{1\mu_1}\cdots a_{n\mu_n} \sum_{(\lambda_1,\ldots,\lambda_n)} \epsilon(\lambda_1,\ldots,\lambda_n)\, b_{\mu_1\lambda_1}\cdots b_{\mu_n\lambda_n}. \tag{1.3.2}$$
By Definition 1.2.2 the inner sum in (1.3.2) is equal to
$$\begin{vmatrix} b_{\mu_1 1} & \cdots & b_{\mu_1 n}\\ \vdots & & \vdots\\ b_{\mu_n 1} & \cdots & b_{\mu_n n}\end{vmatrix}.$$
Hence, if any two μ's are equal, then, by the corollary to Theorem 1.2.3, the inner sum in (1.3.2) vanishes. It follows that in the n-fold summation in (1.3.2) we can omit all sets of μ's which contain at least two equal numbers. The summation then reduces to a simple summation over the $n!$ arrangements $(\mu_1,\ldots,\mu_n)$, and we therefore have
$$C = \sum_{(\mu_1,\ldots,\mu_n)} a_{1\mu_1}\cdots a_{n\mu_n} \sum_{(\lambda_1,\ldots,\lambda_n)} \epsilon(\lambda_1,\ldots,\lambda_n)\, b_{\mu_1\lambda_1}\cdots b_{\mu_n\lambda_n}$$
$$= \sum_{(\mu_1,\ldots,\mu_n)} \epsilon(\mu_1,\ldots,\mu_n)\, a_{1\mu_1}\cdots a_{n\mu_n} \sum_{(\lambda_1,\ldots,\lambda_n)} \epsilon\begin{pmatrix}\mu_1,\ldots,\mu_n\\ \lambda_1,\ldots,\lambda_n\end{pmatrix} b_{\mu_1\lambda_1}\cdots b_{\mu_n\lambda_n}.$$
Hence, by Theorem 1.2.1 (i) (p. 7),
$$C = \sum_{(\mu_1,\ldots,\mu_n)} \epsilon(\mu_1,\ldots,\mu_n)\, a_{1\mu_1}\cdots a_{n\mu_n}\cdot B = AB.$$


   The theorem just proved shows how we may form a determinant which is equal to the product of two given determinants A and B. We have, in fact, $AB = C$, where the element standing in the rth row and sth column of C is obtained by multiplying together the corresponding elements in the rth row of A and the sth column of B and adding the products thus obtained. The determinant C constructed in this way may be said to have been obtained by multiplying A and B 'rows by columns'. Now, by Theorem 1.2.2, the values of A and B are unaltered if rows and columns in either determinant or in both determinants are interchanged. Hence we can equally well form the product AB by carrying out the multiplication 'rows by rows', or 'columns by columns', or 'columns by rows'. These conclusions are expressed in the next theorem.


   THEOREM 1.3.2. The equality (1.3.1) continues to hold if the determinant $C = |c_{ij}|_n$ is defined by any one of the following sets of relations:
$$c_{rs} = \sum_{i=1}^{n} a_{ir}\,b_{is} \quad (r,s = 1,\ldots,n);$$
$$c_{rs} = \sum_{i=1}^{n} a_{ir}\,b_{si} \quad (r,s = 1,\ldots,n);$$
$$c_{rs} = \sum_{i=1}^{n} a_{ri}\,b_{si} \quad (r,s = 1,\ldots,n).$$
   An interesting application of Theorem 1.3.2 will be given in § 1.4.1 (p. 19).
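The multiplication theorem can be illustrated numerically: whichever of the rules of Theorems 1.3.1 and 1.3.2 is used to build C, its determinant equals AB. A brief numpy sketch, illustrative only and with arbitrarily chosen test matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(3, 3)).astype(float)
B = rng.integers(-3, 4, size=(3, 3)).astype(float)

# 'Rows by columns' (Theorem 1.3.1): c_rs = sum_i a_ri * b_is.
C1 = A @ B
# 'Rows by rows' (one of the variants in Theorem 1.3.2): c_rs = sum_i a_ri * b_si.
C2 = A @ B.T
# 'Columns by columns': c_rs = sum_i a_ir * b_is.
C3 = A.T @ B

detA, detB = np.linalg.det(A), np.linalg.det(B)
for C in (C1, C2, C3):
    assert np.isclose(np.linalg.det(C), detA * detB)
```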


                                                                                                                l
   EXERCISE 1.3.1. Use the definition of a determinant to show that
$$\begin{vmatrix} a_{11} & \cdots & a_{1m} & 0 & \cdots & 0\\ \vdots & & \vdots & \vdots & & \vdots\\ a_{m1} & \cdots & a_{mm} & 0 & \cdots & 0\\ 0 & \cdots & 0 & 1 & \cdots & 0\\ \vdots & & \vdots & \vdots & \ddots & \vdots\\ 0 & \cdots & 0 & 0 & \cdots & 1\end{vmatrix} = \begin{vmatrix} a_{11} & \cdots & a_{1m}\\ \vdots & & \vdots\\ a_{m1} & \cdots & a_{mm}\end{vmatrix}.$$
Deduce, by means of Theorem 1.3.1, that
$$\begin{vmatrix} a_{11} & \cdots & a_{1m} & 0 & \cdots & 0\\ \vdots & & \vdots & \vdots & & \vdots\\ a_{m1} & \cdots & a_{mm} & 0 & \cdots & 0\\ 0 & \cdots & 0 & b_{11} & \cdots & b_{1n}\\ \vdots & & \vdots & \vdots & & \vdots\\ 0 & \cdots & 0 & b_{n1} & \cdots & b_{nn}\end{vmatrix} = \begin{vmatrix} a_{11} & \cdots & a_{1m}\\ \vdots & & \vdots\\ a_{m1} & \cdots & a_{mm}\end{vmatrix}\cdot\begin{vmatrix} b_{11} & \cdots & b_{1n}\\ \vdots & & \vdots\\ b_{n1} & \cdots & b_{nn}\end{vmatrix}.$$

1.4. Expansion theorems
   1.4.1. We have already obtained a number of results which can be used in the evaluation of determinants. A procedure that is still more effective for this purpose consists in expressing a determinant in terms of other determinants of lower order. The object of the present section is to develop such a procedure.
   DEFINITION 1.4.1. The COFACTOR $A_{rs}$ of the element $a_{rs}$ in the determinant $D = |a_{ij}|_n$ is defined as
$$A_{rs} = (-1)^{r+s} D_{rs} \quad (r,s = 1,\ldots,n),$$
where $D_{rs}$ is the determinant of order $n-1$ obtained when the r-th row and s-th column are deleted from D.
   For example, if
$$D = \begin{vmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{vmatrix},$$
then
$$A_{11} = (-1)^{1+1}\begin{vmatrix} a_{22} & a_{23}\\ a_{32} & a_{33}\end{vmatrix} = a_{22}a_{33}-a_{23}a_{32}$$
and
$$A_{23} = (-1)^{2+3}\begin{vmatrix} a_{11} & a_{12}\\ a_{31} & a_{32}\end{vmatrix} = a_{12}a_{31}-a_{11}a_{32}.$$
   EXERCISE 1.4.1. Suppose that $|b_{ij}|_n$ is the determinant obtained when two adjacent rows (or columns) of a determinant $|a_{ij}|_n$ are interchanged. Show that if the element $a_{rs}$ of $|a_{ij}|_n$ becomes the element $b_{\rho\sigma}$ of $|b_{ij}|_n$, then $B_{\rho\sigma} = -A_{rs}$, where $A_{rs}$ denotes the cofactor of $a_{rs}$ in $|a_{ij}|_n$ and $B_{\rho\sigma}$ the cofactor of $b_{\rho\sigma}$ in $|b_{ij}|_n$.
   THEOREM 1.4.1. (Expansion of determinants in terms of rows and columns)
   If the cofactor of $a_{pq}$ in $D = |a_{ij}|_n$ is denoted by $A_{pq}$, then
$$\sum_{k=1}^{n} a_{rk} A_{rk} = D \quad (r = 1,\ldots,n), \tag{1.4.1}$$
$$\sum_{k=1}^{n} a_{kr} A_{kr} = D \quad (r = 1,\ldots,n). \tag{1.4.2}$$
   This theorem states, in fact, that we may obtain the value of a determinant by multiplying the elements of any one row or column by their cofactors and adding the products thus formed. The identity (1.4.1) is known as the expansion of the determinant D in terms of the elements of the rth row, or simply as the expansion of D in terms of the rth row. Similarly, (1.4.2) is known as the expansion of D in terms of the rth column. In view of Theorem 1.2.2 (p. 8) it is, of course, sufficient to prove (1.4.1).
   We begin by showing that
$$\begin{vmatrix} 1 & 0 & \cdots & 0\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn}\end{vmatrix} = \begin{vmatrix} a_{22} & \cdots & a_{2n}\\ \vdots & & \vdots\\ a_{n2} & \cdots & a_{nn}\end{vmatrix}. \tag{1.4.3}$$
Let B, B′ denote the values of the determinants on the left-hand side and the right-hand side respectively of (1.4.3). We write $B = |b_{ij}|_n$, so that $b_{11} = 1$, $b_{12} = \cdots = b_{1n} = 0$. Then
$$B = \sum_{(\lambda_1,\ldots,\lambda_n)}\epsilon(\lambda_1,\ldots,\lambda_n)\,b_{1\lambda_1}b_{2\lambda_2}\cdots b_{n\lambda_n} = \sum_{(1,\lambda_2,\ldots,\lambda_n)}\epsilon(1,\lambda_2,\ldots,\lambda_n)\,b_{2\lambda_2}\cdots b_{n\lambda_n}.$$
But, for any arrangement $(\lambda_2,\ldots,\lambda_n)$ of $(2,\ldots,n)$, we clearly have
$$\epsilon(1,\lambda_2,\ldots,\lambda_n) = \epsilon(\lambda_2,\ldots,\lambda_n).$$
Hence
$$B = \sum_{(\lambda_2,\ldots,\lambda_n)}\epsilon(\lambda_2,\ldots,\lambda_n)\,b_{2\lambda_2}\cdots b_{n\lambda_n} = B',$$
as asserted.

   Next, by Theorems 1.2.4 and 1.2.5 (pp. 9-10), we have
$$D = \begin{vmatrix} a_{11} & \cdots & a_{1n}\\ \vdots & & \vdots\\ a_{r1} & \cdots & a_{rn}\\ \vdots & & \vdots\\ a_{n1} & \cdots & a_{nn}\end{vmatrix} = \sum_{k=1}^{n} a_{rk}\begin{vmatrix} a_{11} & \cdots & a_{1k} & \cdots & a_{1n}\\ \vdots & & \vdots & & \vdots\\ 0 & \cdots & 1 & \cdots & 0\\ \vdots & & \vdots & & \vdots\\ a_{n1} & \cdots & a_{nk} & \cdots & a_{nn}\end{vmatrix} = \sum_{k=1}^{n} a_{rk}\,\Delta_{rk}, \tag{1.4.4}$$
where $\Delta_{rk}$ is the determinant obtained from D when the kth element in the rth row is replaced by 1 and all other elements in the rth row are replaced by 0. By repeated application of Theorem 1.2.3 (p. 8) we obtain
$$\Delta_{rk} = (-1)^{r-1}\begin{vmatrix} 0 & \cdots & 1 & \cdots & 0\\ a_{11} & \cdots & a_{1k} & \cdots & a_{1n}\\ \vdots & & \vdots & & \vdots\\ a_{r-1,1} & \cdots & a_{r-1,k} & \cdots & a_{r-1,n}\\ a_{r+1,1} & \cdots & a_{r+1,k} & \cdots & a_{r+1,n}\\ \vdots & & \vdots & & \vdots\\ a_{n1} & \cdots & a_{nk} & \cdots & a_{nn}\end{vmatrix}$$
$$= (-1)^{(r-1)+(k-1)}\begin{vmatrix} 1 & 0 & \cdots & 0 & 0 & \cdots & 0\\ a_{1k} & a_{11} & \cdots & a_{1,k-1} & a_{1,k+1} & \cdots & a_{1n}\\ \vdots & \vdots & & \vdots & \vdots & & \vdots\\ a_{r-1,k} & a_{r-1,1} & \cdots & a_{r-1,k-1} & a_{r-1,k+1} & \cdots & a_{r-1,n}\\ a_{r+1,k} & a_{r+1,1} & \cdots & a_{r+1,k-1} & a_{r+1,k+1} & \cdots & a_{r+1,n}\\ \vdots & \vdots & & \vdots & \vdots & & \vdots\\ a_{nk} & a_{n1} & \cdots & a_{n,k-1} & a_{n,k+1} & \cdots & a_{nn}\end{vmatrix}.$$
Hence, by (1.4.3), $\Delta_{rk} = (-1)^{r+k} D_{rk}$, where $D_{rk}$ denotes the determinant obtained when the rth row and kth column are deleted from D. Hence, by (1.4.4),
$$D = \sum_{k=1}^{n} a_{rk}\,(-1)^{r+k} D_{rk} = \sum_{k=1}^{n} a_{rk} A_{rk},$$
and the theorem is proved.
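Theorem 1.4.1 yields the familiar recursive evaluation of a determinant by cofactors. The following sketch is illustrative only (it always expands along the first row) and is a direct transcription of (1.4.1); the final assertion anticipates the four-rowed determinant evaluated just below in the text.

```python
def det_expand(a):
    """Evaluate |a| by expansion in terms of the first row (Theorem 1.4.1):
    D = sum_k a[0][k] * A_{1,k+1}, where A_rs = (-1)^(r+s) * D_rs."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for k in range(n):
        # Minor D_{1,k+1}: delete the first row and the (k+1)-th column.
        minor = [row[:k] + row[k + 1:] for row in a[1:]]
        cofactor = (-1) ** k * det_expand(minor)
        total += a[0][k] * cofactor
    return total

D = [[9, 7, 3, -9],
     [6, 3, 6, -4],
     [15, 8, 7, -7],
     [-5, -6, 4, 2]]
assert det_expand(D) == -532
```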


This consists in first using Theorem 1.2.6 (p. 11) to introduce a
   We now possess a practical method for evaluating determinants .

number of z ero s into some row or column, and then expanding the

th e d e term i n a n t
determinan t in terms of that row or column. Consider, for example,

                                      9         7        3    -9
                                      6         3        6    -4
                           D=
                                     15         8        7    -7
                                    -5         -6        4     2
Adding      the last c olumn       to   each of      the first three      we   have
                                    0        -2      -6        -9
                                    2        -I          2     -4
                         D=
                                    8           I        0     -7
                                   -3        -4          6      2
Next, we      add   once, twice,    and four times the third row to the

      expression
second row, first row,         and fourt h row respectively. This leads to
the
                                    16     o        -6       -23
                                    10     o         2       -11
                                     8     I         o        -7
                                    29     0         6       -26

Expanding       D   in terms of     the second column we obtain
                                         16         -6       -23
                           D   =   -     10          2       - 11 ,
                                         29          6       -26
and we c an continue the process of                  reduc ti on in a similar     manner
until    D is ev aluated .
  EXERCISE 1.4.2. Show that D                  - 532.
                                                                      be used    to show
                                         =




  The expansion theorem           ( The orem l.4.1) can
that the value      of   the Vandermonde determinant
                                                                      I
                                                                      I
                    D=
                           a�-l a�-2
is giv en by                   D=           IT (ai-a;).                           •   (1.4.5)
  6.82                                         c
                                        l';'i<i';'n
   The assertion is obviously true for $n = 2$. We shall assume that it is true for $n-1$, where $n \ge 3$, and deduce that it is true for n. We may clearly assume that all the a's are distinct, for otherwise (1.4.5) is true trivially. Consider the determinant
$$\begin{vmatrix} x^{n-1} & x^{n-2} & \cdots & x & 1\\ a_2^{n-1} & a_2^{n-2} & \cdots & a_2 & 1\\ \vdots & \vdots & & \vdots & \vdots\\ a_n^{n-1} & a_n^{n-2} & \cdots & a_n & 1\end{vmatrix}.$$
Expanding it in terms of the first row, we see that it is a polynomial in x, say $f(x)$, of degree not greater than $n-1$. Moreover
$$f(a_2) = \cdots = f(a_n) = 0,$$
and so $f(x)$ is divisible by each of the (distinct) factors $x-a_2,\ldots,x-a_n$. Thus
$$f(x) = K(x-a_2)\cdots(x-a_n),$$
and here K is independent of x, as may be seen by comparing the degrees of the two sides of the equation. Now, by (1.4.1), the coefficient of $x^{n-1}$ in $f(x)$ is equal to
$$\begin{vmatrix} a_2^{n-2} & \cdots & a_2 & 1\\ \vdots & & \vdots & \vdots\\ a_n^{n-2} & \cdots & a_n & 1\end{vmatrix},$$
which, by the induction hypothesis, is equal to
$$\prod_{2\le i<j\le n}(a_i-a_j).$$
This, then, is the value of K; and we have
$$f(x) = (x-a_2)\cdots(x-a_n)\prod_{2\le i<j\le n}(a_i-a_j).$$
We now complete the proof of (1.4.5) by substituting $x = a_1$.
   The result just obtained enables us to derive identities for discriminants of algebraic equations. The discriminant \Delta of the equation

        x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n = 0,        (1.4.6)

whose roots are \theta_1, \ldots, \theta_n, is defined as

        \Delta = \prod_{1 \le i < j \le n} (\theta_i - \theta_j)^2.
It follows that \Delta = 0 if and only if (1.4.6) has at least two equal roots. To express \Delta in terms of the coefficients of (1.4.6) we observe that, in view of (1.4.5),

        \Delta = \begin{vmatrix} \theta_1^{n-1} & \theta_1^{n-2} & \cdots & 1 \\ \vdots & \vdots & & \vdots \\ \theta_n^{n-1} & \theta_n^{n-2} & \cdots & 1 \end{vmatrix} \cdot \begin{vmatrix} \theta_1^{n-1} & \theta_1^{n-2} & \cdots & 1 \\ \vdots & \vdots & & \vdots \\ \theta_n^{n-1} & \theta_n^{n-2} & \cdots & 1 \end{vmatrix}.
Carrying out the multiplication columns by columns, we have

        \Delta = \begin{vmatrix} s_{2n-2} & s_{2n-3} & \cdots & s_{n-1} \\ s_{2n-3} & s_{2n-4} & \cdots & s_{n-2} \\ \vdots & \vdots & & \vdots \\ s_{n-1} & s_{n-2} & \cdots & s_0 \end{vmatrix},

where s_r = \theta_1^r + \cdots + \theta_n^r (r = 0, 1, 2, \ldots). Using Newton's formulae† we can express s_0, s_1, \ldots, s_{2n-2} in terms of the coefficients a_1, \ldots, a_n of (1.4.6), and hence obtain \Delta in the desired form.
   † See Burnside and Panton, The Theory of Equations (10th edition), i. 115-7, or Perron, 12, i. 150-1.
   Consider, for example, the cubic equation x^3 + px + q = 0. Here
        s_0 = 3, \quad s_1 = 0, \quad s_2 = -2p, \quad s_3 = -3q, \quad s_4 = 2p^2,

and it is easily verified that

        \Delta = \begin{vmatrix} s_4 & s_3 & s_2 \\ s_3 & s_2 & s_1 \\ s_2 & s_1 & s_0 \end{vmatrix} = \begin{vmatrix} 2p^2 & -3q & -2p \\ -3q & -2p & 0 \\ -2p & 0 & 3 \end{vmatrix}.

Hence \Delta = -(4p^3 + 27q^2), and thus at least two roots of x^3 + px + q = 0 are equal if and only if 4p^3 + 27q^2 = 0.
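   The identity may be checked numerically; the sketch below, assuming Python with NumPy, forms the matrix of power sums s_r directly from the roots of a sample cubic and compares its determinant with -(4p^3 + 27q^2).

import numpy as np

p, q = -2.0, 1.0                      # sample cubic  x^3 + px + q = 0
roots = np.roots([1.0, 0.0, p, q])    # theta_1, theta_2, theta_3

# s_r = theta_1^r + theta_2^r + theta_3^r  for r = 0, ..., 4
s = [np.sum(roots ** r) for r in range(5)]

# Delta = | s_4 s_3 s_2 ; s_3 s_2 s_1 ; s_2 s_1 s_0 |
power_sum_matrix = np.array([[s[4], s[3], s[2]],
                             [s[3], s[2], s[1]],
                             [s[2], s[1], s[0]]])
delta = np.linalg.det(power_sum_matrix).real

print(delta, -(4 * p**3 + 27 * q**2))   # both give 5.0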




   EXERCISE 1.4.3. Show, by the method indicated above, that the discriminant of the quadratic equation x^2 + \mu x + \nu = 0 is \mu^2 - 4\nu.
   We now resume our discussion of the general theory of determinants.

   THEOREM 1.4.2. With the same notation as in Theorem 1.4.1 we have, for r \ne s,

        \sum_{k=1}^{n} a_{rk} A_{sk} = 0, \qquad \sum_{k=1}^{n} a_{kr} A_{ks} = 0.
   In other words, if each element of a row (or column) is multiplied by the cofactor of the corresponding element of another fixed row (or column), then the sum of the n products thus formed is equal to zero. This result is an easy consequence of Theorem 1.4.1. We need, of course, prove only the first of the two stated identities.
   If D' = |a'_{ij}|_n denotes the determinant obtained from D = |a_{ij}|_n when the sth row is replaced by the rth row, then

        a'_{ij} = a_{ij} \quad (i \ne s), \qquad a'_{ij} = a_{rj} \quad (i = s).

Denoting by A'_{ij} the cofactor of the element a'_{ij} in D', we clearly have

        A'_{sk} = A_{sk} \quad (k = 1, \ldots, n).
Hence, by (1.4.1) (p. 15),

        D' = \sum_{k=1}^{n} a'_{sk} A'_{sk} = \sum_{k=1}^{n} a_{rk} A_{sk}.

But the rth row and sth row of D' are identical, and so D' = 0. This completes the proof.

   It is often convenient to combine Theorems 1.4.1 and 1.4.2 into a single statement. For this purpose we need a new and most useful notation.
   DEFINITION 1.4.2. The symbol \delta_{rs}, known as the KRONECKER DELTA, is defined as

        \delta_{rs} = 1 \quad (r = s), \qquad \delta_{rs} = 0 \quad (r \ne s).
   With the aid of the Kronecker delta Theorems 1.4.1 and 1.4.2 can be combined in the following single theorem.

   THEOREM 1.4.3. If A_{pq} denotes the cofactor of a_{pq} in the determinant D = |a_{ij}|_n, then

        \sum_{k=1}^{n} a_{rk} A_{sk} = \delta_{rs} D, \qquad \sum_{k=1}^{n} a_{kr} A_{ks} = \delta_{rs} D \qquad (r, s = 1, \ldots, n).
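   As an informal check of Theorem 1.4.3, the following sketch (assuming Python with NumPy) computes every cofactor of a randomly chosen determinant from the corresponding minor and tests both identities.

import numpy as np

def cofactor(a, p, q):
    """Cofactor A_pq of the element a[p, q] (indices counted from 0)."""
    minor = np.delete(np.delete(a, p, axis=0), q, axis=1)
    return (-1) ** (p + q) * np.linalg.det(minor)

rng = np.random.default_rng(0)
n = 4
a = rng.integers(-5, 6, size=(n, n)).astype(float)
D = np.linalg.det(a)
A = np.array([[cofactor(a, p, q) for q in range(n)] for p in range(n)])

# sum_k a_rk A_sk should equal delta_rs * D, and likewise for columns
row_sums = a @ A.T              # entry (r, s) is sum_k a_rk A_sk
col_sums = a.T @ A              # entry (r, s) is sum_k a_kr A_ks
print(np.allclose(row_sums, D * np.eye(n)), np.allclose(col_sums, D * np.eye(n)))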
   1.4.2. Our next object is to obtain a generalization of the Expansion Theorem 1.4.1. We require some preliminary definitions.
   DEFINITION 1.4.3. A k-rowed MINOR of an n-rowed determinant D is any k-rowed determinant obtained when n - k rows and n - k columns are deleted from D.
   Alternatively, we may say that a k-rowed minor of D is obtained by retaining, with their relative order unchanged, only the elements common to k specified rows and k specified columns.
   For instance, the determinant D_{ij}, obtained from the n-rowed determinant D by deletion of the ith row and jth column, is an (n-1)-rowed minor of D. Each element of D is, of course, a 1-rowed minor of D.
   EXERCISE 1.4.4. Let 1 \le k < n, and suppose that all k-rowed minors of a given n-rowed determinant D vanish. Show that all (k+1)-rowed minors of D vanish also.

   The k-rowed minor obtained from D by retaining only the elements belonging to rows with suffixes r_1, \ldots, r_k and columns with suffixes s_1, \ldots, s_k will be denoted by

        D(r_1, \ldots, r_k \mid s_1, \ldots, s_k).

Thus, for example, if

        D = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix},

then

        D(1, 3 \mid 2, 3) = \begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix}.
   DEFINITION 1.4.4. The COFACTOR (or ALGEBRAIC COMPLEMENT) \bar{D}(r_1, \ldots, r_k \mid s_1, \ldots, s_k) of the minor D(r_1, \ldots, r_k \mid s_1, \ldots, s_k) in a determinant D is defined as

        \bar{D}(r_1, \ldots, r_k \mid s_1, \ldots, s_k) = (-1)^{r_1 + \cdots + r_k + s_1 + \cdots + s_k} D(r_{k+1}, \ldots, r_n \mid s_{k+1}, \ldots, s_n),

where r_{k+1}, \ldots, r_n are the n - k numbers among 1, \ldots, n other than r_1, \ldots, r_k, and s_{k+1}, \ldots, s_n are the n - k numbers among 1, \ldots, n other than s_1, \ldots, s_k.
   We note that for k = 1 this definition reduces to that of a cofactor of an element (Definition 1.4.1, p. 14). If k = n, i.e. if a minor coincides with the entire determinant, it is convenient to define its cofactor as 1.




   Consider, by way of illustration, the 4-rowed determinant D = |a_{ij}|_4. Here

        D(2, 3 \mid 2, 4) = \begin{vmatrix} a_{22} & a_{24} \\ a_{32} & a_{34} \end{vmatrix}

and

        \bar{D}(2, 3 \mid 2, 4) = (-1)^{2+3+2+4} D(1, 4 \mid 1, 3) = -\begin{vmatrix} a_{11} & a_{13} \\ a_{41} & a_{43} \end{vmatrix}.
   THEOREM 1.4.4. (Laplace's expansion theorem) Let D be an n-rowed determinant, and let r_1, \ldots, r_k be integers such that 1 \le k < n and 1 \le r_1 < \cdots < r_k \le n. Then

        D = \sum_{1 \le u_1 < \cdots < u_k \le n} D(r_1, \ldots, r_k \mid u_1, \ldots, u_k) \, \bar{D}(r_1, \ldots, r_k \mid u_1, \ldots, u_k).

   This theorem (which was obtained, in essence, by Laplace in 1772) furnishes us with an expansion of the determinant D in terms of k specified rows, namely, the rows with suffixes r_1, \ldots, r_k. We form all possible k-rowed minors of D involving all these rows and multiply each of them by its cofactor; the sum of the \binom{n}{k} products is then equal to D. An analogous expansion applies, of course, to columns. It should be noted that for k = 1 Theorem 1.4.4 reduces to the identity (1.4.1) on p. 15.
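   The statement of Theorem 1.4.4 translates directly into a computation. The sketch below, assuming Python with NumPy and counting row and column suffixes from 1 as in the text, expands a determinant in terms of k specified rows and compares the result with a direct evaluation.

import numpy as np
from itertools import combinations

def minor(a, rows, cols):
    """D(r_1,...,r_k | s_1,...,s_k): keep the listed rows and columns (1-based)."""
    idx_r = [r - 1 for r in rows]
    idx_c = [c - 1 for c in cols]
    return np.linalg.det(a[np.ix_(idx_r, idx_c)])

def laplace_expansion(a, rows):
    """Expand det(a) in terms of the rows with the given suffixes (Theorem 1.4.4)."""
    n = a.shape[0]
    k = len(rows)
    complementary_rows = [r for r in range(1, n + 1) if r not in rows]
    total = 0.0
    for cols in combinations(range(1, n + 1), k):
        complementary_cols = [c for c in range(1, n + 1) if c not in cols]
        sign = (-1) ** (sum(rows) + sum(cols))
        cofactor = sign * minor(a, complementary_rows, complementary_cols)
        total += minor(a, rows, cols) * cofactor
    return total

rng = np.random.default_rng(1)
a = rng.integers(-4, 5, size=(5, 5)).astype(float)
print(laplace_expansion(a, [1, 3]), np.linalg.det(a))   # the two values agree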
   To prove the theorem, let the numbers r_{k+1}, \ldots, r_n be defined by the requirements

        1 \le r_{k+1} < \cdots < r_n \le n, \qquad (r_1, \ldots, r_n) = \mathscr{A}(1, \ldots, n).

Then, by Theorems 1.2.1 (i) (p. 7) and 1.1.4 (p. 4) we have

        D = \sum_{(s_1, \ldots, s_n) = \mathscr{A}(1, \ldots, n)} \epsilon(r_1, \ldots, r_n)\, \epsilon(s_1, \ldots, s_n)\, a_{r_1 s_1} \cdots a_{r_n s_n}
          = \sum_{(s_1, \ldots, s_n) = \mathscr{A}(1, \ldots, n)} (-1)^{r_1 + \cdots + r_k + s_1 + \cdots + s_k} \epsilon(s_1, \ldots, s_k)\, \epsilon(s_{k+1}, \ldots, s_n)\, a_{r_1 s_1} \cdots a_{r_n s_n}.        (1.4.7)

Now we can clearly obtain all arrangements (s_1, \ldots, s_n) of (1, \ldots, n), and each arrangement exactly once, by separating the numbers 1, \ldots, n in all possible ways into a set of k and a set of n - k numbers, and letting (s_1, \ldots, s_k) vary over all arrangements of the first and (s_{k+1}, \ldots, s_n) over all arrangements of the second set. Thus the condition (s_1, \ldots, s_n) = \mathscr{A}(1, \ldots, n) below the summation sign in (1.4.7) can be replaced by the conditions

        u_1 < \cdots < u_k; \qquad u_{k+1} < \cdots < u_n; \qquad (u_1, \ldots, u_n) = \mathscr{A}(1, \ldots, n);        (1.4.8)

        (s_1, \ldots, s_k) = \mathscr{A}(u_1, \ldots, u_k); \qquad (s_{k+1}, \ldots, s_n) = \mathscr{A}(u_{k+1}, \ldots, u_n).        (1.4.9)

Indicating by an accent that the summation is to be taken over the integers u_1, \ldots, u_n satisfying (1.4.8) and (1.4.9), we therefore have

        D = \sum{}' (-1)^{r_1 + \cdots + r_k + u_1 + \cdots + u_k} \Bigl\{ \sum_{(s_1, \ldots, s_k) = \mathscr{A}(u_1, \ldots, u_k)} \epsilon(s_1, \ldots, s_k)\, a_{r_1 s_1} \cdots a_{r_k s_k} \Bigr\} \times \Bigl\{ \sum_{(s_{k+1}, \ldots, s_n) = \mathscr{A}(u_{k+1}, \ldots, u_n)} \epsilon(s_{k+1}, \ldots, s_n)\, a_{r_{k+1} s_{k+1}} \cdots a_{r_n s_n} \Bigr\}
          = \sum{}' (-1)^{r_1 + \cdots + r_k + u_1 + \cdots + u_k} D(r_1, \ldots, r_k \mid u_1, \ldots, u_k) \times D(r_{k+1}, \ldots, r_n \mid u_{k+1}, \ldots, u_n)
          = \sum{}' D(r_1, \ldots, r_k \mid u_1, \ldots, u_k) \bar{D}(r_1, \ldots, r_k \mid u_1, \ldots, u_k)
          = \sum_{u_1 < \cdots < u_k} D(r_1, \ldots, r_k \mid u_1, \ldots, u_k) \bar{D}(r_1, \ldots, r_k \mid u_1, \ldots, u_k) \sum_{u_{k+1}, \ldots, u_n} 1,

where the inner sum is extended over all integers u_{k+1}, \ldots, u_n satisfying (1.4.8) and (1.4.9). Now the integers u_{k+1}, \ldots, u_n are clearly determined uniquely for each set of u_1, \ldots, u_k. Hence the value of the inner sum is equal to 1, and the theorem is proved.

   The natural way in which products of minors and their cofactors occur in the expansion of a determinant can be made intuitively clear as follows. To expand an n-rowed determinant in terms of the rows with suffixes r_1, \ldots, r_k, we write every element a_{ij} in each of these rows in the form a_{ij} + 0 and every element a_{pq} in each of the remaining rows in the form 0 + a_{pq}. Using the corollary to Theorem 1.2.5 (p. 11) we then obtain the given determinant as a sum of 2^n determinants. Each of these either vanishes or else may be expressed (by virtue of Exercise 1.3.1, p. 13, and after a preliminary rearrangement of rows and columns) as a product of a k-rowed minor and its cofactor. The reader will find it helpful actually to carry out the procedure described here, say for the case n = 4, k = 2, r_1 = 1, r_2 = 3.

   As an illustration of the use of Laplace's expansion we shall evaluate the determinant

        D = \begin{vmatrix} 0 & 0 & a_{13} & a_{14} & a_{15} \\ 0 & 0 & a_{23} & a_{24} & 0 \\ 0 & 0 & a_{33} & 0 & 0 \\ 0 & a_{42} & a_{43} & a_{44} & a_{45} \\ a_{51} & a_{52} & a_{53} & a_{54} & a_{55} \end{vmatrix}

by expanding it in terms of the first three rows. The only 3-rowed minor which involves these three rows and does not necessarily vanish is

        D(1, 2, 3 \mid 3, 4, 5) = \begin{vmatrix} a_{13} & a_{14} & a_{15} \\ a_{23} & a_{24} & 0 \\ a_{33} & 0 & 0 \end{vmatrix}.

Expanding this minor in terms of the last column, we obtain

        D(1, 2, 3 \mid 3, 4, 5) = a_{15} \begin{vmatrix} a_{23} & a_{24} \\ a_{33} & 0 \end{vmatrix} = -a_{15} a_{24} a_{33}.

Furthermore

        \bar{D}(1, 2, 3 \mid 3, 4, 5) = (-1)^{1+2+3+3+4+5} D(4, 5 \mid 1, 2) = \begin{vmatrix} 0 & a_{42} \\ a_{51} & a_{52} \end{vmatrix} = -a_{42} a_{51},

and so, by Theorem 1.4.4,

        D = D(1, 2, 3 \mid 3, 4, 5) \, \bar{D}(1, 2, 3 \mid 3, 4, 5) = a_{15} a_{24} a_{33} a_{42} a_{51}.
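   A numerical check of this evaluation is straightforward; the following sketch (assuming Python with NumPy) fills the non-zero positions of the determinant with arbitrary values and compares det D with the single product obtained above.

import numpy as np

rng = np.random.default_rng(2)

# Build the 5-rowed determinant of the example: only the entries named in
# the text are non-zero, and they receive arbitrary values.
a = np.zeros((5, 5))
for (i, j) in [(1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (3, 3),
               (4, 2), (4, 3), (4, 4), (4, 5),
               (5, 1), (5, 2), (5, 3), (5, 4), (5, 5)]:
    a[i - 1, j - 1] = rng.integers(1, 10)

expected = a[0, 4] * a[1, 3] * a[2, 2] * a[3, 1] * a[4, 0]   # a15 a24 a33 a42 a51
print(np.isclose(np.linalg.det(a), expected))                # True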



1.5. Jacobi's theorem
   With every determinant may be associated a second determinant of the same order whose elements are the cofactors of the elements of the first. We propose now to investigate the relation between two such determinants.

   DEFINITION 1.5.1. If A_{rs} denotes the cofactor of a_{rs} in D = |a_{ij}|_n, then D* = |A_{ij}|_n is known as the ADJUGATE (DETERMINANT) of D.

   Our object is to express D* in terms of D and, more generally, to establish the relation between corresponding minors in D and D*.

In discussing these questions we shall require an important general principle concerning polynomials in several variables. We recall that two polynomials, say f(x_1, \ldots, x_m) and g(x_1, \ldots, x_m), are said to be identically equal if f(x_1, \ldots, x_m) = g(x_1, \ldots, x_m) for all values of x_1, \ldots, x_m. Again, the two polynomials are said to be formally equal if the corresponding coefficients in f and g are equal. It is well known that identity and formal equality imply each other. We shall express this relation between the polynomials f and g by writing f = g.



   THEOREM 1.5.1. Let f, g, h be polynomials in m variables. If fg = fh and f \ne 0, then g = h.
   When m = 1 this is a well-known elementary result. For the proof of the theorem for m > 1 we must refer the reader elsewhere.†
   † See, for example, van der Waerden, Modern Algebra (English edition), i. 47.
   THEOREM 1.5.2. If D is an n-rowed determinant and D* its adjugate, then

        D^* = D^{n-1}.

   This formula was discovered by Cauchy in 1812. To prove it, we write D = |a_{ij}|_n, D* = |A_{ij}|_n, and form the product DD* rows by rows. Thus


                                                             D      o        o
                                                              0     D        o
       DD* =                             = I S ij D ln =
                                                              o     o    .   D
                      =              n




and therefore                             DD* = D n .                         ( 1 .5. 1 )
I f now D =ft 0, then, dividing both sides of ( 1 . 5. 1 ) by D, we obtain
the required result. If, however, D = ° this obvious device fails,
and we have recourse to Theorem 1 . 5. 1 .
   Let us regard D as a polynomial in its n^2 elements. The adjugate determinant D* is then a polynomial in the same n^2 elements, and (1.5.1) is a polynomial identity. But D is not an identically vanishing polynomial and so, by (1.5.1) and Theorem 1.5.1 (with f = D, g = D*, h = D^{n-1}) we obtain the required result.†
   † Alternative proofs which do not depend on Theorem 1.5.1 will be found in § 1.6.3 and § 3.5.
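   Theorem 1.5.2 is easily observed in a numerical experiment. The sketch below, assuming Python with NumPy, builds the adjugate determinant D* from cofactors and compares it with D^{n-1}.

import numpy as np

def cofactor_matrix(a):
    """Matrix whose (i, j) entry is the cofactor A_ij of a[i, j]."""
    n = a.shape[0]
    A = np.empty_like(a)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(a, i, axis=0), j, axis=1)
            A[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return A

rng = np.random.default_rng(3)
n = 4
a = rng.integers(-3, 4, size=(n, n)).astype(float)

D = np.linalg.det(a)
D_star = np.linalg.det(cofactor_matrix(a))   # the adjugate determinant D*

print(np.isclose(D_star, D ** (n - 1)))      # True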
   Our next result, the main result of the present section, was discovered by Jacobi in 1833.

   THEOREM 1.5.3. (Jacobi's theorem) If M is a k-rowed minor of a determinant D, M* the corresponding minor of the adjugate determinant D*, and \bar{M} the cofactor of M in D, then

        M^* = D^{k-1} \bar{M}.        (1.5.2)

   Before proving this formula we point out a few special cases. The order of D is, as usual, denoted by n. (i) If k = 1, then (1.5.2) simply reduces to the definition of cofactors of elements of a determinant. (ii) If k = n, then (1.5.2) reduces to Theorem 1.5.2. (iii) For k = n - 1 the formula (1.5.2) states that if D = |a_{ij}|_n, D* = |A_{ij}|_n, then the cofactor of A_{rs} in D* is equal to D^{n-2} a_{rs}. (iv) For k = 2 (1.5.2) implies that if D = 0, then every 2-rowed minor of D* vanishes.
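   Before turning to the proof, the identity (1.5.2) can be tried out numerically. The following sketch (assuming Python with NumPy, with row and column suffixes counted from 1) chooses a determinant and a k-rowed minor M at random, forms M*, the cofactor of M, and D, and tests (1.5.2).

import numpy as np

def det_of(a, rows, cols):
    """Minor built from the listed rows and columns (1-based suffixes)."""
    return np.linalg.det(a[np.ix_([r - 1 for r in rows], [c - 1 for c in cols])])

def cofactor_matrix(a):
    n = a.shape[0]
    A = np.empty_like(a)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(a, i, axis=0), j, axis=1)
            A[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return A

rng = np.random.default_rng(4)
n, k = 5, 2
a = rng.integers(-3, 4, size=(n, n)).astype(float)
rows = sorted(rng.choice(np.arange(1, n + 1), size=k, replace=False))
cols = sorted(rng.choice(np.arange(1, n + 1), size=k, replace=False))
comp_rows = [r for r in range(1, n + 1) if r not in rows]
comp_cols = [c for c in range(1, n + 1) if c not in cols]

D = np.linalg.det(a)
M = det_of(a, rows, cols)                                     # the minor M
M_star = det_of(cofactor_matrix(a), rows, cols)               # corresponding minor of D*
M_bar = (-1) ** (sum(rows) + sum(cols)) * det_of(a, comp_rows, comp_cols)

print(np.isclose(M_star, D ** (k - 1) * M_bar))               # True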
   To prove (1.5.2) we first consider the special case when M is situated in the top left-hand corner of D, so that

        M = \begin{vmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{k1} & \cdots & a_{kk} \end{vmatrix}, \qquad \bar{M} = \begin{vmatrix} a_{k+1,k+1} & \cdots & a_{k+1,n} \\ \vdots & & \vdots \\ a_{n,k+1} & \cdots & a_{nn} \end{vmatrix}, \qquad M^* = \begin{vmatrix} A_{11} & \cdots & A_{1k} \\ \vdots & & \vdots \\ A_{k1} & \cdots & A_{kk} \end{vmatrix}.
Multiplying determinants rows by rows and using Theorem 1.4.3 (p. 20), we obtain

        D \cdot \begin{vmatrix} A_{11} & \cdots & A_{1k} & A_{1,k+1} & \cdots & A_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ A_{k1} & \cdots & A_{kk} & A_{k,k+1} & \cdots & A_{kn} \\ 0 & \cdots & 0 & 1 & \cdots & 0 \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \cdots & 0 & 0 & \cdots & 1 \end{vmatrix} = \begin{vmatrix} D & \cdots & 0 & a_{1,k+1} & \cdots & a_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \cdots & D & a_{k,k+1} & \cdots & a_{kn} \\ 0 & \cdots & 0 & a_{k+1,k+1} & \cdots & a_{k+1,n} \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \cdots & 0 & a_{n,k+1} & \cdots & a_{nn} \end{vmatrix}.

Now, by Laplace's expansion theorem, the second determinant on the left is equal to M*, while the determinant on the right is equal to D^k \bar{M}. Thus

        D M^* = D^k \bar{M}.
Since this is a polynomial identity in the n^2 elements of D, and since D does not vanish identically, it follows by Theorem 1.5.1 that (1.5.2) is valid for the special case under consideration.
   We next turn to the general case, and suppose the minor M to consist of those elements of D which belong to the rows with suffixes r_1, \ldots, r_k and to the columns with suffixes s_1, \ldots, s_k (where r_1 < \cdots < r_k and s_1 < \cdots < s_k). We write

        r_1 + \cdots + r_k + s_1 + \cdots + s_k = t.

Our aim is to reduce the general case to the special case considered above by rearranging the rows and columns of D in such a way that the minor M is moved to the top left-hand corner, while the relative order of the rows and columns not involved in M remains unchanged. We denote the new determinant thus obtained by \mathscr{D}, the k-rowed minor in its top left-hand corner by \mathscr{M}, the cofactor of \mathscr{M} in \mathscr{D} by \bar{\mathscr{M}}, and the k-rowed minor in the top left-hand corner of the adjugate determinant \mathscr{D}^* by \mathscr{M}^*. In view of the special case already discussed we then have

        \mathscr{M}^* = \mathscr{D}^{k-1} \bar{\mathscr{M}}.        (1.5.3)

Now obviously \mathscr{M} = M; and, by Exercise 1.2.2 (p. 9),

        \mathscr{D} = (-1)^t D.        (1.5.4)
It is, moreover, clear that

        \bar{\mathscr{M}} = (-1)^t \bar{M}.        (1.5.5)

In view of Exercise 1.4.1 (p. 14) it follows easily that the cofactor of a_{ij} in \mathscr{D} is equal to (-1)^t A_{ij}.† Hence

        \mathscr{M}^* = (-1)^{tk} M^*,        (1.5.6)

and we complete the proof of the theorem by substituting (1.5.4), (1.5.5), and (1.5.6) in (1.5.3).
   † It must, of course, be remembered that a_{ij} does not necessarily stand in the ith row and jth column of \mathscr{D}.
   EXERCISE 1.5.1. Let A, H, G, \ldots be the cofactors of the elements a, h, g, \ldots in the determinant

        \Delta = \begin{vmatrix} a & h & g \\ h & b & f \\ g & f & c \end{vmatrix}.

Show that aA + hH + gG = \Delta, aH + hB + gF = 0, and also that the cofactors of the elements A, H, G, \ldots in the determinant

        \begin{vmatrix} A & H & G \\ H & B & F \\ G & F & C \end{vmatrix}

are equal to a\Delta, h\Delta, g\Delta, \ldots respectively.

1.6. Two special theorems on linear equations
   We shall next prove two special theorems on linear equations and derive some of their consequences. The second theorem is needed for establishing the basis theorems (Theorems 2.3.2 and 2.3.3) in the next chapter. In touching on the subject of linear equations we do not at present seek to develop a general theory, a task which we defer till Chapter V.

   1.6.1. THEOREM 1.6.1. Let n \ge 1, and let D = |a_{ij}|_n be a given determinant. Then a necessary and sufficient condition for the existence of numbers t_1, \ldots, t_n, not all zero, satisfying the equations

        a_{11} t_1 + a_{12} t_2 + \cdots + a_{1n} t_n = 0
        . . . . . . . . . . . . . . . . . . . . . . . .
        a_{n1} t_1 + a_{n2} t_2 + \cdots + a_{nn} t_n = 0        (1.6.1)

is

        D = 0.        (1.6.2)
   The sufficiency of the stated condition is established by induction with respect to n. For n = 1 the assertion is true trivially. Suppose it holds for n - 1, where n \ge 2; we shall then show that it also holds

for n. Let (1.6.2) be satisfied. If a_{11} = \cdots = a_{n1} = 0, then (1.6.1) is satisfied by t_1 = 1, t_2 = \cdots = t_n = 0, and the required assertion is seen to hold. If, on the other hand, the numbers a_{11}, \ldots, a_{n1} do not all vanish we may assume, without loss of generality, that a_{11} \ne 0. In that case we subtract, for i = 2, \ldots, n, a_{i1}/a_{11} times the first row from the ith row in D and obtain

        D = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & & \vdots \\ 0 & b_{n2} & \cdots & b_{nn} \end{vmatrix} = a_{11} \begin{vmatrix} b_{22} & \cdots & b_{2n} \\ \vdots & & \vdots \\ b_{n2} & \cdots & b_{nn} \end{vmatrix} = 0,

where

        b_{ij} = a_{ij} - \frac{a_{i1}}{a_{11}} a_{1j} \qquad (i, j = 2, \ldots, n).




Hence

        \begin{vmatrix} b_{22} & \cdots & b_{2n} \\ \vdots & & \vdots \\ b_{n2} & \cdots & b_{nn} \end{vmatrix} = 0,

and so, by the induction hypothesis, there exist numbers t_2, \ldots, t_n, not all zero, such that

        \sum_{j=2}^{n} \Bigl( a_{ij} - \frac{a_{i1}}{a_{11}} a_{1j} \Bigr) t_j = 0 \qquad (i = 2, \ldots, n).        (1.6.3)


Let t_1 now be defined by the equation

        t_1 = -\frac{1}{a_{11}} \sum_{j=2}^{n} a_{1j} t_j,        (1.6.4)

so that

        a_{11} t_1 + a_{12} t_2 + \cdots + a_{1n} t_n = 0.        (1.6.5)

By (1.6.3) and (1.6.4) we have

        a_{i1} t_1 + a_{i2} t_2 + \cdots + a_{in} t_n = 0 \qquad (i = 2, \ldots, n),        (1.6.6)

and (1.6.5) and (1.6.6) are together equivalent to (1.6.1). The sufficiency of (1.6.2) is therefore established.
   To prove the necessity of (1.6.2) we again argue by induction. We have to show that if D \ne 0 and the numbers t_1, \ldots, t_n satisfy (1.6.1), then t_1 = \cdots = t_n = 0. For n = 1 this assertion is true trivially. Suppose, next, that it holds for n - 1, where n \ge 2.




The numbers a_{11}, \ldots, a_{n1} are not all zero (since D \ne 0), and we may, therefore, assume that a_{11} \ne 0. If t_1, \ldots, t_n satisfy (1.6.1), then (1.6.4) holds and therefore so does (1.6.3). But

        \begin{vmatrix} b_{22} & \cdots & b_{2n} \\ \vdots & & \vdots \\ b_{n2} & \cdots & b_{nn} \end{vmatrix} = \frac{D}{a_{11}} \ne 0.

Hence, by (1.6.3) and the induction hypothesis, t_2 = \cdots = t_n = 0. It follows, by (1.6.4), that t_1 = 0; and the proof is therefore complete.†
   † The reader should note that the proof just given depends essentially on the elementary device of reducing the number of 'unknowns' from n to n - 1 by elimination of t_1.
   An alternative proof of the necessity of condition (1.6.2) can be based on Theorem 1.4.3 (p. 20). Suppose that there exist numbers t_1, \ldots, t_n, not all zero, satisfying (1.6.1), i.e.

        a_{i1} t_1 + \cdots + a_{in} t_n = 0 \qquad (i = 1, \ldots, n).

Denoting by A_{ik} the cofactor of a_{ik} in D we therefore have

        \sum_{i=1}^{n} (a_{i1} t_1 + \cdots + a_{in} t_n) A_{ik} = 0 \qquad (k = 1, \ldots, n),

i.e.

        \sum_{j=1}^{n} t_j \sum_{i=1}^{n} a_{ij} A_{ik} = 0 \qquad (k = 1, \ldots, n).

Hence, by Theorem 1.4.3,

        \sum_{j=1}^{n} t_j \, \delta_{jk} D = 0 \qquad (k = 1, \ldots, n),

i.e. t_k D = 0 (k = 1, \ldots, n). But, by hypothesis, t_1, \ldots, t_n are not all equal to zero; and therefore D = 0.
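   Theorem 1.6.1 can also be explored numerically. The sketch below, assuming Python with NumPy, takes a determinant that vanishes and reads off a non-trivial solution of (1.6.1) from the singular value decomposition; for a determinant that does not vanish the smallest singular value is positive and only the trivial solution remains.

import numpy as np

# A determinant that vanishes: the third row is the sum of the first two.
a = np.array([[1.0, 2.0, -1.0],
              [3.0, 0.0,  4.0],
              [4.0, 2.0,  3.0]])
print(np.isclose(np.linalg.det(a), 0.0))           # True

# A non-trivial solution of the homogeneous system (1.6.1) can be read off
# from the right singular vector belonging to the zero singular value.
_, singular_values, vt = np.linalg.svd(a)
t = vt[-1]
print(np.allclose(a @ t, 0.0), np.linalg.norm(t))  # True, 1.0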




   An obvious but useful consequence of Theorem 1.6.1 is as follows:

   THEOREM 1.6.2. Let a_{ij} (i = 1, \ldots, n-1; j = 1, \ldots, n) be given numbers, where n \ge 2. Then there exists at least one set of numbers t_1, \ldots, t_n, not all zero, such that

        a_{11} t_1 + \cdots + a_{1n} t_n = 0
        . . . . . . . . . . . . . . . . . .
        a_{n-1,1} t_1 + \cdots + a_{n-1,n} t_n = 0.        (1.6.7)
   To the n - 1 equations comprising (1.6.7) we add the equation

        0 \cdot t_1 + 0 \cdot t_2 + \cdots + 0 \cdot t_n = 0,

which does not, of course, affect the choice of permissible sets of the numbers t_1, \ldots, t_n. Since

        \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n-1,1} & \cdots & a_{n-1,n} \\ 0 & \cdots & 0 \end{vmatrix} = 0,

it follows by the previous theorem that there exist values of t_1, \ldots, t_n, not all zero, which satisfy (1.6.7).
   It is interesting to observe that we can easily give a direct proof of Theorem 1.6.2, without appealing to the theory of determinants, by using essentially the same argument as in the proof of Theorem 1.6.1. For n = 2 the assertion is obviously true. Assume that it holds for n - 1, where n \ge 3. If now a_{11} = \cdots = a_{n-1,1} = 0, then the equations (1.6.7) are satisfied by t_1 = 1, t_2 = \cdots = t_n = 0. If, however, a_{11}, \ldots, a_{n-1,1} do not all vanish, we may assume that a_{11} \ne 0. In that case we consider the equations

        a_{11} t_1 + a_{12} t_2 + \cdots + a_{1n} t_n = 0
                    b_{22} t_2 + \cdots + b_{2n} t_n = 0
        . . . . . . . . . . . . . . . . . . . . . . . .
                    b_{n-1,2} t_2 + \cdots + b_{n-1,n} t_n = 0,        (1.6.8)

where the b_{ij} are defined as in the proof of Theorem 1.6.1. By the induction hypothesis there exist values of t_2, \ldots, t_n, not all 0, satisfying the last n - 2 equations in (1.6.8); and, with a suitable choice of t_1, the first equation can be satisfied, too. But the values of t_1, \ldots, t_n which satisfy (1.6.8) also satisfy (1.6.7), and the theorem is therefore proved.

   EXERCISE 1.6.1. Let 1 \le m < n, and let a_{ij} (i = 1, \ldots, m; j = 1, \ldots, n) be given numbers. Show that there exist numbers t_1, \ldots, t_n, not all 0, such that

        a_{11} t_1 + \cdots + a_{1n} t_n = 0,
        . . . . . . . . . . . . . . . . . .
        a_{m1} t_1 + \cdots + a_{mn} t_n = 0.
   1.6.2. As a first application of Theorem 1.6.1 we shall prove a well-known result on polynomials, which will be useful in later chapters.

   THEOREM 1.6.3. If the polynomial

        f(x) = c_0 x^n + c_1 x^{n-1} + \cdots + c_{n-1} x + c_n

vanishes for n + 1 distinct values of x, then it vanishes identically.

   Let x_1, \ldots, x_{n+1} be distinct numbers,
and suppose that f(x_1) = \cdots = f(x_{n+1}) = 0, i.e.

        c_0 x_i^n + c_1 x_i^{n-1} + \cdots + c_{n-1} x_i + c_n = 0 \qquad (i = 1, \ldots, n+1).
Since, by (1.4.5), p. 17, the Vandermonde determinant

        \begin{vmatrix} x_1^n & x_1^{n-1} & \cdots & x_1 & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ x_{n+1}^n & x_{n+1}^{n-1} & \cdots & x_{n+1} & 1 \end{vmatrix}

is equal to

        \prod_{1 \le i < j \le n+1} (x_i - x_j),

and therefore not equal to zero, it follows by Theorem 1.6.1 that c_0 = c_1 = \cdots = c_n = 0, i.e. that f(x) vanishes identically.
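   In computational terms, the non-vanishing of this Vandermonde determinant is what makes interpolation through n + 1 distinct points well posed. The sketch below, assuming Python with NumPy, recovers the coefficients of a polynomial of degree n from its values at n + 1 distinct points by solving the corresponding Vandermonde system; in particular, if all the values were zero the coefficients would have to be zero.

import numpy as np

# Coefficients c_0, ..., c_n of f(x) = c_0 x^n + ... + c_n  (here n = 3).
c = np.array([2.0, -1.0, 0.0, 5.0])
n = len(c) - 1

x = np.array([-2.0, -0.5, 1.0, 3.0])          # n + 1 distinct values of x
V = np.vander(x, n + 1)                        # rows (x_i^n, ..., x_i, 1)
values = V @ c                                 # f(x_1), ..., f(x_{n+1})

# Because det V != 0 for distinct x_i, the system V c = values has exactly
# one solution.
recovered = np.linalg.solve(V, values)
print(np.allclose(recovered, c))               # True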
   COROLLARY. If f(x), g(x) are polynomials, and there exists a constant x_0 such that

        f(x) = g(x)

whenever x > x_0, then the equality holds for ALL values of x.
   Let n be the greater of the degrees of f and g. Now f(x) - g(x) vanishes for any n + 1 distinct values of x which exceed x_0, and the assertion follows, therefore, by Theorem 1.6.3.
   1.6.3. Theorem 1.6.1 enables us to dispense with the comparatively deep Theorem 1.5.1 (p. 24) in the proof of Theorem 1.5.2. As we recall, there is only a difficulty when D = 0, and in that case we have to show that D* = 0. We write, as before, D = |a_{ij}|_n, D* = |A_{ij}|_n, and assume (as we may clearly do) that at least one element in D, say a_{kl}, does not vanish. In view of Theorem 1.4.3 (p. 20) and the assumption D = 0 we infer that the relations

        A_{i1} t_1 + A_{i2} t_2 + \cdots + A_{in} t_n = 0 \qquad (i = 1, \ldots, n)

are satisfied for

        t_1 = a_{k1}, \quad t_2 = a_{k2}, \quad \ldots, \quad t_n = a_{kn}.

But here t_l = a_{kl} \ne 0 and so, by Theorem 1.6.1,

        D^* = \begin{vmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & & \vdots \\ A_{n1} & \cdots & A_{nn} \end{vmatrix} = 0.
   1.6.4. It is useful to possess some easily applicable criteria for deciding whether a determinant does or does not vanish. Below we shall deduce one such criterion due to Minkowski (1900).
Metallurgical Thermodynamics & Kinetics Lecture Notes
 
Cleavage, foliation, lineation
Cleavage, foliation, lineationCleavage, foliation, lineation
Cleavage, foliation, lineation
 
Foliation and lineation
Foliation and lineationFoliation and lineation
Foliation and lineation
 
Combining Algebra Like Terms
Combining Algebra Like TermsCombining Algebra Like Terms
Combining Algebra Like Terms
 
Mathematics fundamentals
Mathematics fundamentalsMathematics fundamentals
Mathematics fundamentals
 
Rock mass classification or rock mass rating of rock materials in civil and m...
Rock mass classification or rock mass rating of rock materials in civil and m...Rock mass classification or rock mass rating of rock materials in civil and m...
Rock mass classification or rock mass rating of rock materials in civil and m...
 

Similar to An introduction to linear algebra

Linear-Algebra-Friedberg-Insel-Spence-4th-E.pdf
Linear-Algebra-Friedberg-Insel-Spence-4th-E.pdfLinear-Algebra-Friedberg-Insel-Spence-4th-E.pdf
Linear-Algebra-Friedberg-Insel-Spence-4th-E.pdf
ValentinMamaniArroyo3
 
Analyticalmechan00seelrich bw
Analyticalmechan00seelrich bwAnalyticalmechan00seelrich bw
Analyticalmechan00seelrich bw
Yassin Balja
 
Math -elements_of_abstract_and_linear_algebra
Math  -elements_of_abstract_and_linear_algebraMath  -elements_of_abstract_and_linear_algebra
Math -elements_of_abstract_and_linear_algebra
mhrslideshare014
 
Ma121 revised format for course handout
Ma121 revised format for course handoutMa121 revised format for course handout
Ma121 revised format for course handout
Tamal Pramanick
 
Firk essential physics [yale 2000] 4 ah
Firk   essential physics [yale 2000] 4 ahFirk   essential physics [yale 2000] 4 ah
Firk essential physics [yale 2000] 4 ah
Zulham Mustamin
 
Multiplicative number theory i.classical theory cambridge
Multiplicative number theory i.classical theory cambridgeMultiplicative number theory i.classical theory cambridge
Multiplicative number theory i.classical theory cambridge
Manuel Jesùs Saavedra Jimènez
 
Mathematical methods in quantum mechanics
Mathematical methods in quantum mechanicsMathematical methods in quantum mechanics
Mathematical methods in quantum mechanics
Sergio Zaina
 
Apostol: Calculus Volume 2
Apostol: Calculus Volume 2Apostol: Calculus Volume 2
Apostol: Calculus Volume 2
qadry13
 

Similar to An introduction to linear algebra (20)

Linear-Algebra-Friedberg-Insel-Spence-4th-E.pdf
Linear-Algebra-Friedberg-Insel-Spence-4th-E.pdfLinear-Algebra-Friedberg-Insel-Spence-4th-E.pdf
Linear-Algebra-Friedberg-Insel-Spence-4th-E.pdf
 
Calculus volume 1
Calculus volume 1Calculus volume 1
Calculus volume 1
 
Advanced Linear Algebra (Third Edition) By Steven Roman
Advanced Linear Algebra (Third Edition) By Steven RomanAdvanced Linear Algebra (Third Edition) By Steven Roman
Advanced Linear Algebra (Third Edition) By Steven Roman
 
Analyticalmechan00seelrich bw
Analyticalmechan00seelrich bwAnalyticalmechan00seelrich bw
Analyticalmechan00seelrich bw
 
A Course In LINEAR ALGEBRA With Applications
A Course In LINEAR ALGEBRA With ApplicationsA Course In LINEAR ALGEBRA With Applications
A Course In LINEAR ALGEBRA With Applications
 
Math -elements_of_abstract_and_linear_algebra
Math  -elements_of_abstract_and_linear_algebraMath  -elements_of_abstract_and_linear_algebra
Math -elements_of_abstract_and_linear_algebra
 
Differential Calculus by Shanti Narayan.pdf
Differential Calculus by Shanti Narayan.pdfDifferential Calculus by Shanti Narayan.pdf
Differential Calculus by Shanti Narayan.pdf
 
Syllabus
SyllabusSyllabus
Syllabus
 
Ma121 revised format for course handout
Ma121 revised format for course handoutMa121 revised format for course handout
Ma121 revised format for course handout
 
Firk essential physics [yale 2000] 4 ah
Firk   essential physics [yale 2000] 4 ahFirk   essential physics [yale 2000] 4 ah
Firk essential physics [yale 2000] 4 ah
 
hodge-feec-pres-defense
hodge-feec-pres-defensehodge-feec-pres-defense
hodge-feec-pres-defense
 
Multiplicative number theory i.classical theory cambridge
Multiplicative number theory i.classical theory cambridgeMultiplicative number theory i.classical theory cambridge
Multiplicative number theory i.classical theory cambridge
 
Mathematical methods in quantum mechanics
Mathematical methods in quantum mechanicsMathematical methods in quantum mechanics
Mathematical methods in quantum mechanics
 
Apostol: Calculus Volume 2
Apostol: Calculus Volume 2Apostol: Calculus Volume 2
Apostol: Calculus Volume 2
 
8 methematics
8   methematics8   methematics
8 methematics
 
Thelearningpoint
ThelearningpointThelearningpoint
Thelearningpoint
 
Problems in mathematics
Problems in mathematicsProblems in mathematics
Problems in mathematics
 
A First Course In With Applications Complex Analysis
A First Course In With Applications Complex AnalysisA First Course In With Applications Complex Analysis
A First Course In With Applications Complex Analysis
 
Brownian Motion and Martingales
Brownian Motion and MartingalesBrownian Motion and Martingales
Brownian Motion and Martingales
 
MSc Sem III and IV Syllabus.pptx
MSc Sem III and IV Syllabus.pptxMSc Sem III and IV Syllabus.pptx
MSc Sem III and IV Syllabus.pptx
 

Recently uploaded

Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
ciinovamais
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
QucHHunhnh
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
heathfieldcps1
 

Recently uploaded (20)

ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
Role Of Transgenic Animal In Target Validation-1.pptx
Role Of Transgenic Animal In Target Validation-1.pptxRole Of Transgenic Animal In Target Validation-1.pptx
Role Of Transgenic Animal In Target Validation-1.pptx
 
Unit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxUnit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptx
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
Ecological Succession. ( ECOSYSTEM, B. Pharmacy, 1st Year, Sem-II, Environmen...
 
Micro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfMicro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdf
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibit
 

An introduction to linear algebra

  • 1.
  • 2.
  • 6. The reader is recommended to work through these exercises, as the results embodied in them are frequently used in the subsequent discussion. At the end of each chapter there is a series of miscellaneous problems arranged approximately in order of increasing difficulty. Some of these involve only routine calculations, others call for some manipulative skill, and yet others carry the general theory beyond the stage reached in the text. A number of these problems have been taken from recent examination papers in mathematics, and thanks for permission to use them are due to the Delegates of the Clarendon Press, the Syndics of the Cambridge University Press, and the Universities of Bristol, London, Liverpool, Manchester, and Sheffield. The number of existing books on linear algebra is large, and it is therefore difficult to make a detailed acknowledgement of sources. I ought, however, to mention Turnbull and Aitken, An Introduction to the Theory of Canonical Matrices, and MacDuffee, The Theory of Matrices, on both of which I have drawn heavily for historical references. I have received much help from a number of friends and colleagues. Professor A. G. Walker first suggested that I should write a book on linear algebra and his encouragement has been invaluable. Mr. H. Burkill, Mr. A. R. Curtis, Dr. C. S. Davis, Dr. H. K. Farahat, Dr. Christine M. Hamill, Professor H. A. Heilbronn, Professor D. G. Northcott, and Professor A. Oppenheim have all helped me in a variety of ways, by checking parts of the manuscript or advising me on specific points. Mr. J. C. Shepherdson read an early version of the manuscript and his acute comments have enabled me to remove many obscurities and ambiguities; he has, in addition, given me considerable help with Chapters IX and
  • 7. X. The greatest debt I owe is to Dr. G. T. Kneebone and Professor R. Rado, with both of whom, for several years past, I have been in the habit of discussing problems of linear algebra and their presentation to students. But for these conversations I should not have been able to write the book. Dr. Kneebone has also read and criticized the manuscript at every stage of preparation, and Professor Rado has supplied me with several of the proofs and problems which appear in the text. Finally, I wish to record my thanks to the officers of the Clarendon Press for their helpful co-operation.
  • 8.
  • 9. CONTENTS PART I DETERMINANTS, VECTORS, MATRICES, AND LINEAR EQUATIONS I. D E T E R M IN AN T S 1.1. Arrangements and the €·symbol 1 1.2. Elementary properties of determinants 5 <::!]) Multiplication of determinants 12 1.4. Expansion theorems 14 1.5 . •Jacobi's theorem 24 1.6. Two special theorems on linear equations 27 II. V E C TOR S P A C E S AND L I N E A I� MANIFOLDS � The algebra of vectors 39 Y. Linear manifolds 43 <:]JD Linear dependence and basos 48 2.4. Vector representation of linear manifolds 57 2.5. Inner products and orthonormal bases 62 III. T H E A LGEB R A OF M A TRI C E S $4 3.1. Elementary algebra 72 3.2. Preliminary notions concerning matrices 74 � G.:YAddition and multiplication of matrices 78 85 3. Adjugate matrices 87 3. Inverse matrices 90 .7. Rational functions of a square matrix 97 3.8. Partitioned matrices 100 4.1. Change of basis in a linear manifold 111 IV. L IN E A R O P E R A TO R S 4.3. Isomorphisms and automorphisms of linear manifolds 4.2. L inear operators and their representations 113 123 �4. Further instances of linear operators 126 v/s YSTE MS O F LINEAR EQUATIONS AND RANK OF MATRICES e> Preliminary results 131 @). The rank theorem 136
  • 10. x C O NTENTS 141 5.4. Systems of homogeneous linear equations 5.3. The general theory of linear equations 1/8 §J?- Miscellaneous applications 152 � Further theorems on rank of matrices 158 VI. ELEMENTARY OPERATION S A N D OF E Q UIVALENCE �1 TIlE CONCEP:C �"tSS.� 't..... Sb�...u.;.� � E-operations and E · matrices 168 � EqUIvalent matriees 6.3. Applications of the precedmg theory 172 6.4. Congr uen ce transformations 178 182 6.6. Axiomatic characterization of determinants 6.5. The general concept of equivalen('e 186 189 PART II FURTHER DEYELOPMENT OF MATRIX THEORY VII. THE CHARACTERISTIC EQUATIO� � Characteristic polynomials and similarity transformations 7.1. The coefficients of the charaderistie polynomial 195 'l!J'- Characteristic roots of rational functions of matriccs 199 -e. The minimum polynomial and the theorem of Cayley and 201 Hamilton 202 7.5. Estimates of chara('teristi(' roots 208 7.6. Characteristic vectors 214 VIII. ORTHOGONAL AXD UNITARY MATRICES 8.1. Orth ogon al matrices 222 8.2. Unitary matrices 229 8.3. Rotations in the plane 233 8.4. Rotations in space 236 lX. GROUPS 9.1. The axioms of group theory 252 9.2. Matrix groups and operator groups 26 1 9.3. Representation of groups by matrices 9.4. Groups of singular matrices 267 272 9.5. Invariant spaces and groups of linear transformations 276 X. CANONICAL FORMS 10.1. The idea of a canonical form 290 292 10.3. Diagonal canonical forms under the orthogonal similarity 10.2. Diagonal canonical forms under the similarity group • group and the unitary similarity group 300
  • 11. xi 306 CONTENTS 312 10.4. Triangular canonical forms 10.5. An intermediate canonical form 10.6. Simultaneous similarity transformations 316 327 XI. MATRIX ANALYSIS 330 11.1. Convergent matrix sequences sel ies 11.3. The relation between matrix functions and matrix poly- 11.2. Power and matrix functions nomials 341 11.4. Systems of linear differential equations 343 PART III QUADRATIC FORMS XII. B ILI N E A R , QUADRATIC, AND HERMITIAN FORMS @yperators and forms of the bilinear and quadratic typcs 353 12.2. Orth ogonal 12.3. Gencral reduction to dia gonal form reduction to diagonal form 362 The problem of equ iv alenc e . 375 367 � 12.5. Classific a tion of quadric s 380 Rank and signature 12.6. Hermitian forms 385 13.1. The value classes 394 XIII. DEFINITE AND INDEFINITE FORMS 13.2. Transformations of positivo definite forms 398 13.3. Determinantal criteria 400 13.4. Simultaneous reduction of two quadratic forms 408 13.5. The inequalities of Hadamard, Minkowski, Fischer, and Oppenheim 416 BIBLIOGRAPHY 427 INDEX �9
  • 12.
  • 13. PART I. DETERMINANTS, VECTORS, MATRICES, AND LINEAR EQUATIONS

I. DETERMINANTS

The present book is intended to give a systematic account of the elementary parts of linear algebra. The technique best suited to this branch of mathematics is undoubtedly that provided by the calculus of matrices, to which much of the book is devoted, but we shall also require to make considerable use of the theory of determinants, partly for theoretical purposes and partly as an aid to computation. In this opening chapter we shall develop the principal properties of determinants to the extent to which they are needed for the treatment of linear algebra.†

The theory of determinants was, indeed, the first topic in linear algebra to be studied intensively. It was initiated by Leibnitz in 1696, developed further by Bezout, Vandermonde, Cramer, Lagrange, and Laplace, and given the form with which we are now familiar by Cauchy, Jacobi, and Sylvester in the first half of the nineteenth century. The term 'determinant' occurs for the first time in Gauss's Disquisitiones arithmeticae (1801).‡

1.1. Arrangements and the ε-symbol

In order to define determinants it is necessary to refer to arrangements among a set of numbers, and the theory of determinants can be based on a few simple results concerning such arrangements. In the present section we shall therefore derive the requisite preliminary results.

1.1.1. We shall denote by $(\lambda_1, \ldots, \lambda_n)$ the ordered set consisting of the integers $\lambda_1, \ldots, \lambda_n$.

† For a much more detailed discussion of determinants see Kowalewski, Einführung in die Determinantentheorie. Briefer accounts will be found in Burnside and Panton, The Theory of Equations, and in Ferrar, 2, Aitken, 10, and Perron, 12. (Numbers in bold-face type refer to the bibliography at the end.)
‡ For historical and bibliographical information see Muir, The Theory of Determinants in the Historical Order of Development.
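The arrangements introduced in § 1.1 are finite in number, and it is occasionally useful to enumerate them mechanically. The short Python sketch below is an editorial illustration (the function name arrangements is ours, not the book's notation): it lists all arrangements of (1, ..., n) and confirms that there are n! of them.

```python
from itertools import permutations
from math import factorial

def arrangements(n):
    """All arrangements (orderings) of the integers 1, ..., n."""
    return list(permutations(range(1, n + 1)))

if __name__ == "__main__":
    for n in (1, 2, 3, 4):
        arrs = arrangements(n)
        assert len(arrs) == factorial(n)   # there are n! arrangements of (1, ..., n)
        print(n, len(arrs))
    # The six arrangements of (1, 2, 3):
    print(arrangements(3))
```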
  • 14. 2 DETE R MIN ANTS I, § 1.1 DEFINITION 1. 1.1. If (AI"'" An) and (1-'1"'" I-'n) contain the same (distinct) integers, but these integers do not necessarily occur in the same order, then (Al, ,An) and (l-'l>"',l-'n) are said to be ARRANGE­ ... MENTst of each other. In symbols: (Al, .. . An) = d(l-'v . .. o/1,n) or , (1L1"",l-'n) = d(Al,···, An)· We shall for the most part be concerned with arrangements of the first n positive integers. If (v1 ,vn ) d(l •...• n) and (kl,.. . ,kn) vk J = d(I, ... , n). We have the •... = = d(I .... , n), then clearly (Vk1,"" following result. THEOREM 1.1.1. (i) Let (vv"" vn) vary over all arrangements of (1 , ... ,n), and let (kl> ...,kn) be a fixed arrangement of ( 1 , . . .,n). Then ( Vkl,"" Vkn ) varies over all arrangements of (1 , ... , n). (ii) Let (VI'"'' vn) vary over all arrangements of (1, ... , n), and let (l-'l'''',l-'n) be a fixed arrangement of ( 1 , .. . ,n). The arrangement (A1>"" An), defined by the conditions VAl = ILl' ... , VAn = ILn, then varies over all arrangements of (1, ... , n). This theorem is almost obvious. To prove (i), suppose that for two different choices of (vl>"" vn)-say (xV"'' cxn) and (f3l"'" fJn)­ (vk,'"'' VkJ is the same arrangement, i.e. «Xk1'"'' cxk,,) = (f3k1,· . . ,f3kn)' and so These relations are, in fact, the same as (Xl = f3l' ... , CXn = fJn, altholgh they are stated in a different order. The two arrange­ ments are thus identical, contrary to hypothesis. It therefore follows that, as (VI"'" vn) varies over then! arrangements of (I, . .. ,n), (Vk""" Yk,,) also varies, without repetition, over arrangements of (1, .. . , n). Hence (Vkl,"" YkJ varies, in fact, over all the n! arrange­ ments. The second part of the theorem is established by the same type of argument. Suppose that for two different ohoices of (vv .. . , vn ) ­ i.e. say (cxl, ... ,cxn ) and (fJl, ... ,fJn)-(Al> .. . ,An) is the same arrangement, t We a.v�id the familiar term 'permutation' since this will be used in a. Borne· what different sense in Cha.pter IX.
  • 15. Then $(\alpha_1, \ldots, \alpha_n) = (\beta_1, \ldots, \beta_n)$, contrary to hypothesis, and the assertion follows easily.

1.1.2. DEFINITION 1.1.2. For all real values of $x$ the function $\operatorname{sgn} x$ (read: signum $x$) is defined as
$$\operatorname{sgn} x = \begin{cases} 1 & (x > 0) \\ 0 & (x = 0) \\ -1 & (x < 0). \end{cases}$$

EXERCISE 1.1.1. Show that $\operatorname{sgn} x \cdot \operatorname{sgn} y = \operatorname{sgn} xy$, and deduce that $\operatorname{sgn} x_1 \cdot \operatorname{sgn} x_2 \cdots \operatorname{sgn} x_k = \operatorname{sgn}(x_1 x_2 \cdots x_k)$.

DEFINITION 1.1.3.
(i) $\displaystyle \epsilon(\lambda_1, \ldots, \lambda_n) = \operatorname{sgn} \prod_{1 \leqslant r < s \leqslant n} (\lambda_s - \lambda_r)$;†
(ii) $\displaystyle \epsilon\begin{pmatrix} \lambda_1, \ldots, \lambda_n \\ \mu_1, \ldots, \mu_n \end{pmatrix} = \epsilon(\lambda_1, \ldots, \lambda_n) \cdot \epsilon(\mu_1, \ldots, \mu_n)$.

EXERCISE 1.1.2. Show that if $\lambda_1 < \ldots < \lambda_n$, then $\epsilon(\lambda_1, \ldots, \lambda_n) = 1$. Also show that if any two $\lambda$'s are equal, then $\epsilon(\lambda_1, \ldots, \lambda_n) = 0$.

EXERCISE 1.1.3. The interchange of two $\lambda$'s in $(\lambda_1, \ldots, \lambda_n)$ is called a transposition. Show that, if $(\lambda_1, \ldots, \lambda_n) = d(1, \ldots, n)$, then it is possible to obtain $(\lambda_1, \ldots, \lambda_n)$ from $(1, \ldots, n)$ by a succession of transpositions. Show, furthermore, that if this process can be carried out by $s$ transpositions, then $\epsilon(\lambda_1, \ldots, \lambda_n) = (-1)^s$. Deduce that, if the same process can also be carried out by $s'$ transpositions, then $s$ and $s'$ are either both even or both odd.

THEOREM 1.1.2. If $(\lambda_1, \ldots, \lambda_n)$, $(\mu_1, \ldots, \mu_n)$, and $(k_1, \ldots, k_n)$ are arrangements of $(1, \ldots, n)$, then
$$\epsilon\begin{pmatrix} \lambda_1, \ldots, \lambda_n \\ \mu_1, \ldots, \mu_n \end{pmatrix} = \epsilon\begin{pmatrix} \lambda_{k_1}, \ldots, \lambda_{k_n} \\ \mu_{k_1}, \ldots, \mu_{k_n} \end{pmatrix}.$$‡
We may express this identity by saying that if $(\lambda_1, \ldots, \lambda_n)$ and $(\mu_1, \ldots, \mu_n)$ are subjected to the same derangement, then the value of $\epsilon\begin{pmatrix} \lambda_1, \ldots, \lambda_n \\ \mu_1, \ldots, \mu_n \end{pmatrix}$

† Empty products are, as usual, defined to have the value 1. This implies, in particular, that for $n = 1$ every ε-symbol is equal to 1.
‡ Definition 1.1.3 implies, of course, that $\epsilon(\lambda_{k_1}, \ldots, \lambda_{k_n}) = \operatorname{sgn} \prod_{1 \leqslant i < j \leqslant n} (\lambda_{k_j} - \lambda_{k_i})$.
  • 16. 4 DETERMINANTS I. § 1 . 1 remains unaltered. To prove this we observe that S CAkJ-AkJ(l-'kj-l-'ki) = (As-A,)(I-'s-I-',), (1.1.1) where r min(ki, kj), = max(ki, kj). (1.1.2) Now if r, S (such that 1 � r < S � n) are given, then there exist = unique integers i, j (such that 1 � i < j � n ) satisfying ( 1 . 1 . 2). Thus there is a biunique correspondence (i.e. a one-one correspon­ dence) between the pairs ki, kj and the pairs r, s. Hence, by (1.1.1), = IT (Akj-Aki)(l-'kj-l-'kJ) IT l�r<s�n (-A,) (1-'8-1-',), l�i<j�n Therefore, by Exercise 1.1.1, sgn II l�'t<J'�n (AkJ-Ak,)·sgn II l<'t<J�n (I-'kj-I-'k.) l�r<s�n 1�r<8:(n = sgn IT (As-A,). sgn IT (I-'s-I-',), i.e. THEOREM 1.1.3. L et 1 � r < s � n. Then e(I, . . . , r-l, s, r +l, . . . , s-l, r, s +I, . . . , n) = - 1. The expression on the left-hand side is, of course, simply by e(Al>'''' A n ) , we observe that in the product e( I, 2, . . . , n) with rand s interchanged. Denoting this expression IT (-) l�t<j�n there are precisely 2 (s- r-I ) + 1 = 2s - 2r-l negative factors, namely, (r+l) - s, (r+ 2)-s, ..., (s-I)-s, r-s. r- (r+I), r - ( r+ 2) , .. ., r- (s -l), Hence, e(Al"'" An) = (-1) 28-21'-1 = -1, as asserted. The results obtained so far are sufficient for the discussion in § 1.2 and § 1.3. The proof of Laplace's expansion theorem in ( ) ( ( § 1.4, however, presupposes a further identity. THEOREM 1.1.4. If (r1, . . . , r n) d(I,.. . ,n), and 1 � k < n, then = d(I,... ,n), (sl> ... , s n ) = e rl�..., rn Sl, .. ·,8n = l> . , r (_I)'I+... +TI+81+... +81 e r .. k 811,,,, Sk ) • Sk+1'"'' Sn ) e rk+l> .. .,rn .
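Definition 1.1.3 and the theorems just stated are easy to experiment with. The Python sketch below is an editorial illustration (sgn and eps are our own names): it evaluates ε(λ1, ..., λn) directly from the sign-of-a-product formula of Definition 1.1.3, confirms Theorem 1.1.3 (interchanging two entries of (1, ..., n) gives the value -1), and confirms Theorem 1.1.2 on a small example.

```python
from itertools import permutations

def sgn(x):
    """Definition 1.1.2: signum of a real number."""
    return (x > 0) - (x < 0)

def eps(lam):
    """Definition 1.1.3 (i): sign of the product of (lam_s - lam_r) over r < s."""
    prod = 1
    for r in range(len(lam)):
        for s in range(r + 1, len(lam)):
            prod *= lam[s] - lam[r]
    return sgn(prod)

# Theorem 1.1.3: interchanging the r-th and s-th entries of (1, ..., n) gives -1.
n, r, s = 6, 2, 5
ident = list(range(1, n + 1))
swapped = ident[:]
swapped[r - 1], swapped[s - 1] = swapped[s - 1], swapped[r - 1]
assert eps(ident) == 1 and eps(swapped) == -1

# Theorem 1.1.2: subjecting both rows of the two-row symbol to the same
# derangement (k_1, ..., k_n) leaves eps(lam) * eps(mu) unchanged.
lam, mu = (3, 1, 4, 2), (2, 4, 1, 3)
for k in permutations(range(4)):
    lam_k = tuple(lam[i] for i in k)
    mu_k = tuple(mu[i] for i in k)
    assert eps(lam) * eps(mu) == eps(lam_k) * eps(mu_k)
```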
  • 17. I, § 1.1 ARRANGEMENTS AND THE £-SYMBOL 5 By Exercise 1.1.1 we have £(rl> . . .,rn ) IT (rj-ri)·sgnk+l"t<j"n(rj-ri)·sgn II (rj-1',,) 1"t<1"k l"i"k = sgn IT k+l"j"n =£(rl,...,rk).£(rk+1> . . . ,rn) .(-I)V1+... +Vk, (1.1.3) where, for 1 � i � k, vi denotes the number of numbers among rk+1"'" l'n which are smaller than rio Let r�, ...,r� be defined by the relations dh, ...,rk), (r� , ..., r �) = r� < . .. < r� , rk+ 1 , ... ,r" which are smaller than r�. Then and denote by v� ( I � i � k) the number of numbers among v� = r� -I, v; = 1';-2, ..., v� = r�-k, VI + + Vk , , , = v�+ ...+v� = r1+ ...+ rk - ik ( k + I), and hence, by (1.1 . 3), £ (rv'''' l' n ) = (-I y,+... +rk-1k(k+l) f(r1,·..,rk) ' f(rk+1 " ' " l'n). Similarly sk) . c (sk+l>'''' S", £ (s 1'"'' s 11 ) - (_1)8,+...+Bk-lk(k+l)c( S 1" " � ) '" and the theorem now follows at once by Definition 1 . 1.3 (ii). 1 .2. Elementary properties of determinants 1 .2. 1 . We shall now be concerned with the study of certain properties of square arrays of (real or complex) numbers. A typical array is (1.2.1) DEFINITION 1.2.1. The nZ numbers aij (i, j = I,... , n) are the ELEMENTS of the array (1.2.1). The elements ail> aiZ' oo " ain constitute the i-th ROW, and the elements alj' aZj'"'' anj constitute the j-th COLUMN of the array. The elements an, azz,···, ann constitute the DIAGONAL of the array, and are called the DIAGONAL RLRMRNTS_
  • 18. The double suffix notation used in (1.2.1) is particularly appropriate since the two suffixes of an element specify completely its position in the array. We shall reserve the first suffix for the row and the second for the column, so that $a_{ij}$ denotes the element standing in the $i$th row and $j$th column of the array (1.2.1).

With each square array we associate a certain number known as its determinant.

DEFINITION 1.2.2. The DETERMINANT of the array (1.2.1) is the number
$$\sum_{(\lambda_1, \ldots, \lambda_n)} \epsilon(\lambda_1, \ldots, \lambda_n)\, a_{1\lambda_1} a_{2\lambda_2} \cdots a_{n\lambda_n}, \qquad (1.2.2)$$
where the summation extends over all the $n!$ arrangements $(\lambda_1, \ldots, \lambda_n)$ of $(1, \ldots, n)$.† This determinant is denoted by
$$\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} \qquad (1.2.3)$$
or, more briefly, by $|a_{ij}|_n$.

Determinants were first written in the form (1.2.3), though without the use of double suffixes, by Cayley in 1841. In practice, we often use a single letter, such as $D$, to denote a determinant.

The determinant (1.2.3) associated with the array (1.2.1) is plainly a polynomial, of degree $n$, in the $n^2$ elements of the array. The determinant of the array consisting of the single element $a_{11}$ is, of course, equal to $a_{11}$. Further, we have
$$\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = \epsilon(1,2)a_{11}a_{22} + \epsilon(2,1)a_{12}a_{21} = a_{11}a_{22} - a_{12}a_{21};$$
$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \epsilon(1,2,3)a_{11}a_{22}a_{33} + \epsilon(1,3,2)a_{11}a_{23}a_{32} + \epsilon(2,1,3)a_{12}a_{21}a_{33} + \epsilon(2,3,1)a_{12}a_{23}a_{31} + \epsilon(3,1,2)a_{13}a_{21}a_{32} + \epsilon(3,2,1)a_{13}a_{22}a_{31}$$
$$= a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31}.$$

We observe that each term of the expression (1.2.2) for the determinant $|a_{ij}|_n$ contains one element from each row and one element from each column of the array (1.2.1). Hence, if any array

† The same convention will be observed whenever a symbol such as $(\lambda_1, \ldots, \lambda_n)$ appears under the summation sign.
  • 19. I, § 1.2 ELEMENTARY PROPERTIES OF DETERMINANTS 7 contains a row or a column consisting entirely of zeros, its deter­ minant is equal to O. A determinant is a numlJer associated with a square array. However, it is customary to use the term 'determinant' for the convenient, and we shall adopt it since it will always be clear from array itself as well as for this number. This usage is ambiguous but the context whether we refer to the array or to the value of the determinant associated with it. In view of this convention we may speak, for instance, about the elements, rows, and columns of a d eterminant. The determinant (1.2.3) will be called an n-rowed determinant, or a determinant of order n. 1.2 .2. Definition 1.2.2 suffers from a lack of symmetry between the row suffixes and the column suffixes. For the row suffixes appearing in every term of the sum (1.2.2) are fixed as I, .. . , n, whereas the column suffixes vary from term to term. The following theorem shows, however, that this lack of symmetry is only ('1"'" , IL.n) apparent. THEOREM 1.2.1. Let D be the value of the determinant ( 1.2.3). (i) If (.1'''''.11) is any fixed arrangement of (I, . .., n ) , then D a).lfLl· ..aA../!tt' "" (!'-1o" .,p.a) = L E IL " . l . n (ii) If (ILl"'" ILn) is any fixed arrangement oj (1, . . . , n), then In view of Definition 1.2.2 we have (1.2.4) Let the same derangement which changes (I,. .. ,n) into the fixed arrangement (.1>""�) change ( vl,· .., vn) into (ILl"'" ILn). Then and, by Theorem 1.1.2 (p. 3),
  • 20. ( ) 8 DETERMINANTS I. § 1.2 Hence, by Theorem 1.1.1 (i) (p. 2), "" A1>' .. , An a L n) E iLl"'" iLn A1/-'1···aAA/-,n' D = (/-,b""p. and the first part of the theorem is therefore proved. same derangement which changes ( vI" '" vn) into the fixed arrange­ To prove the second part we again start from (1.2.4). Let the ment (iLl"'" iLn ) change (I, .. . , n) into (A1>"" An). Then, by Theorem 1.1.2, and also as asserted. Theorem 1 .2.2. The value of a determinant remains unaltered when the rows and columns are interchanged , i .e. ( ) Write brs = asr (r, s I, ... , n). We have to show that = I aij In=Ibi) In. Now, by Theorem 1.2.1 (ii) and Definition 1.2.2, A.l·" b An n I bi) I n = "" E A1>"" An b L (lib'" ,"") I, ... , n = I E(A1>'''' An ) alA l · .. anA (Ab... ,An) and the theorem is therefore proved. EXERCISE 1.2.1. Give a direct verification of Theorem 1.2.2 for 2·rowed and 3-rowed determmant.s_ Theorem 1.2.2 shows that there is symmetry between the rows and columns of a determinant. Hence every statement proved about the rows of a determinant is equally valid for columns, and conversely. interchanged, then the resulting determinant has the value - D. THEoltEM 1.2.3. If two rows (or columns) of a determinant Dare
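Definition 1.2.2 can be turned directly into a computation, inefficient but instructive. The sketch below is an editorial illustration (eps is the sign function of the earlier sketch, repeated here so that the example is self-contained): det sums ε(λ1, ..., λn) a_{1λ1} ... a_{nλn} over all n! arrangements, and the value is then checked against Theorem 1.2.2 (interchange of rows and columns) and Theorem 1.2.3 (interchange of two rows) on a small example.

```python
from itertools import permutations

def eps(lam):
    """Sign of the arrangement lam (Definition 1.1.3)."""
    prod = 1
    for r in range(len(lam)):
        for s in range(r + 1, len(lam)):
            prod *= lam[s] - lam[r]
    return (prod > 0) - (prod < 0)

def det(a):
    """Definition 1.2.2: sum over all arrangements of eps(lam) * a_{1,lam_1} ... a_{n,lam_n}."""
    n = len(a)
    total = 0
    for lam in permutations(range(n)):      # arrangements, written with 0-based indices
        term = eps(lam)
        for i in range(n):
            term *= a[i][lam[i]]
        total += term
    return total

A = [[1, 2, 3],
     [0, -1, 4],
     [2, 5, 1]]

# Theorem 1.2.2: the value is unaltered when rows and columns are interchanged.
At = [list(row) for row in zip(*A)]
assert det(A) == det(At)

# Theorem 1.2.3: interchanging two rows multiplies the value by -1.
B = [A[1], A[0], A[2]]
assert det(B) == -det(A)

print(det(A))
```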
  • 21. I. § 1.2 ELEMENTARY PROPERTIES OF DETERMINANTS 9 Let 1 � r < s � n, and denote by D' = I a�j In the determinant obtained by interchanging the rth and sth rows in D = laij In. Then (i = r) (i * r; i * s) (i 8).= Hence, by Definition 1.2.2, - =- ( ) alA.···as>.,···arA,···an''· ... But, by Theorem 1.1.3 (p. 4), e(I, . . . , s, . . . , r, .. . , n) = I, and so � I, ... , s, ... , , , , r, ... , n e, ,A.) D' a, L v···, r'···' s,···, n . Hence , by Theorem 1.2.1 (i), D' = -D. COROLLARY. If two rows (or two columns ) of a determinant are identiN�l, then the determinant vanishes. Let D be a determinant with two identical rows, and denote by rows. Then obviously D' = D. But, by Theorem 1.2.3, D' = - D, D' the determinant obtained from D by interchanging these two and therefore D = o. EXERCISE 1.2.2. Let T1 < ... < Tk. Show that. if the rows with suffixes TIO T2' .••• Tk of a determinant D are moved into 1st, 2nd, ...• kth place respec­ tively while the relative order of the remaining rows stays unchanged, then the resultmg determinant is equal to 'Vhen every element of a particular row or column of a deter­ minant is multiplied by a constant k, we say that the row or column in question is multiplied by k. THEOREM 1.2.4. If a row (or column) ofa determinant is multiplied by a constant k, then the value of the determinant is also rv,ultiplied by k.
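Exercise 1.2.2 is printed here without its displayed answer, which has been lost in this copy; the factor assumed below, (-1)^{(r_1 + ... + r_k) - k(k+1)/2}, is the standard one (it is the sign that reappears later, in the form (-1)^t, in the proof of Jacobi's theorem). The short NumPy check is an editorial sketch; move_rows_to_top is our own helper, and the factor is our reconstruction rather than the book's displayed formula.

```python
import numpy as np

def move_rows_to_top(a, rows):
    """Rows with 1-based suffixes r_1 < ... < r_k go into 1st, ..., kth place;
    the remaining rows keep their relative order (Exercise 1.2.2)."""
    rest = [i for i in range(len(a)) if i + 1 not in rows]
    order = [r - 1 for r in rows] + rest
    return a[order]

rng = np.random.default_rng(1)
A = rng.integers(-4, 5, size=(5, 5)).astype(float)

r = (2, 4, 5)
k = len(r)
sign = (-1) ** (sum(r) - k * (k + 1) // 2)   # assumed answer to Exercise 1.2.2
assert np.isclose(np.linalg.det(move_rows_to_top(A, r)), sign * np.linalg.det(A))
```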
  • 22. 10 DE TER MIN A N TS I. § 1.2 Let D = laijl" be a given determinant and let D' from it by multiplying the rth row by k. Then The next theorem provides a method for expressing any deter­ minant as a sum of two determinants. THEOREM 1 . 2 . 5. aIn + ann , au aIr + anI a�r Denoting the determinant on the left-hand side by Ibijl", we ( j =1= r) have (j = r). Hence, by Theorem 1 . 2. 1 (ii) (p. 7), I bijl" = L 0." ... ,".) £(-'I,···,-'")b",1···b,,,r···b".n L <"" ... ,".) = £(-'1'"'' -'n)a",1 .. · (a",r + a�,r ) .. ·a"." = L ("" .... >..) £(-'1'"'' "n)a",1 .. ·a''r.. ·a"-n+ (>.1 .. . . . >..) + L £("1>"" "n)a",1 . . · a).,r . . ·a"n" +
  • 23. I, § 1.2 E L E MEN T A R Y P R O P E RTIES OF D E T E R M I N A N T S 11 EXERCISE 1.2.3. State the analogous result for rows. A useful corollary to Theorem 1.2.5 can now be easily proved by induction. It enables us to express a determinant, each of whose elements is the sum of h terms, as the sum of h n determinants. COROLLARY. ann ) (k.. THEOREM 1.2.6. The value of a determinant remains unchanged if to any row (or column) is added any multiple of another row (or column). By saying that the 8th row of a determinant is added to the rth row we mean , of course, that every element of the 8th row is added to the corresponding element of the rth row. Similar terminology is used for columns. obtained when k times the 8th row is added to the rth row in D. Let D = lailln' and suppose that D' denotes the determinant Assuming that r < 8 we have D'= Hence, by Theorem 1.2.5 (as applied to rows), a ll aln an D' = arl arn kaSl + asl asn aSI a nI ann anI
  • 24. 12 I , § 1. 2 and so, by Theor em 1.2 . 4 and the corollary to Theorem 1.2.3, D E TERMINANTS D' =D+k =D. 1 .3. Multiplication of determinants We shall next prove that it is always possible to express the prod uct of two determinants of the same order n as a determinant of order n. Theorem 1 .3 . 1 . (Multiplication theorem for determinants) Let A l aijln and B = Ibi) I n be given determinants, and write = C ICi] In' where I, . . . , = n Crs =i�I ari biS (r, � s = n). AB= C. � � Then (1.3.1) We have C = � (>." ...,>'n) €(, . . . , An)CI>'l",cn>'� = L (>." ... ,>..) €(AV"" An) ( JLI-I aI/-'l b/-'l>'} " ( JLn-I an/-,�b/-'n>'n) n n � . . . � alJLl·. . an JLn /-'1�1 /-'n�I 0", ... , >'. ) � €(A1, . . ·, An )bJLl>'l . . . bJLn>'n' ( 1.3.2) bJLni = By Definition 1.2.2 the inner sum in (1.3.2) is equal to Hence, if any two I-"s are equal, then, by the corollary to Theorem b/-'nn 1.2.3, the inner sum in (1.3.2) vanishes. It follows that in the n-fold summation in (1.3.2) we can omit all sets of I-"s which contain at (I-'I>""I-'n)b b least two equal numbers. The summation then reduces to a simple C aI/-'l· .. an,.". summation over n! arrangements (1-'1 " ' " I-'n), and we therefore have (p." .. ,p.n) (,, . .. ,,.) = L L € (Al > ... ,,.,)b""''l .. ·b,.,,.� . . = • k E (1-'1>"" I-'n ) a1,.",· . an,.". k E . ,.",',... ,.".. �. (p." ...,/-,n) (', .. . .. }.�) � � A 1, . . ·, An
  • 25. I. § 1 . 3 M U L T I P L I C A T I O N O F DETER M I N A N T S 13 Hence, by Theorem 1.2.1 (i) (p. 7), = A B. The theorem just proved shows how we may form a determinant which is equal to the product of two given determinants A and B. We have, in fact, A B C, where the element standing in the rth = row and sth column of C is obtained by multiplying together the B and adding the products thus obtained. The determinant C corresponding elements in the rth row of A and the sth column of constructed in this way may be said to have been obtained by multiplying A and B 'rows by columns'. Now, by Theorem 1.2.2, the values of A and B are unaltered if rows and columns in either determinant or in both determinants are interchanged. Hence we can equally well form the product AB by carrying out the multiplication 'rows by rows', or 'columns by columns', or 'columus by rows'. These conclusions are expressed in the next theorem. determinant C THEOREM l .3.2. The equality ( l . 3.1) continues to hold if the = 1, . . . , n); I ci) I n is defined by any one of the following sets of = relations: (r, s 1, . , n) ; = . . 71 Ll air bis t= ers = (r,s n i=l = L air bsi (r,s = 1, . . . , n ) . crs An interesting application of Theorem 1.3.2 will be given in § l.4.1 (p. 19). l EXERCISE 1.3.1. Use the definition of a determinant to show that atm · all am1 0 at m amm 0 0 0 1 0 0 0 = I a�t amt a mm · 0 0 0 1 .... ............ _- .... _---
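Exercise 1.3.1 is concluded on the next page. Theorem 1.3.1, and the alternative products of Theorem 1.3.2 ('rows by rows', 'columns by columns', 'columns by rows'), can meanwhile be checked numerically; the sketch below is an editorial illustration with NumPy.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(-3, 4, size=(4, 4)).astype(float)
B = rng.integers(-3, 4, size=(4, 4)).astype(float)

detA, detB = np.linalg.det(A), np.linalg.det(B)

# Theorem 1.3.1: c_rs = sum_i a_ri * b_is, i.e. C = A B ('rows by columns').
C1 = A @ B
# Theorem 1.3.2: the product may equally be formed in the other three ways.
C2 = A @ B.T        # c_rs = sum_i a_ri * b_si   ('rows by rows')
C3 = A.T @ B        # c_rs = sum_i a_ir * b_is   ('columns by columns')
C4 = A.T @ B.T      # c_rs = sum_i a_ir * b_si   ('columns by rows')

for C in (C1, C2, C3, C4):
    assert np.isclose(np.linalg.det(C), detA * detB)
```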
  • 26. Deduce, by means of Theorem 1.3.1, that
$$\begin{vmatrix} a_{11} & \cdots & a_{1m} & 0 & \cdots & 0 \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{m1} & \cdots & a_{mm} & 0 & \cdots & 0 \\ 0 & \cdots & 0 & b_{11} & \cdots & b_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \cdots & 0 & b_{n1} & \cdots & b_{nn} \end{vmatrix} = \begin{vmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mm} \end{vmatrix} \cdot \begin{vmatrix} b_{11} & \cdots & b_{1n} \\ \vdots & & \vdots \\ b_{n1} & \cdots & b_{nn} \end{vmatrix}.$$

1.4. Expansion theorems

1.4.1. We have already obtained a number of results which can be used in the evaluation of determinants. A procedure that is still more effective for this purpose consists in expressing a determinant in terms of other determinants of lower order. The object of the present section is to develop such a procedure.

DEFINITION 1.4.1. The COFACTOR $A_{rs}$ of the element $a_{rs}$ in the determinant
$$D = \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix}$$
is defined as
$$A_{rs} = (-1)^{r+s} D_{rs} \qquad (r, s = 1, \ldots, n),$$
where $D_{rs}$ is the determinant of order $n-1$ obtained when the $r$-th row and $s$-th column are deleted from $D$.

For example, if
$$D = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix},$$
then
$$A_{11} = (-1)^{1+1}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} = a_{22}a_{33} - a_{23}a_{32}$$
and
$$A_{23} = (-1)^{2+3}\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} = a_{12}a_{31} - a_{11}a_{32}.$$

EXERCISE 1.4.1. Suppose that $|b_{ij}|_n$ is the determinant obtained when two adjacent rows (or columns) of a determinant $|a_{ij}|_n$ are interchanged. Show that if the element $a_{rs}$ of $|a_{ij}|_n$ becomes the element $b_{\sigma\tau}$ of $|b_{ij}|_n$, then $B_{\sigma\tau} = -A_{rs}$, where $A_{rs}$ denotes the cofactor of $a_{rs}$ in $|a_{ij}|_n$ and $B_{\sigma\tau}$ the cofactor of $b_{\sigma\tau}$ in $|b_{ij}|_n$.
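Definition 1.4.1 translates directly into code. The sketch below is an editorial illustration (det and cofactor are our own names): it deletes the r-th row and s-th column, attaches the sign (-1)^{r+s}, and reproduces the cofactors A_11 and A_23 of the 3-rowed example above.

```python
from itertools import permutations
from math import prod

def sign(lam):
    p = prod(lam[s] - lam[r] for r in range(len(lam)) for s in range(r + 1, len(lam)))
    return (p > 0) - (p < 0)

def det(a):
    """Determinant via Definition 1.2.2."""
    n = len(a)
    return sum(sign(lam) * prod(a[i][lam[i]] for i in range(n))
               for lam in permutations(range(n)))

def cofactor(a, r, s):
    """Definition 1.4.1: A_rs = (-1)^(r+s) * D_rs, with r and s counted from 1."""
    minor = [[a[i][j] for j in range(len(a)) if j != s - 1]
             for i in range(len(a)) if i != r - 1]
    return (-1) ** (r + s) * det(minor)

# The 3-rowed example: A_11 = a22*a33 - a23*a32 and A_23 = a12*a31 - a11*a32.
A = [[2, 3, 5],
     [7, 11, 13],
     [17, 19, 23]]
assert cofactor(A, 1, 1) == A[1][1] * A[2][2] - A[1][2] * A[2][1]
assert cofactor(A, 2, 3) == A[0][1] * A[2][0] - A[0][0] * A[2][1]
```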
  • 27. Theorem 1.4. 1. (Expansion of determinants in term s of rows I, § 1.4 EXPANSION THEOREMS 15 and columns) If the cofactor of apq in D = /aiJ/n is denoted by Apq, then n 2: A = D k=l ark rk (r = 1, . . . , n), (1.4.1) 11, 2: akr Akr = D k=l (r = 1, . . . , n ) . (1.4.2) This theorem states, in fact, that we may obtain the value of a determinant by multiplying the elements of any one row or column by their cofactors and a dding the products thus form ed . The identity (1.4.1) is known as the expansion of the determinant D in terms of the elements of the rth row, or s im ply as the expansion of D in terms of the rth row. Similarly , (1.4.2) is known as the ex pansion of D in terms of the rth column. In vi ew of Theorem 1.2.2 (p. 8) it is, of course, sufficient to prove (1.4.1). We begin by showing that 1 o (1.4.3) Let B, B' denote the values of the determinants on the left-hand side and the right-hand side respe ctively of (1.4.3). We write B = /bij/n, so that bn 1 , b12 = . .. bIn = o. Then = = But , for any arrangement (A2, , An) of (2, . . . , n ) , we clearly have ••• Hence as asserted.
  • 28. 16 DETERMIN ANTS I. § 1.4 Next, by Theorems 1.2.4 and 1.2.5 (pp. 9-10), we have an ain D= arl am = kLlark anI ann an alk ain n 0 0 1 0 0 = anI ank ann n a !1rk, (1.4.4) kLl rk where !1rk is the determinant obtained from D when the kth element = in the rth row is replaced by 1 and all other elements in the rth row are replaced by O. By repeated application of Theorem 1.2.3 (p. 8) we obtain o 0 alk aln /:J.rk = (_ 1 ) r-l aT_l•l ar_l.k ar_l•n ar+1•1 ar+l.k ar+l•n ank ann 1 0 0 0 0 al.k-l al.k+1 aln aT_l•l . = (-I)'+kDrk' ( - 1 )(r-l)+(k-l) a r_l.k a ' +l,fi = ar_l.k_l aT_l.k+l aT_I." a'+l,k a'+l,l' ar+l.k-l ar+l.k+l an.k_l an.k+l ann Hence, by (1.4.3), !1rk where Drk denotes the determinant obtained when the rth row and kth column are deleted from D. Hence, by (1.4.4), and the theorem • is proved.
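Theorem 1.4.1 underlies the familiar recursive evaluation of determinants. The sketch below is an editorial illustration (det_by_expansion is our own name): it expands along the first row, computing each cofactor by the same rule applied to the (n-1)-rowed minor, and the result is compared with NumPy's determinant on a random example.

```python
import numpy as np

def det_by_expansion(a):
    """Expand along the first row: D = sum_k a_1k * A_1k (Theorem 1.4.1)."""
    a = [list(row) for row in a]
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for k in range(n):
        # Minor obtained by deleting the first row and the (k+1)-th column.
        minor = [row[:k] + row[k + 1:] for row in a[1:]]
        total += (-1) ** k * a[0][k] * det_by_expansion(minor)
    return total

rng = np.random.default_rng(3)
A = rng.integers(-5, 6, size=(5, 5))
assert np.isclose(det_by_expansion(A), np.linalg.det(A.astype(float)))
```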
  • 29. We now possess a practical method for evaluating determinants. This consists in first using Theorem 1.2.6 (p. 11) to introduce a number of zeros into some row or column, and then expanding the determinant in terms of that row or column. Consider, for example, the determinant
$$D = \begin{vmatrix} 9 & 7 & 3 & -9 \\ 6 & 3 & 6 & -4 \\ 15 & 8 & 7 & -7 \\ -5 & -6 & 4 & 2 \end{vmatrix}.$$
Adding the last column to each of the first three we have
$$D = \begin{vmatrix} 0 & -2 & -6 & -9 \\ 2 & -1 & 2 & -4 \\ 8 & 1 & 0 & -7 \\ -3 & -4 & 6 & 2 \end{vmatrix}.$$
Next, we add once, twice, and four times the third row to the second row, first row, and fourth row respectively. This leads to the expression
$$D = \begin{vmatrix} 16 & 0 & -6 & -23 \\ 10 & 0 & 2 & -11 \\ 8 & 1 & 0 & -7 \\ 29 & 0 & 6 & -26 \end{vmatrix}.$$
Expanding $D$ in terms of the second column we obtain
$$D = -\begin{vmatrix} 16 & -6 & -23 \\ 10 & 2 & -11 \\ 29 & 6 & -26 \end{vmatrix},$$
and we can continue the process of reduction in a similar manner until $D$ is evaluated.

EXERCISE 1.4.2. Show that $D = -532$.

The expansion theorem (Theorem 1.4.1) can be used to show that the value of the Vandermonde determinant
$$D = \begin{vmatrix} a_1^{n-1} & a_1^{n-2} & \cdots & a_1 & 1 \\ a_2^{n-1} & a_2^{n-2} & \cdots & a_2 & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ a_n^{n-1} & a_n^{n-2} & \cdots & a_n & 1 \end{vmatrix}$$
is given by
$$D = \prod_{1 \leqslant i < j \leqslant n} (a_i - a_j). \qquad (1.4.5)$$
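Both the worked example and formula (1.4.5) are easily verified by machine. The sketch below is an editorial check with NumPy: it confirms that the two reduction steps above leave the value of D unchanged, that D = -532 (Exercise 1.4.2), and that the Vandermonde determinant with rows (a_i^{n-1}, ..., a_i, 1) equals the product of the differences a_i - a_j over i < j.

```python
import numpy as np
from itertools import combinations
from math import prod

D0 = np.array([[9, 7, 3, -9],
               [6, 3, 6, -4],
               [15, 8, 7, -7],
               [-5, -6, 4, 2]], dtype=float)

# Add the last column to each of the first three (Theorem 1.2.6, for columns).
D1 = D0.copy()
D1[:, :3] += D1[:, [3]]
# Add 1x, 2x, 4x the third row to the second, first, and fourth rows respectively.
D2 = D1.copy()
D2[1] += D2[2]
D2[0] += 2 * D2[2]
D2[3] += 4 * D2[2]

for M in (D0, D1, D2):
    assert np.isclose(np.linalg.det(M), -532)      # Exercise 1.4.2

# Formula (1.4.5): Vandermonde determinant with rows (a_i^{n-1}, ..., a_i, 1).
a = np.array([2.0, -1.0, 3.0, 0.5, -4.0])
n = len(a)
V = np.vander(a)                                   # columns a^{n-1}, ..., a, 1
expected = prod(a[i] - a[j] for i, j in combinations(range(n), 2))
assert np.isclose(np.linalg.det(V), expected)
```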
  • 30. 8 DETERMIN A NTS I. § 1.4 s true for n-l, where n � 3, and deduce that it is true for n . We rhe assertion is obvi ou sly true for n 2. We shall assume that it = nay clearly assume that all the a's are distinct, for otherwise 1.4.5) is true trivially. Consider the determinant xn-1 xn-2 an-1 an-2 x 1 2 az 1 Z n-1 an-2 an an n 1 n x, say f (x) , of d eg ree not g re at er than n- l . Moreover �xpanding i t in terms of the first row, we see that it is a p o lynom ial f (az} . . . f (an} = md so f(x) is divisi ble by each of the (distinct) factors 0, = = �-a2"'" x-an' Thus K(x-a2)···(x-an}; md here K is independent of x, as may be seen by comparing the f (x} = )oefficient of xn-1 in f (x) is equal to legrees of the two sides o f the equation. Now, by ( 1 .4.1), th e which, by the induction hypothesis, is equal t o 2';;t<,';;n = al. IT (ai-aj ). rhis, then, is the value of K; and we have f (x ) = (x-a2}···(x-an) Z';;i<j<;;;n (ai-aj). IT We no w complete the proof of ( 1 .4.5) by substituting x The result just obtained enables us to derive identities for discriminants )f algebraic e quat iens . The discriminant 4l of the equation On' x"+a1x"-1+"'+an-1x+a" = 0, is defined (1.4.6) IT (()._()j)2. whose roots are 01, as 4l l';;;i<j';;;n ...• = [t follows that 4l = 0 if and only if (1.4.6) has at least two equal roots. To 9xpress 4l in terms of the coefficients of ( 1.4.6) we observe that, in view of ( 1 .4.5) , &'i-1 {j'f-2 8 1 {j'f-2 1.1 er�l I 1 . 4l �-1 8�-2 On �-1 �-2 = , . 1
  • 31. I, § 1 .4 E X P A N S I O N T HE O R E M S 19 Carrying out the multiplication columns b y columns, w e have 8 2 n_ 2 82n_ 3 8n_1 D. = 82n- 3 82n-4 8n_2 8n_1 8n_2 80 where 8r = iJr + . . . + B:'. (r = 0 , 1 , 2 , . . ). Using Newton 's formulaet we can . express 80, 8 1 , , 8 2 n_ 2 in terms of the coefficients a1 , , an of ( 1.4.6), and hence ••• ••• Consider, for example, the cubic equation x 3 +px + q = O. Here obtain D. in the desired form. and it is easily verified that D. = I:: : 82 81 80 = 3, 8 1 = 0, 82 = - 2p, Hence D. = _ (4p3 + 27q 2 ) , and thus at least two roots of x3 +px + q = 0 are equal if and only if 4p3 + 27q 2 O. = EXERCISE 1 .4 . 3 . Show, by the method indicated above, that the dis­ criminant of the quadratic equation x 2 + p.x + v = 0 is fL2 _ 4v. = We now resume our discussion of the general theory of determi­ nants. have for r ¥= s, THEOREM 1.4.2. With the same notation as in Theorem 1.4.1 we ." ! ark A Sk 0, k=l n ! akr A ks = O. k=l I n other words, if each element of a row (or column) i s multiplied by the cofactor of the corresponding element of another fix ed row (or column), then the sum of the n products thus formed is equal to zero. This result is an easy consequence of Theorem 1.4.1. We need, Df course, prove only the first of the two stated identities. If D' = l a�j ln denotes the determinant obtained from D = l aij l n when the sth row is replaced by the rth row, then (i ¥= s) (i = 8). Denoting by A�j the cofactor of the element a�j in D', we clearly nave (k = 1, . .. , n ). t See Burnside and Pa.nton, The ( 1 0th edition), i. 1111 5-7, or Perron, 12, i. 1 50- 1 . Theory of Equations
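The discriminant computation described above is easy to check numerically. The sketch below is an editorial illustration (the function name is ours): for a monic polynomial it forms the power sums s_0, s_1, ..., s_{2n-2} of the roots, builds the determinant |s_{2n-2-i-j}| of order n, and confirms the values -(4p^3 + 27q^2) for the cubic x^3 + px + q and mu^2 - 4nu for the quadratic of Exercise 1.4.3.

```python
import numpy as np

def discriminant_via_power_sums(roots):
    """Delta = prod_{i<j} (t_i - t_j)^2, computed as det |s_{2n-2-i-j}|,
    where s_r is the sum of the r-th powers of the roots."""
    roots = np.asarray(roots, dtype=complex)
    n = len(roots)
    s = [np.sum(roots ** r) for r in range(2 * n - 1)]
    M = np.array([[s[2 * n - 2 - i - j] for j in range(n)] for i in range(n)])
    return np.linalg.det(M)

# Cubic x^3 + p x + q: discriminant -(4 p^3 + 27 q^2).
p, q = -2.0, 1.0
roots = np.roots([1.0, 0.0, p, q])
assert np.isclose(discriminant_via_power_sums(roots), -(4 * p**3 + 27 * q**2))

# Quadratic x^2 + mu x + nu: discriminant mu^2 - 4 nu (Exercise 1.4.3).
mu, nu = 3.0, 1.5
roots = np.roots([1.0, mu, nu])
assert np.isclose(discriminant_via_power_sums(roots), mu**2 - 4 * nu)
```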
  • 32. Hence, by (1.4.1) (p. 15),
$$D' = \sum_{k=1}^{n} a'_{sk} A'_{sk} = \sum_{k=1}^{n} a_{rk} A_{sk}.$$
But the $r$th row and $s$th row of $D'$ are identical, and so $D' = 0$. This completes the proof.

It is often convenient to combine Theorems 1.4.1 and 1.4.2 into a single statement. For this purpose we need a new and most useful notation.

DEFINITION 1.4.2. The symbol $\delta_{rs}$, known as the KRONECKER DELTA, is defined as
$$\delta_{rs} = \begin{cases} 1 & (r = s) \\ 0 & (r \neq s). \end{cases}$$

With the aid of the Kronecker delta Theorems 1.4.1 and 1.4.2 can be combined in the following single theorem.

THEOREM 1.4.3. If $A_{pq}$ denotes the cofactor of $a_{pq}$ in the determinant $D = |a_{ij}|_n$, then
$$\sum_{k=1}^{n} a_{rk} A_{sk} = \delta_{rs} D, \qquad \sum_{k=1}^{n} a_{kr} A_{ks} = \delta_{rs} D \qquad (r, s = 1, \ldots, n).$$

1.4.2. Our next object is to obtain a generalization of the Expansion Theorem 1.4.1. We require some preliminary definitions.

DEFINITION 1.4.3. A $k$-rowed MINOR of an $n$-rowed determinant $D$ is any $k$-rowed determinant obtained when $n-k$ rows and $n-k$ columns are deleted from $D$.

Alternatively, we may say that a $k$-rowed minor of $D$ is obtained by retaining, with their relative order unchanged, only the elements common to $k$ specified rows and $k$ specified columns. For instance, the determinant $D_{ij}$, obtained from the $n$-rowed determinant $D$ by deletion of the $i$th row and $j$th column, is an $(n-1)$-rowed minor of $D$. Each element of $D$ is, of course, a 1-rowed minor of $D$.

EXERCISE 1.4.4. Let $1 < k < n$, and suppose that all $k$-rowed minors of a given $n$-rowed determinant $D$ vanish. Show that all $(k+1)$-rowed minors of $D$ vanish also.

The $k$-rowed minor obtained from $D$ by retaining only the
  • 33. I. § I .4 E XPA N S I O N T H E O RE M S 21 elements belonging t o rows with suffixes rl, , rk and columns with ••• suffixes Sl I " " Sk will be denoted by D (rll " " rk / Sl" ' " Sk ) ' I 1 Thus, for example, if D = a2 1 a 22 a23 , aa 2 a33 aal D( I , 3 / 2 , 3) = a 1 2 a 13 · then a3 2 a33 DEFINITION 1 .4.4. The COFACTOR (or ALGEBRAIC COMPLEMENT) .D(rl, , r k / s!> " " Sk ) of the minor D (rl , rk / s!> " " Sk) in a de terminant . . • , ••. D is defined as .D(r l, . · · , rk / Sl " ' " sk ) ( - I )r1 +. . . +rk+Bl+ ... +BkD(rk+1' · · · ' rn / Sk+1 ' O O " sn) , where rk+1 , o o . , rn are the n - k numb ers among I , . . . , n o ther than = rl, . , rk' and Sk+ 1" ' " Sn are the n - k numbers among I , . . . , n other than . . We note that for k = 1 this definition reduces to that of a cofactor of an element (Definition 1 .4. 1 , p. 1 4 ) . If k n, i.e. if a minor = coincides with the entire determinant, it is convenient to define its cofactor as I . _I an Consider, by way of illustration, the 4-rowed determinant D = / aiJ / 4 ' Here and .D(2, 3 / 2, 4) = ( _ I ) 2 +3+ 2 + 4D( I , 4 / 1 , 3 ) = au a13 · a43 1 Theorem 1 .4.4. (Laplace's expansion theorem) n and I � rI < . . . < rk � n. Then Let D be integers such < a n n-rowed determinant, and let rl , . . . , rk be that I � k l "; Ul < . . . <Uk"; n D = ! D (rl , . . . , rk / u l, . . , uk) .D( rl, . . . , rk / ul, . . . , Uk) ' . (�) p;oducts This theorem (which was obtained, in essence, by Laplace in 1 77 2 ) furnishes us with an expansion of the determinant D in terms of k specified rows, namely, the rows with suffixes rl , . , r k ' We form o o all possible k-rowed minors of D involving all these rows and multiply each of them by its cofactor ; the sum of the
  • 34. 22 I , § 1 .4 is then equal to D. An an alogou s expansion applies, of course, to D E T E RMINAN T S columns . It should be noted that for k = 1 Theorem 1.4.4 reduces to the identity ( 1 .4.1) on p. 15. To prove the theorem, let the numbers "H I , " " "" be defined by (r81,· · ·, 8n")aTt81 1 :::;;; rk+l < . . . < rn :(: n, the requirements €("l" "'''k) ("I " " ' ''n) J1( I , . . . , n ) . = Then, by Theorems 1 . 2 . 1 (i) (p. 7 ) and 1.1.4 (p. 4) we have arn8n (8" . . . ,8n ) = .a'(1, • . . ,n) D = I € 1>" " " ••• (_ 1 ) rl+ .. . +r, +81 +. . . +B, X (Sl, • • • ,Bn) = .a' (l, n) = .2 .•.• 81" " , Sk (1.4.7) Now we can clearly obtain all arrangements (S1 > ' ' ' ' s,,) of ( 1 , .. . , n) and each arrangemen t exactly once-by separating the numbers 1, . . . , n in all possible ways into a set of k and a set of n - k numbers, - and letting (S1> . . . , 8k ) vary over all arrangements of the first and (Sk+1, . . , 8,,) over all arrangements of the second set. Thus the d( l , . . . , n ) below the summation sign in . ( 1 .4.7) can be replaced by the conditions condition (SI , . . . , 8,, ) = (u I , . . , un) = d(l, . . . , n) ; ( 1 .4.8) Uk+ 1 < . . . < Un ; . u1 < < Uk ; ". ( 1 .4. 9 ) €("l'''''''k)a (81) . . . , 8k) = d(u1 , · . . , Uk) ; (Sk+1, . . · , 8n ) = d(uk+ l ' " '' Un) ' Indicating b y an accent th at the summation i s to b e taken over the inte gers u1, . . . , u" satisfying (1.4.8) and ( 1 .4.9), we therefore have �I � ..:., ..:., '181 ' " ar.t81 (S" . . . ,B.t) - .s:af (U . . . . . ,Uk) ( _ I )rl+"'+'.t+ul+ ... +u.t 81> " . , 8k D = x X a'nu"
  • 35. 23 ""' _ I ) r,+ ... + rt+1h+ ... +ukD(r I, § 1.4 EXPANSION THEOREMS "" ( V · . . , r k 1 U v " " Uk ) X = X D (rk+1" ' " rn 1 Uk+l" ' " Un ) = !'D ( rv " " rk 1 Uv " . , uk )l> (rV " " rk 1 UV " " Uk ) x l> (rl, . . · , rk 1 UV " " Uk ) ! 1, Uk + lt · · ·,Ut. where the inner sum is extended over all integers Uk + V ' ' ' ' Un satisfying ( 1 . 4 . 8) and ( 1 .4.9) . Now the integers uk + 1, , , , , un are clearly determined uniquely for each set of uv ' ' ' ' Uk' Hence the value of the inner sum is equal to 1 , and the theorem is proved. The natural way in which products of minors and their cofactors occur in the expansion of a determinant can be made intuitively clear as follows . To expand an n-rowed determinant in terms of the rows in the form ai] + O and every element apq in each of the remain­ rows with suffixes r1 ' " ' ' rk, we write every element aij in each of these ing rows in the form O + apq• Using the corollary to Theorem 1 . 2.5 (p. I I ) we then obtain the given determinant as a sum of 2 n determinants . Each of these either vanishes or else may be expressed (by virtue of Exercise 1 . 3 . 1 , p. 1 3 , and after a prelimi­ nary rearrangement of rows and columns) as a product of a k-rowed minor and its cofactor. The reader will find it helpful actually to k = 2, r l = I , r 2 carry out the procedure described here, say for the case n 4, = 3. = A s a n illustration o f the use o f Laplace 's expansion w e shall evaluate the determinant 0 0 a13 au al S 0 0 a23 a24 0 D= 0 0 a33 0 0 0 a42 a4 3 a 44 a4S aS l aS 2 aS3 a S4 a ss by expanding it in terms of the first three rows. The only 3 -rowed minor which involves these three rows and does not necessarily vanish is a l 3 au al S D(I, 2 , 3 1 3, 4 , 5 } a2 3 a24 0 = a33 0 0
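The evaluation of this determinant is completed overleaf. As a check on Theorem 1.4.4 itself, the sketch below is an editorial illustration: laplace_det expands a determinant in terms of the rows with suffixes r_1 < ... < r_k, exactly as the theorem prescribes, and is applied to the 5-rowed determinant above with arbitrary numerical values in the positions that are not identically zero.

```python
import numpy as np
from itertools import combinations

def minor(a, rows, cols):
    """D(r_1,...,r_k | s_1,...,s_k): retain only the listed rows and columns (1-based)."""
    return a[np.ix_([r - 1 for r in rows], [c - 1 for c in cols])]

def laplace_det(a, rows):
    """Theorem 1.4.4: expansion of det(a) in terms of the rows with suffixes `rows`."""
    n = a.shape[0]
    k = len(rows)
    comp_rows = [r for r in range(1, n + 1) if r not in rows]
    total = 0.0
    for cols in combinations(range(1, n + 1), k):
        comp_cols = [c for c in range(1, n + 1) if c not in cols]
        sign = (-1) ** (sum(rows) + sum(cols))           # Definition 1.4.4
        total += (sign * np.linalg.det(minor(a, rows, cols))
                       * np.linalg.det(minor(a, comp_rows, comp_cols)))
    return total

# The 5-rowed determinant of the illustration, with random values in the
# positions that are not identically zero.
rng = np.random.default_rng(5)
v = dict(zip(["a13", "a14", "a15", "a23", "a24", "a33",
              "a42", "a43", "a44", "a45",
              "a51", "a52", "a53", "a54", "a55"],
             rng.uniform(-3, 3, size=15)))
A = np.array([[0, 0, v["a13"], v["a14"], v["a15"]],
              [0, 0, v["a23"], v["a24"], 0],
              [0, 0, v["a33"], 0, 0],
              [0, v["a42"], v["a43"], v["a44"], v["a45"]],
              [v["a51"], v["a52"], v["a53"], v["a54"], v["a55"]]])

assert np.isclose(laplace_det(A, (1, 2, 3)), np.linalg.det(A))
assert np.isclose(np.linalg.det(A), v["a15"] * v["a24"] * v["a33"] * v["a42"] * v["a51"])
```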
  • 36. / a333 24 DETERMINANTS I, § 1 .4 Expanding this minor in terms of the last column, we obtain D( l , 2, 3 1 3 , 4, 5) = a1 s a 2 / a 24 = - alS a 24 a 33' 0 Furthermore . .D( I, 2, 3 1 3, 4, 5) = ( _ 1 )1+2 +3+3+4+5D(4, 5 1 1 , 2) D( 1 , 2 , 3 1 3, 4, 5)D( 1 , 2 , 3 1 3, 4 , 5) = al 5 a24 a3 3 a4 2 aS l ' and so, by Theorem 1 .4. 4, D = 1 .5. Jacobi's theorem of the same order whose elem ents are the cofactors of the elements 'With every determinant may be associated a second determinant of the first. We propose now to investigate the relation between two such determinants. DEFINITION 1 .5 . 1 . If A.s de no tes the cofactor of ars in D = laii ln, then D* = IAii l n is known as the AD JUGATE ( DETERMINANT ) of D. to establish the relation between c orres p on ding minors in D and D*. Our object is to express D* in terms of D and , more generally, In disc us sing these questions we shall require an important general principle concerning polynomials in several variables . We recall that two polynomials, say f (x1, , xm ) a n d g(xl , . . . , xm ), are said . • . to be identically equal if f (x1, , xm ) = g(xl , , xm ) for all values of • • • • • • Xl " ' " xm. Again, the two polynomials are said t o be formally equal if the corre s ponding coefficients in f an d g are equal. It is well shall express this relation between the p olyno m i als f and g by known that identity and formal equality imply each other. We wri ting f = g. fh and f =I 0, then g = h. THEOREM 1 . 5. 1. Let f, g, h be polynomials in m variables. If When m = 1 this is a well known elementary result. For the fg = proof of the theorem for m > 1 we must refer the reader elsewhere. t THEOREM 1 .5.2. If D is an n-rowed determinant and D* its adjugate, then D* = Dn-l. write R = laii In' D* = IAij In' and form the product DD* rows by This formula was discovered by Cauchy in 1 8 1 2. To prove it, we t See, for example, van der Waerden, Modern A lgebra (English edition ) , i. 47.
Thus

$$DD^*=\Bigl|\,\sum_{k=1}^{n}a_{ik}A_{jk}\Bigr|_n=|\delta_{ij}D|_n=\begin{vmatrix}D&0&\cdots&0\\ 0&D&\cdots&0\\ \vdots&\vdots& &\vdots\\ 0&0&\cdots&D\end{vmatrix}=D^n,$$

and therefore

$$DD^*=D^n.$$ (1.5.1)

If now D ≠ 0, then, dividing both sides of (1.5.1) by D, we obtain the required result. If, however, D = 0 this obvious device fails, and we have recourse to Theorem 1.5.1. Let us regard D as a polynomial in its n² elements. The adjugate determinant D* is then a polynomial in the same n² elements, and (1.5.1) is a polynomial identity. But D is not an identically vanishing polynomial and so, by (1.5.1) and Theorem 1.5.1 (with f = D, g = D*, h = D^{n-1}), we obtain the required result.†

† Alternative proofs which do not depend on Theorem 1.5.1 will be found in § 1.6.3 and § 3.5.

Our next result, the main result of the present section, was discovered by Jacobi in 1833.

THEOREM 1.5.3. (Jacobi's theorem) If M is a k-rowed minor of a determinant D, M* the corresponding minor of the adjugate determinant D*, and M̄ the cofactor of M in D, then

$$M^*=D^{k-1}\bar M.$$ (1.5.2)

Before proving this formula we point out a few special cases. The order of D is, as usual, denoted by n. (i) If k = 1, then (1.5.2) simply reduces to the definition of cofactors of elements of a determinant. (ii) If k = n, then (1.5.2) reduces to Theorem 1.5.2. (iii) For k = n-1 the formula (1.5.2) states that if D = |a_ij|_n, D* = |A_ij|_n, then the cofactor of A_rs in D* is equal to D^{n-2} a_rs. (iv) For k = 2, (1.5.2) implies that if D = 0, then every 2-rowed minor of D* vanishes.

To prove (1.5.2) we first consider the special case when M is situated in the top left-hand corner of D, so that

$$M=\begin{vmatrix}a_{11}&\cdots&a_{1k}\\ \vdots& &\vdots\\ a_{k1}&\cdots&a_{kk}\end{vmatrix},\qquad
\bar M=\begin{vmatrix}a_{k+1,k+1}&\cdots&a_{k+1,n}\\ \vdots& &\vdots\\ a_{n,k+1}&\cdots&a_{nn}\end{vmatrix},\qquad
M^*=\begin{vmatrix}A_{11}&\cdots&A_{1k}\\ \vdots& &\vdots\\ A_{k1}&\cdots&A_{kk}\end{vmatrix}.$$
Multiplying determinants rows by rows and using Theorem 1.4.3 (p. 20), we obtain

$$|a_{ij}|_n\cdot\begin{vmatrix}A_{11}&\cdots&A_{1k}&0&\cdots&0\\ \vdots& &\vdots&\vdots& &\vdots\\ A_{k1}&\cdots&A_{kk}&0&\cdots&0\\ 0&\cdots&0&1&\cdots&0\\ \vdots& &\vdots&\vdots& &\vdots\\ 0&\cdots&0&0&\cdots&1\end{vmatrix}
=\begin{vmatrix}D&\cdots&0&a_{1,k+1}&\cdots&a_{1n}\\ \vdots& &\vdots&\vdots& &\vdots\\ 0&\cdots&D&a_{k,k+1}&\cdots&a_{kn}\\ 0&\cdots&0&a_{k+1,k+1}&\cdots&a_{k+1,n}\\ \vdots& &\vdots&\vdots& &\vdots\\ 0&\cdots&0&a_{n,k+1}&\cdots&a_{nn}\end{vmatrix}.$$

Now, by Laplace's expansion theorem, the second determinant on the left is equal to M*, while the determinant on the right is equal to D^k M̄. Thus DM* = D^k M̄. Since this is a polynomial identity in the n² elements of D, and since D does not vanish identically, it follows by Theorem 1.5.1 that (1.5.2) is valid for the special case under consideration.

We next turn to the general case, and suppose the minor M to consist of those elements of D which belong to the rows with suffixes r_1,...,r_k and to the columns with suffixes s_1,...,s_k (where r_1 < ... < r_k and s_1 < ... < s_k). We write r_1 + ... + r_k + s_1 + ... + s_k = t. Our aim is to reduce the general case to the special case considered above by rearranging the rows and columns of D in such a way that the minor M is moved to the top left-hand corner, while the relative order of the rows and columns not involved in M remains unchanged. We denote the new determinant thus obtained by 𝒟, the k-rowed minor in its top left-hand corner by ℳ, the cofactor of ℳ in 𝒟 by ℳ̄, and the k-rowed minor in the top left-hand corner of the adjugate determinant 𝒟* by ℳ*. In view of the special case already discussed we then have

$$\mathscr{M}^*=\mathscr{D}^{\,k-1}\bar{\mathscr{M}}.$$ (1.5.3)

Now obviously ℳ = M; and, by Exercise 1.2.2 (p. 9),

$$\mathscr{D}=(-1)^tD.$$ (1.5.4)

It is, moreover, clear that

$$\bar{\mathscr{M}}=(-1)^t\bar M.$$ (1.5.5)

In view of Exercise 1.4.1 (p. 14) it follows easily that the cofactor of a_ij in 𝒟 is equal to (-1)^t A_ij.† Hence

$$\mathscr{M}^*=(-1)^{tk}M^*,$$ (1.5.6)

and we complete the proof of the theorem by substituting (1.5.4), (1.5.5), and (1.5.6) in (1.5.3).

† It must, of course, be remembered that a_ij does not necessarily stand in the ith row and jth column of 𝒟.
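Jacobi's theorem, and with it Theorem 1.5.2 as the case k = n, is easy to check numerically. The sketch below is an illustration added here, not part of the text: it assumes Python with numpy, and the routine cofactor_matrix as well as the particular rows and columns chosen are ours.

```python
import numpy as np

def cofactor_matrix(A):
    """Return the array whose (i, j) entry is the cofactor A_ij of a_ij."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
D = np.linalg.det(A)
Adj = cofactor_matrix(A)                 # elements of the adjugate determinant D*

# Theorem 1.5.2 (the case k = n):  D* = D^(n-1), here with n = 5.
print(np.linalg.det(Adj), D ** 4)

# Theorem 1.5.3 with k = 2: M sits in rows 1, 3 and columns 2, 4 (1-based).
r, s = [0, 2], [1, 3]                    # the same suffixes, written 0-based
M_star = np.linalg.det(Adj[np.ix_(r, s)])              # corresponding minor of D*
rows_left = [i for i in range(5) if i not in r]
cols_left = [j for j in range(5) if j not in s]
sign = (-1) ** (sum(r) + sum(s) + 2 * len(r))          # (-1)^(r1+r2+s1+s2), 1-based
M_bar = sign * np.linalg.det(A[np.ix_(rows_left, cols_left)])   # cofactor of M in D
print(M_star, D ** (2 - 1) * M_bar)      # the two values agree, as (1.5.2) asserts
```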
EXERCISE 1.5.1. Let A, H, G,... be the cofactors of the elements a, h, g,... in the determinant

$$\Delta=\begin{vmatrix}a&h&g\\ h&b&f\\ g&f&c\end{vmatrix}.$$

Show that aA + hH + gG = Δ, aH + hB + gF = 0, and also that the cofactors of the elements A, H, G,... in the determinant

$$\begin{vmatrix}A&H&G\\ H&B&F\\ G&F&C\end{vmatrix}$$

are equal to aΔ, hΔ, gΔ,... respectively.

1.6. Two special theorems on linear equations

1.6.1. We shall next prove two special theorems on linear equations and derive some of their consequences. The second theorem is needed for establishing the basis theorems (Theorems 2.3.2 and 2.3.3) in the next chapter. In touching on the subject of linear equations we do not at present seek to develop a general theory, a task which we defer till Chapter V.

THEOREM 1.6.1. Let n ≥ 1, and let D = |a_ij|_n be a given determinant. Then a necessary and sufficient condition for the existence of numbers t_1,...,t_n, not all zero, satisfying the equations

$$\left.\begin{gathered}a_{11}t_1+a_{12}t_2+\cdots+a_{1n}t_n=0\\ \cdots\cdots\cdots\cdots\cdots\\ a_{n1}t_1+a_{n2}t_2+\cdots+a_{nn}t_n=0\end{gathered}\right\}$$ (1.6.1)

is

$$D=0.$$ (1.6.2)

The sufficiency of the stated condition is established by induction with respect to n. For n = 1 the assertion is true trivially. Suppose it holds for n-1, where n ≥ 2; we shall then show that it also holds for n.
Let (1.6.2) be satisfied. If a_11 = a_21 = ... = a_n1 = 0, then (1.6.1) is satisfied by t_1 = 1, t_2 = ... = t_n = 0, and the required assertion is seen to hold. If, on the other hand, the numbers a_11,...,a_n1 do not all vanish, we may assume, without loss of generality, that a_11 ≠ 0. In that case we subtract, for i = 2,...,n, a_i1/a_11 times the first row from the ith row in D and obtain

$$D=\begin{vmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\ 0&b_{22}&\cdots&b_{2n}\\ \vdots&\vdots& &\vdots\\ 0&b_{n2}&\cdots&b_{nn}\end{vmatrix}=a_{11}\begin{vmatrix}b_{22}&\cdots&b_{2n}\\ \vdots& &\vdots\\ b_{n2}&\cdots&b_{nn}\end{vmatrix}=0,$$

where

$$b_{ij}=a_{ij}-\frac{a_{i1}}{a_{11}}a_{1j}\qquad(i,j=2,\ldots,n).$$

Hence, since a_11 ≠ 0, the (n-1)-rowed determinant |b_ij| vanishes, and so, by the induction hypothesis, there exist numbers t_2,...,t_n, not all zero, such that

$$\sum_{j=2}^{n}\Bigl(a_{ij}-\frac{a_{i1}}{a_{11}}a_{1j}\Bigr)t_j=0\qquad(i=2,\ldots,n).$$ (1.6.3)

Let t_1 now be defined by the equation

$$t_1=-\frac{1}{a_{11}}\sum_{j=2}^{n}a_{1j}t_j,$$ (1.6.4)

so that

$$a_{11}t_1+a_{12}t_2+\cdots+a_{1n}t_n=0.$$ (1.6.5)

By (1.6.3) and (1.6.4) we have

$$a_{i1}t_1+a_{i2}t_2+\cdots+a_{in}t_n=0\qquad(i=2,\ldots,n),$$ (1.6.6)

and (1.6.5) and (1.6.6) are together equivalent to (1.6.1). The sufficiency of (1.6.2) is therefore established.

To prove the necessity of (1.6.2) we again argue by induction. We have to show that if D ≠ 0 and the numbers t_1,...,t_n satisfy (1.6.1), then t_1 = ... = t_n = 0. For n = 1 this assertion is true trivially. Suppose, next, that it holds for n-1, where n ≥ 2. The numbers a_11,...,a_n1 are not all zero (since D ≠ 0), and we may,
therefore, assume that a_11 ≠ 0. If t_1,...,t_n satisfy (1.6.1), then (1.6.4) holds and therefore so does (1.6.3). But the determinant of the system (1.6.3) satisfies

$$\begin{vmatrix}b_{22}&\cdots&b_{2n}\\ \vdots& &\vdots\\ b_{n2}&\cdots&b_{nn}\end{vmatrix}=\frac{D}{a_{11}}\neq 0.$$

Hence, by (1.6.3) and the induction hypothesis, t_2 = ... = t_n = 0. It follows, by (1.6.4), that t_1 = 0; and the proof is therefore complete.†

† The reader should note that the proof just given depends essentially on the elementary device of reducing the number of 'unknowns' from n to n-1 by the elimination of t_1.

An alternative proof of the necessity of condition (1.6.2) can be based on Theorem 1.4.3 (p. 20). Suppose that there exist numbers t_1,...,t_n, not all zero, satisfying (1.6.1), i.e.

$$a_{i1}t_1+\cdots+a_{in}t_n=0\qquad(i=1,\ldots,n).$$

Denoting by A_ik the cofactor of a_ik in D we therefore have

$$\sum_{i=1}^{n}A_{ik}(a_{i1}t_1+\cdots+a_{in}t_n)=0\qquad(k=1,\ldots,n),$$

i.e.

$$\sum_{j=1}^{n}t_j\sum_{i=1}^{n}a_{ij}A_{ik}=0\qquad(k=1,\ldots,n).$$

Hence, by Theorem 1.4.3,

$$\sum_{j=1}^{n}t_j\,\delta_{jk}D=0\qquad(k=1,\ldots,n),$$

i.e. t_k D = 0 (k = 1,...,n). But, by hypothesis, t_1,...,t_n are not all equal to zero; and therefore D = 0.

An obvious but useful consequence of Theorem 1.6.1 is as follows:

THEOREM 1.6.2. Let a_ij (i = 1,...,n-1; j = 1,...,n) be given numbers, where n ≥ 2. Then there exists at least one set of numbers t_1,...,t_n, not all zero, such that

$$\left.\begin{gathered}a_{11}t_1+\cdots+a_{1n}t_n=0\\ \cdots\cdots\cdots\cdots\cdots\\ a_{n-1,1}t_1+\cdots+a_{n-1,n}t_n=0\end{gathered}\right\}$$ (1.6.7)

To the n-1 equations comprising (1.6.7) we add the equation

$$0\cdot t_1+0\cdot t_2+\cdots+0\cdot t_n=0,$$

which does not, of course, affect the choice of permissible sets of the numbers t_1,...,t_n. Since

$$\begin{vmatrix}a_{11}&\cdots&a_{1n}\\ \vdots& &\vdots\\ a_{n-1,1}&\cdots&a_{n-1,n}\\ 0&\cdots&0\end{vmatrix}=0,$$

it follows by the previous theorem that there exist values of t_1,...,t_n, not all zero, which satisfy (1.6.7).
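Theorems 1.6.1 and 1.6.2 are likewise easy to illustrate in floating-point arithmetic. The sketch below is an added illustration and not part of the text: it assumes Python with numpy, and it obtains a nontrivial solution of n-1 homogeneous equations in n unknowns from the singular-value decomposition, a device quite different from the constructions used in the proofs above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

# Theorem 1.6.2: n-1 homogeneous equations in n unknowns always possess a
# nontrivial solution; the last right singular vector supplies one.
A = rng.standard_normal((n - 1, n))
t = np.linalg.svd(A)[2][-1]
print(np.allclose(A @ t, 0), np.linalg.norm(t))      # True, 1.0 (so t is not zero)

# Theorem 1.6.1, one direction: when D != 0 only the trivial solution exists.
B = rng.standard_normal((n, n))
print(np.linalg.det(B) != 0,
      np.allclose(np.linalg.solve(B, np.zeros(n)), 0))

# ... and the other direction: forcing D = 0 produces a nontrivial solution.
C = B.copy()
C[-1] = B[:-1].sum(axis=0)                           # last row = sum of the others
u = np.linalg.svd(C)[2][-1]
print(np.isclose(np.linalg.det(C), 0), np.allclose(C @ u, 0))
```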
It is interesting to observe that we can easily give a direct proof of Theorem 1.6.2, without appealing to the theory of determinants, by using essentially the same argument as in the proof of Theorem 1.6.1. For n = 2 the assertion is obviously true. Assume that it holds for n-1, where n ≥ 3. If now a_11 = ... = a_{n-1,1} = 0, then the equations (1.6.7) are satisfied by t_1 = 1, t_2 = ... = t_n = 0. If, however, a_11,...,a_{n-1,1} do not all vanish, we may assume that a_11 ≠ 0. In that case we consider the equations

$$\left.\begin{gathered}a_{11}t_1+a_{12}t_2+\cdots+a_{1n}t_n=0\\ b_{22}t_2+\cdots+b_{2n}t_n=0\\ \cdots\cdots\cdots\cdots\cdots\\ b_{n-1,2}t_2+\cdots+b_{n-1,n}t_n=0\end{gathered}\right\}$$ (1.6.8)

where the b_ij are defined as in the proof of Theorem 1.6.1. By the induction hypothesis there exist values of t_2,...,t_n, not all 0, satisfying the last n-2 equations in (1.6.8); and, with a suitable choice of t_1, the first equation can be satisfied, too. But the values of t_1,...,t_n which satisfy (1.6.8) also satisfy (1.6.7), and the theorem is therefore proved.

EXERCISE 1.6.1. Let 1 ≤ m < n and let a_ij (i = 1,...,m; j = 1,...,n) be given numbers. Show that there exist numbers t_1,...,t_n, not all 0, such that

$$a_{11}t_1+\cdots+a_{1n}t_n=0,\quad\ldots,\quad a_{m1}t_1+\cdots+a_{mn}t_n=0.$$

1.6.2. As a first application of Theorem 1.6.1 we shall prove a well-known result on polynomials, which will be useful in later chapters.

THEOREM 1.6.3. If the polynomial

$$f(x)=c_0x^n+c_1x^{n-1}+\cdots+c_{n-1}x+c_n$$

vanishes for n+1 distinct values of x, then it vanishes identically.
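Before turning to the proof, the statement may be checked numerically along the lines of the argument which follows: the values of f at n+1 distinct points determine its coefficients, because the associated Vandermonde determinant does not vanish. The sketch below is merely an added illustration; it assumes Python with numpy and a particular choice of points.

```python
import numpy as np

n = 4
x = np.array([-2.0, -1.0, 0.5, 1.0, 3.0])     # n+1 distinct values of x
V = np.vander(x)                              # rows  x_i^n, x_i^(n-1), ..., 1

# With this arrangement of rows and columns the Vandermonde determinant is a
# product of differences of the x_i, and is therefore nonzero.
diffs = np.prod([xi - xj for k, xi in enumerate(x) for xj in x[k + 1:]])
print(np.isclose(np.linalg.det(V), diffs), diffs != 0)

# If c_0 x^n + ... + c_n vanished at all n+1 points, the system V c = 0 with a
# nonzero determinant would force every coefficient to vanish (Theorem 1.6.1).
c = np.linalg.solve(V, np.zeros(n + 1))
print(np.allclose(c, 0))
```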
Let x_1,...,x_{n+1} be distinct numbers, and suppose that f(x_1) = ... = f(x_{n+1}) = 0, i.e.

$$\begin{gathered}c_0x_1^n+c_1x_1^{n-1}+\cdots+c_{n-1}x_1+c_n=0\\ \cdots\cdots\cdots\cdots\cdots\\ c_0x_{n+1}^n+c_1x_{n+1}^{n-1}+\cdots+c_{n-1}x_{n+1}+c_n=0.\end{gathered}$$

Since, by (1.4.5), p. 17, the Vandermonde determinant

$$\begin{vmatrix}x_1^n&x_1^{n-1}&\cdots&1\\ \vdots&\vdots& &\vdots\\ x_{n+1}^n&x_{n+1}^{n-1}&\cdots&1\end{vmatrix}$$

is equal to the product of the differences of the distinct numbers x_1,...,x_{n+1}, and is therefore not equal to zero, it follows by Theorem 1.6.1 that c_0 = c_1 = ... = c_n = 0, i.e. that f(x) vanishes identically.

COROLLARY. If f(x), g(x) are polynomials, and there exists a constant x_0 such that f(x) = g(x) whenever x > x_0, then the equality holds for ALL values of x.

Let n be the greater of the degrees of f and g. Now f(x) - g(x) vanishes for any n+1 distinct values of x which exceed x_0, and the assertion follows, therefore, by Theorem 1.6.3.

1.6.3. Theorem 1.6.1 enables us to dispense with the comparatively deep Theorem 1.5.1 (p. 24) in the proof of Theorem 1.5.2. As we recall, there is only a difficulty when D = 0, and in that case we have to show that D* = 0. We write, as before, D = |a_ij|_n, D* = |A_ij|_n, and assume (as we may clearly do) that at least one element in D, say a_kl, does not vanish. In view of Theorem 1.4.3 (p. 20) and the assumption D = 0 we infer that the relations

$$A_{i1}t_1+A_{i2}t_2+\cdots+A_{in}t_n=0\qquad(i=1,\ldots,n)$$

are satisfied for t_1 = a_k1, t_2 = a_k2,..., t_n = a_kn. But here t_l = a_kl ≠ 0, and so, by Theorem 1.6.1,

$$D^*=\begin{vmatrix}A_{11}&\cdots&A_{1n}\\ \vdots& &\vdots\\ A_{n1}&\cdots&A_{nn}\end{vmatrix}=0.$$

1.6.4. It is useful to possess some easily applicable criteria for deciding whether a determinant does or does not vanish. Below we shall deduce one such criterion due to Minkowski (1900).
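Returning for a moment to § 1.6.3, the argument given there also lends itself to a numerical check: when D = 0 the relations of Theorem 1.4.3 show that the rows of D satisfy the homogeneous system whose matrix is the array of cofactors, and at least one of these rows is not zero, so that D* must vanish. The following sketch is an added illustration only; it assumes Python with numpy, and the routine cofactors is ours.

```python
import numpy as np

def cofactors(A):
    """Array whose (i, j) entry is the cofactor A_ij of a_ij in |a_ij|."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
A[-1] = A[0] + A[1]                      # make D = |a_ij| vanish
C = cofactors(A)

# Theorem 1.4.3 gives sum_j A_ij a_kj = delta_ik D, which is 0 here for all i, k;
# in matrix form C @ A.T = 0, so every row of A solves the system with matrix C.
print(np.isclose(np.linalg.det(A), 0))
print(np.allclose(C @ A.T, 0))
print(np.isclose(np.linalg.det(C), 0))   # hence D* = 0, as Theorem 1.6.1 requires
```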