# An introduction to linear algebra

Determinants, vectors, matrices and linear equations, quadratic forms, and more.


AN INTRODUCTION TO LINEAR ALGEBRA

BY L. MIRSKY
LECTURER IN MATHEMATICS IN THE UNIVERSITY OF SHEFFIELD

OXFORD, AT THE CLARENDON PRESS, 1955
Oxford University Press, Amen House, London E.C.4
GLASGOW NEW YORK TORONTO MELBOURNE WELLINGTON
BOMBAY CALCUTTA MADRAS KARACHI CAPE TOWN IBADAN
Geoffrey Cumberlege, Publisher to the University

PRINTED IN GREAT BRITAIN
PREFACE

My object in writing this book has been to provide an elementary and easily readable account of linear algebra. The book is intended mainly for students pursuing an honours course in mathematics, but I hope that the exposition is sufficiently simple to make it equally useful to readers whose principal interests lie in the fields of physics or technology. The material dealt with here is not extensive and, broadly speaking, only those topics are discussed which normally form part of the honours mathematics syllabus in British universities. Within this compass I have attempted to present a systematic and rigorous development of the subject. The account is self-contained, and the reader is not assumed to have any previous knowledge of linear algebra, although some slight acquaintance with the elementary theory of determinants will be found helpful.

It is not easy to estimate what level of abstractness best suits a textbook of linear algebra. Since I have aimed, above all, at simplicity of presentation I have decided on a thoroughly concrete treatment, at any rate in the initial stages of the discussion. Thus I operate throughout with real and complex numbers, and I define a vector as an ordered set of numbers and a matrix as a rectangular array of numbers. After the first three chapters, however, a new and more abstract point of view becomes prominent. Linear manifolds (i.e. abstract vector spaces) are considered, and the algebra of matrices is then recognized to be the appropriate tool for investigating the properties of linear operators; in fact, particular stress is laid on the representation of linear operators by matrices. In this way the reader is led gradually towards the fundamental concept of invariant characterization.

The points of contact between linear algebra and geometry are numerous, and I have taken every opportunity of bringing them to the reader's notice.
I have not, of course, sought to provide a systematic discussion of the algebraic background of geometry, but have rather concentrated on a few special topics, such as changes of the coordinate system, reduction of quadrics to principal axes, rotations in the plane and in space, and the classification of quadrics under the projective and affine groups.
The theory of matrices gives rise to many striking inequalities. The proofs of these are generally very simple, but are widely scattered throughout the literature and are often not easily accessible. I have here attempted to collect together, with proofs, all the better known inequalities of matrix theory. I have also included a brief sketch of the theory of matrix power series, a topic of considerable interest and elegance not normally dealt with in elementary textbooks.

Numerous exercises are incorporated in the text. They are designed not so much to test the reader's ingenuity as to direct his attention to analogues, generalizations, alternative proofs, and so on. The reader is recommended to work through these exercises, as the results embodied in them are frequently used in the subsequent discussion. At the end of each chapter there is a series of miscellaneous problems arranged approximately in order of increasing difficulty. Some of these involve only routine calculations, others call for some manipulative skill, and yet others carry the general theory beyond the stage reached in the text. A number of these problems have been taken from recent examination papers in mathematics, and thanks for permission to use them are due to the Delegates of the Clarendon Press, the Syndics of the Cambridge University Press, and the Universities of Bristol, London, Liverpool, Manchester, and Sheffield.

The number of existing books on linear algebra is large, and it is therefore difficult to make a detailed acknowledgement of sources. I ought, however, to mention Turnbull and Aitken, An Introduction to the Theory of Canonical Matrices, and MacDuffee, The Theory of Matrices, on both of which I have drawn heavily for historical references.

I have received much help from a number of friends and colleagues. Professor A. G. Walker first suggested that I should write a book on linear algebra and his encouragement has been invaluable. Mr. H. Burkill, Mr. A. R. Curtis, Dr. C. S. Davis, Dr. H. K. Farahat, Dr. Christine M. Hamill, Professor H. A. Heilbronn, Professor D. G. Northcott, and Professor A. Oppenheim have all helped me in a variety of ways, by checking parts of the manuscript or advising me on specific points. Mr. J. C. Shepherdson read an early version of the manuscript and his acute comments have enabled me to remove many obscurities and ambiguities; he has, in addition, given me considerable help with Chapters IX and X.

The greatest debt I owe is to Dr. G. T. Kneebone and Professor R. Rado, with both of whom, for several years past, I have been in the habit of discussing problems of linear algebra and their presentation to students. But for these conversations I should not have been able to write the book. Dr. Kneebone has also read and criticized the manuscript at every stage of preparation, and Professor Rado has supplied me with several of the proofs and problems which appear in the text. Finally, I wish to record my thanks to the officers of the Clarendon Press for their helpful co-operation.
CONTENTS

PART I. DETERMINANTS, VECTORS, MATRICES, AND LINEAR EQUATIONS

I. DETERMINANTS
1.1. Arrangements and the ε-symbol
1.2. Elementary properties of determinants
1.3. Multiplication of determinants
1.4. Expansion theorems
1.5. Jacobi's theorem
1.6. Two special theorems on linear equations

II. VECTOR SPACES AND LINEAR MANIFOLDS
2.1. The algebra of vectors
2.2. Linear manifolds
2.3. Linear dependence and bases
2.4. Vector representation of linear manifolds
2.5. Inner products and orthonormal bases

III. THE ALGEBRA OF MATRICES
3.1. Elementary algebra
3.2. Preliminary notions concerning matrices
3.3. Addition and multiplication of matrices
3.4. …
3.5. Adjugate matrices
3.6. Inverse matrices
3.7. Rational functions of a square matrix
3.8. Partitioned matrices

IV. LINEAR OPERATORS
4.1. Change of basis in a linear manifold
4.2. Linear operators and their representations
4.3. Isomorphisms and automorphisms of linear manifolds
4.4. Further instances of linear operators

V. SYSTEMS OF LINEAR EQUATIONS AND RANK OF MATRICES
5.1. Preliminary results
5.2. The rank theorem
5.3. The general theory of linear equations
5.4. Systems of homogeneous linear equations
5.5. Miscellaneous applications
5.6. Further theorems on rank of matrices

VI. ELEMENTARY OPERATIONS AND THE CONCEPT OF EQUIVALENCE
6.1. E-operations and E-matrices
6.2. Equivalent matrices
6.3. Applications of the preceding theory
6.4. Congruence transformations
6.5. The general concept of equivalence
6.6. Axiomatic characterization of determinants

PART II. FURTHER DEVELOPMENT OF MATRIX THEORY

VII. THE CHARACTERISTIC EQUATION
7.1. Characteristic polynomials and similarity transformations
7.2. The coefficients of the characteristic polynomial
7.3. Characteristic roots of rational functions of matrices
7.4. The minimum polynomial and the theorem of Cayley and Hamilton
7.5. Estimates of characteristic roots
7.6. Characteristic vectors

VIII. ORTHOGONAL AND UNITARY MATRICES
8.1. Orthogonal matrices
8.2. Unitary matrices
8.3. Rotations in the plane
8.4. Rotations in space

IX. GROUPS
9.1. The axioms of group theory
9.2. Matrix groups and operator groups
9.3. Representation of groups by matrices
9.4. Groups of singular matrices
9.5. Invariant spaces and groups of linear transformations

X. CANONICAL FORMS
10.1. The idea of a canonical form
10.2. Diagonal canonical forms under the similarity group
10.3. Diagonal canonical forms under the orthogonal similarity group and the unitary similarity group
10.4. Triangular canonical forms
10.5. An intermediate canonical form
10.6. Simultaneous similarity transformations

XI. MATRIX ANALYSIS
11.1. Convergent matrix sequences
11.2. Power series and matrix functions
11.3. The relation between matrix functions and matrix polynomials
11.4. Systems of linear differential equations

PART III. QUADRATIC FORMS

XII. BILINEAR, QUADRATIC, AND HERMITIAN FORMS
12.1. Operators and forms of the bilinear and quadratic types
12.2. Orthogonal reduction to diagonal form
12.3. General reduction to diagonal form
12.4. The problem of equivalence. Rank and signature
12.5. Classification of quadrics
12.6. Hermitian forms

XIII. DEFINITE AND INDEFINITE FORMS
13.1. The value classes
13.2. Transformations of positive definite forms
13.3. Determinantal criteria
13.4. Simultaneous reduction of two quadratic forms
13.5. The inequalities of Hadamard, Minkowski, Fischer, and Oppenheim

BIBLIOGRAPHY

INDEX
PART I. DETERMINANTS, VECTORS, MATRICES, AND LINEAR EQUATIONS

I. DETERMINANTS

The present book is intended to give a systematic account of the elementary parts of linear algebra. The technique best suited to this branch of mathematics is undoubtedly that provided by the calculus of matrices, to which much of the book is devoted, but we shall also require to make considerable use of the theory of determinants, partly for theoretical purposes and partly as an aid to computation. In this opening chapter we shall develop the principal properties of determinants to the extent to which they are needed for the treatment of linear algebra.†

The theory of determinants was, indeed, the first topic in linear algebra to be studied intensively. It was initiated by Leibnitz in 1696, developed further by Bezout, Vandermonde, Cramer, Lagrange, and Laplace, and given the form with which we are now familiar by Cauchy, Jacobi, and Sylvester in the first half of the nineteenth century. The term 'determinant' occurs for the first time in Gauss's Disquisitiones arithmeticae (1801).‡

1.1. Arrangements and the ε-symbol

In order to define determinants it is necessary to refer to arrangements among a set of numbers, and the theory of determinants can be based on a few simple results concerning such arrangements. In the present section we shall therefore derive the requisite preliminary results.

1.1.1. We shall denote by $(\lambda_1,\dots,\lambda_n)$ the ordered set consisting of the integers $\lambda_1,\dots,\lambda_n$.

† For a much more detailed discussion of determinants see Kowalewski, Einführung in die Determinantentheorie. Briefer accounts will be found in Ferrar, 2, Aitken, 10, and Perron, 12, and in Burnside and Panton, The Theory of Equations. (Numbers in bold-face type refer to the bibliography at the end.)
‡ For historical and bibliographical information see Muir, 6, The Theory of Determinants in the Historical Order of Development.
DEFINITION 1.1.1. If $(\lambda_1,\dots,\lambda_n)$ and $(\mu_1,\dots,\mu_n)$ contain the same (distinct) integers, but these integers do not necessarily occur in the same order, then $(\lambda_1,\dots,\lambda_n)$ and $(\mu_1,\dots,\mu_n)$ are said to be ARRANGEMENTS† of each other. In symbols:

$$(\lambda_1,\dots,\lambda_n) = \mathscr{A}(\mu_1,\dots,\mu_n) \quad\text{or}\quad (\mu_1,\dots,\mu_n) = \mathscr{A}(\lambda_1,\dots,\lambda_n).$$

We shall for the most part be concerned with arrangements of the first $n$ positive integers. If $(\nu_1,\dots,\nu_n) = \mathscr{A}(1,\dots,n)$ and $(k_1,\dots,k_n) = \mathscr{A}(1,\dots,n)$, then clearly $(\nu_{k_1},\dots,\nu_{k_n}) = \mathscr{A}(1,\dots,n)$. We have the following result.

THEOREM 1.1.1. (i) Let $(\nu_1,\dots,\nu_n)$ vary over all arrangements of $(1,\dots,n)$, and let $(k_1,\dots,k_n)$ be a fixed arrangement of $(1,\dots,n)$. Then $(\nu_{k_1},\dots,\nu_{k_n})$ varies over all arrangements of $(1,\dots,n)$.

(ii) Let $(\nu_1,\dots,\nu_n)$ vary over all arrangements of $(1,\dots,n)$, and let $(\mu_1,\dots,\mu_n)$ be a fixed arrangement of $(1,\dots,n)$. The arrangement $(\lambda_1,\dots,\lambda_n)$, defined by the conditions $\nu_{\lambda_1} = \mu_1, \dots, \nu_{\lambda_n} = \mu_n$, then varies over all arrangements of $(1,\dots,n)$.

This theorem is almost obvious. To prove (i), suppose that for two different choices of $(\nu_1,\dots,\nu_n)$, say $(\alpha_1,\dots,\alpha_n)$ and $(\beta_1,\dots,\beta_n)$, the set $(\nu_{k_1},\dots,\nu_{k_n})$ is the same arrangement, i.e. $(\alpha_{k_1},\dots,\alpha_{k_n}) = (\beta_{k_1},\dots,\beta_{k_n})$, and so

$$\alpha_{k_1} = \beta_{k_1},\ \dots,\ \alpha_{k_n} = \beta_{k_n}.$$

These relations are, in fact, the same as $\alpha_1 = \beta_1, \dots, \alpha_n = \beta_n$, although they are stated in a different order. The two arrangements are thus identical, contrary to hypothesis. It therefore follows that, as $(\nu_1,\dots,\nu_n)$ varies over the $n!$ arrangements of $(1,\dots,n)$, $(\nu_{k_1},\dots,\nu_{k_n})$ also varies, without repetition, over arrangements of $(1,\dots,n)$. Hence $(\nu_{k_1},\dots,\nu_{k_n})$ varies, in fact, over all the $n!$ arrangements.

The second part of the theorem is established by the same type of argument. Suppose that for two different choices of $(\nu_1,\dots,\nu_n)$, say $(\alpha_1,\dots,\alpha_n)$ and $(\beta_1,\dots,\beta_n)$, $(\lambda_1,\dots,\lambda_n)$ is the same arrangement.

† We avoid the familiar term 'permutation' since this will be used in a somewhat different sense in Chapter IX.
Then $(\alpha_1,\dots,\alpha_n) = (\beta_1,\dots,\beta_n)$, contrary to hypothesis, and the assertion follows easily.

1.1.2. DEFINITION 1.1.2. For all real values of $x$ the function $\operatorname{sgn} x$ (read: signum $x$) is defined as

$$\operatorname{sgn} x = \begin{cases} 1 & (x > 0) \\ 0 & (x = 0) \\ -1 & (x < 0). \end{cases}$$

EXERCISE 1.1.1. Show that $\operatorname{sgn} x \cdot \operatorname{sgn} y = \operatorname{sgn} xy$, and deduce that

$$\operatorname{sgn}(x_1 x_2 \cdots x_k) = \operatorname{sgn} x_1 \cdot \operatorname{sgn} x_2 \cdots \operatorname{sgn} x_k.$$

DEFINITION 1.1.3. (i) $\displaystyle \epsilon(\lambda_1,\dots,\lambda_n) = \operatorname{sgn} \prod_{1 \le r < s \le n} (\lambda_s - \lambda_r).$†

(ii) $\displaystyle \epsilon\binom{\lambda_1,\dots,\lambda_n}{\mu_1,\dots,\mu_n} = \epsilon(\lambda_1,\dots,\lambda_n)\,\epsilon(\mu_1,\dots,\mu_n).$

EXERCISE 1.1.2. Show that if $\lambda_1 < \dots < \lambda_n$ then $\epsilon(\lambda_1,\dots,\lambda_n) = 1$. Also show that if any two $\lambda$'s are equal, then $\epsilon(\lambda_1,\dots,\lambda_n) = 0$.

EXERCISE 1.1.3. The interchange of two $\lambda$'s in $(\lambda_1,\dots,\lambda_n)$ is called a transposition. Show that, if $(\lambda_1,\dots,\lambda_n) = \mathscr{A}(1,\dots,n)$, then it is possible to obtain $(\lambda_1,\dots,\lambda_n)$ from $(1,\dots,n)$ by a succession of transpositions. Show, furthermore, that if this process can be carried out by $s$ transpositions, then $\epsilon(\lambda_1,\dots,\lambda_n) = (-1)^s$. Deduce that, if the same process can also be carried out by $s'$ transpositions, then $s$ and $s'$ are either both even or both odd.

THEOREM 1.1.2. If $(\lambda_1,\dots,\lambda_n)$, $(\mu_1,\dots,\mu_n)$, and $(k_1,\dots,k_n)$ are arrangements of $(1,\dots,n)$, then

$$\epsilon\binom{\lambda_1,\dots,\lambda_n}{\mu_1,\dots,\mu_n} = \epsilon\binom{\lambda_{k_1},\dots,\lambda_{k_n}}{\mu_{k_1},\dots,\mu_{k_n}}.$$‡

We may express this identity by saying that if $(\lambda_1,\dots,\lambda_n)$ and $(\mu_1,\dots,\mu_n)$ are subjected to the same derangement, then the value of $\epsilon\binom{\lambda_1,\dots,\lambda_n}{\mu_1,\dots,\mu_n}$

† Empty products are, as usual, defined to have the value 1. This implies, in particular, that for $n = 1$ every $\epsilon$-symbol is equal to 1.
‡ Definition 1.1.3 implies, of course, that $\epsilon(\lambda_{k_1},\dots,\lambda_{k_n}) = \operatorname{sgn} \prod_{1 \le r < s \le n} (\lambda_{k_s} - \lambda_{k_r})$.
remains unaltered. To prove this we observe that

$$(\lambda_{k_j} - \lambda_{k_i})(\mu_{k_j} - \mu_{k_i}) = (\lambda_s - \lambda_r)(\mu_s - \mu_r), \tag{1.1.1}$$

where

$$r = \min(k_i, k_j), \qquad s = \max(k_i, k_j). \tag{1.1.2}$$

Now if $r$, $s$ (such that $1 \le r < s \le n$) are given, then there exist unique integers $i$, $j$ (such that $1 \le i < j \le n$) satisfying (1.1.2). Thus there is a biunique correspondence (i.e. a one-one correspondence) between the pairs $k_i$, $k_j$ and the pairs $r$, $s$. Hence, by (1.1.1),

$$\prod_{1 \le i < j \le n} (\lambda_{k_j} - \lambda_{k_i})(\mu_{k_j} - \mu_{k_i}) = \prod_{1 \le r < s \le n} (\lambda_s - \lambda_r)(\mu_s - \mu_r).$$

Therefore, by Exercise 1.1.1,

$$\operatorname{sgn} \prod_{1 \le i < j \le n} (\lambda_{k_j} - \lambda_{k_i}) \cdot \operatorname{sgn} \prod_{1 \le i < j \le n} (\mu_{k_j} - \mu_{k_i}) = \operatorname{sgn} \prod_{1 \le r < s \le n} (\lambda_s - \lambda_r) \cdot \operatorname{sgn} \prod_{1 \le r < s \le n} (\mu_s - \mu_r),$$

i.e.

$$\epsilon\binom{\lambda_{k_1},\dots,\lambda_{k_n}}{\mu_{k_1},\dots,\mu_{k_n}} = \epsilon\binom{\lambda_1,\dots,\lambda_n}{\mu_1,\dots,\mu_n}.$$

THEOREM 1.1.3. Let $1 \le r < s \le n$. Then

$$\epsilon(1,\dots,r-1,\, s,\, r+1,\dots,s-1,\, r,\, s+1,\dots,n) = -1.$$

The expression on the left-hand side is, of course, simply $\epsilon(1, 2,\dots,n)$ with $r$ and $s$ interchanged. Denoting this expression by $\epsilon(\lambda_1,\dots,\lambda_n)$, we observe that in the product

$$\prod_{1 \le i < j \le n} (\lambda_j - \lambda_i)$$

there are precisely $2(s-r-1)+1 = 2s-2r-1$ negative factors, namely

$$r-(r+1),\ r-(r+2),\ \dots,\ r-(s-1),\ r-s,$$
$$(r+1)-s,\ (r+2)-s,\ \dots,\ (s-1)-s.$$

Hence $\epsilon(\lambda_1,\dots,\lambda_n) = (-1)^{2s-2r-1} = -1$, as asserted.

The results obtained so far are sufficient for the discussion in § 1.2 and § 1.3. The proof of Laplace's expansion theorem in § 1.4, however, presupposes a further identity.

THEOREM 1.1.4. If $(r_1,\dots,r_n) = \mathscr{A}(1,\dots,n)$, $(s_1,\dots,s_n) = \mathscr{A}(1,\dots,n)$, and $1 \le k < n$, then

$$\epsilon\binom{r_1,\dots,r_n}{s_1,\dots,s_n} = (-1)^{r_1+\dots+r_k+s_1+\dots+s_k}\, \epsilon\binom{r_1,\dots,r_k}{s_1,\dots,s_k}\, \epsilon\binom{r_{k+1},\dots,r_n}{s_{k+1},\dots,s_n}.$$
By Exercise 1.1.1 we have

$$\epsilon(r_1,\dots,r_n) = \operatorname{sgn} \prod_{1 \le i < j \le k} (r_j - r_i) \cdot \operatorname{sgn} \prod_{k+1 \le i < j \le n} (r_j - r_i) \cdot \operatorname{sgn} \prod_{\substack{1 \le i \le k \\ k+1 \le j \le n}} (r_j - r_i)$$
$$= \epsilon(r_1,\dots,r_k)\, \epsilon(r_{k+1},\dots,r_n)\, (-1)^{\nu_1+\dots+\nu_k}, \tag{1.1.3}$$

where, for $1 \le i \le k$, $\nu_i$ denotes the number of numbers among $r_{k+1},\dots,r_n$ which are smaller than $r_i$. Let $r_1',\dots,r_k'$ be defined by the relations

$$(r_1',\dots,r_k') = \mathscr{A}(r_1,\dots,r_k), \qquad r_1' < \dots < r_k',$$

and denote by $\nu_i'$ ($1 \le i \le k$) the number of numbers among $r_{k+1},\dots,r_n$ which are smaller than $r_i'$. Then

$$\nu_1' = r_1'-1, \quad \nu_2' = r_2'-2, \quad \dots, \quad \nu_k' = r_k'-k,$$

and so

$$\nu_1+\dots+\nu_k = \nu_1'+\dots+\nu_k' = r_1+\dots+r_k - \tfrac{1}{2}k(k+1),$$

and hence, by (1.1.3),

$$\epsilon(r_1,\dots,r_n) = (-1)^{r_1+\dots+r_k-\frac{1}{2}k(k+1)}\, \epsilon(r_1,\dots,r_k)\, \epsilon(r_{k+1},\dots,r_n).$$

Similarly

$$\epsilon(s_1,\dots,s_n) = (-1)^{s_1+\dots+s_k-\frac{1}{2}k(k+1)}\, \epsilon(s_1,\dots,s_k)\, \epsilon(s_{k+1},\dots,s_n),$$

and the theorem now follows at once by Definition 1.1.3 (ii).

1.2. Elementary properties of determinants

1.2.1. We shall now be concerned with the study of certain properties of square arrays of (real or complex) numbers. A typical array is

$$\begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}. \tag{1.2.1}$$

DEFINITION 1.2.1. The $n^2$ numbers $a_{ij}$ ($i, j = 1,\dots,n$) are the ELEMENTS of the array (1.2.1). The elements $a_{i1}, a_{i2},\dots,a_{in}$ constitute the $i$-th ROW, and the elements $a_{1j}, a_{2j},\dots,a_{nj}$ constitute the $j$-th COLUMN of the array. The elements $a_{11}, a_{22},\dots,a_{nn}$ constitute the DIAGONAL of the array, and are called the DIAGONAL ELEMENTS.
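The sign conventions of § 1.1 are easy to check mechanically. The following sketch (in Python, which is of course not part of the original text) implements the signum function of Definition 1.1.2 and the ε-symbol of Definition 1.1.3 (i) directly as the sign of a product of differences:

```python
from math import prod

def sgn(x):
    # signum function of Definition 1.1.2
    return (x > 0) - (x < 0)

def eps(lam):
    # epsilon-symbol of Definition 1.1.3 (i): the sign of the product of
    # (lam[s] - lam[r]) over all pairs r < s; empty products count as 1
    return sgn(prod(lam[s] - lam[r]
                    for r in range(len(lam))
                    for s in range(r + 1, len(lam))))

# an increasing arrangement has epsilon = 1 (Exercise 1.1.2)
print(eps((1, 2, 3)))   # 1
# interchanging two entries reverses the sign (Theorem 1.1.3)
print(eps((3, 2, 1)))   # -1
# a repeated entry gives 0
print(eps((1, 1, 2)))   # 0
```

The checks mirror Exercise 1.1.2 and Theorem 1.1.3: an increasing arrangement gives 1, an interchange of two entries gives −1, and a repetition gives 0.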
The double suffix notation used in (1.2.1) is particularly appropriate since the two suffixes of an element specify completely its position in the array. We shall reserve the first suffix for the row and the second for the column, so that $a_{ij}$ denotes the element standing in the $i$-th row and $j$-th column of the array (1.2.1). With each square array we associate a certain number known as its determinant.

DEFINITION 1.2.2. The DETERMINANT of the array (1.2.1) is the number

$$\sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, a_{1\lambda_1} \cdots a_{n\lambda_n}, \tag{1.2.2}$$

where the summation extends over all the $n!$ arrangements $(\lambda_1,\dots,\lambda_n)$ of $(1,\dots,n)$.† This determinant is denoted by

$$\begin{vmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{vmatrix} \tag{1.2.3}$$

or, more briefly, by $|a_{ij}|_n$.

Determinants were first written in the form (1.2.3), though without the use of double suffixes, by Cayley in 1841. In practice, we often use a single letter, such as $D$, to denote a determinant. The determinant (1.2.3) associated with the array (1.2.1) is plainly a polynomial, of degree $n$, in the $n^2$ elements of the array.

The determinant of the array consisting of the single element $a_{11}$ is, of course, equal to $a_{11}$. Further, we have

$$\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = \epsilon(1,2)a_{11}a_{22} + \epsilon(2,1)a_{12}a_{21} = a_{11}a_{22} - a_{12}a_{21};$$

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \epsilon(1,2,3)a_{11}a_{22}a_{33} + \epsilon(1,3,2)a_{11}a_{23}a_{32} + \epsilon(2,1,3)a_{12}a_{21}a_{33} + \epsilon(2,3,1)a_{12}a_{23}a_{31} + \epsilon(3,1,2)a_{13}a_{21}a_{32} + \epsilon(3,2,1)a_{13}a_{22}a_{31}$$
$$= a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31}.$$

We observe that each term of the expression (1.2.2) for the determinant $|a_{ij}|_n$ contains one element from each row and one element from each column of the array (1.2.1). Hence, if any array

† The same convention will be observed whenever a symbol such as $(\lambda_1,\dots,\lambda_n)$ appears under the summation sign.
contains a row or a column consisting entirely of zeros, its determinant is equal to 0.

A determinant is a number associated with a square array. However, it is customary to use the term 'determinant' for the array itself as well as for this number. This usage is ambiguous but convenient, and we shall adopt it since it will always be clear from the context whether we refer to the array or to the value of the determinant associated with it. In view of this convention we may speak, for instance, about the elements, rows, and columns of a determinant. The determinant (1.2.3) will be called an $n$-rowed determinant, or a determinant of order $n$.

1.2.2. Definition 1.2.2 suffers from a lack of symmetry between the row suffixes and the column suffixes. For the row suffixes appearing in every term of the sum (1.2.2) are fixed as $1,\dots,n$, whereas the column suffixes vary from term to term. The following theorem shows, however, that this lack of symmetry is only apparent.

THEOREM 1.2.1. Let $D$ be the value of the determinant (1.2.3).

(i) If $(\lambda_1,\dots,\lambda_n)$ is any fixed arrangement of $(1,\dots,n)$, then

$$D = \sum_{(\mu_1,\dots,\mu_n)} \epsilon\binom{\lambda_1,\dots,\lambda_n}{\mu_1,\dots,\mu_n}\, a_{\lambda_1\mu_1} \cdots a_{\lambda_n\mu_n}.$$

(ii) If $(\mu_1,\dots,\mu_n)$ is any fixed arrangement of $(1,\dots,n)$, then

$$D = \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon\binom{\lambda_1,\dots,\lambda_n}{\mu_1,\dots,\mu_n}\, a_{\lambda_1\mu_1} \cdots a_{\lambda_n\mu_n}.$$

In view of Definition 1.2.2 we have

$$D = \sum_{(\nu_1,\dots,\nu_n)} \epsilon(\nu_1,\dots,\nu_n)\, a_{1\nu_1} \cdots a_{n\nu_n}. \tag{1.2.4}$$

Let the same derangement which changes $(1,\dots,n)$ into the fixed arrangement $(\lambda_1,\dots,\lambda_n)$ change $(\nu_1,\dots,\nu_n)$ into $(\mu_1,\dots,\mu_n)$. Then

$$a_{1\nu_1} \cdots a_{n\nu_n} = a_{\lambda_1\mu_1} \cdots a_{\lambda_n\mu_n},$$

and, by Theorem 1.1.2 (p. 3),

$$\epsilon(\nu_1,\dots,\nu_n) = \epsilon\binom{1,\ \dots,\ n}{\nu_1,\dots,\nu_n} = \epsilon\binom{\lambda_1,\dots,\lambda_n}{\mu_1,\dots,\mu_n}.$$
Hence, by Theorem 1.1.1 (i) (p. 2),

$$D = \sum_{(\mu_1,\dots,\mu_n)} \epsilon\binom{\lambda_1,\dots,\lambda_n}{\mu_1,\dots,\mu_n}\, a_{\lambda_1\mu_1} \cdots a_{\lambda_n\mu_n},$$

and the first part of the theorem is therefore proved.

To prove the second part we again start from (1.2.4). Let the same derangement which changes $(\nu_1,\dots,\nu_n)$ into the fixed arrangement $(\mu_1,\dots,\mu_n)$ change $(1,\dots,n)$ into $(\lambda_1,\dots,\lambda_n)$. Then, by Theorem 1.1.2,

$$\epsilon(\nu_1,\dots,\nu_n) = \epsilon\binom{1,\ \dots,\ n}{\nu_1,\dots,\nu_n} = \epsilon\binom{\lambda_1,\dots,\lambda_n}{\mu_1,\dots,\mu_n},$$

and also $a_{1\nu_1} \cdots a_{n\nu_n} = a_{\lambda_1\mu_1} \cdots a_{\lambda_n\mu_n}$. Hence, by Theorem 1.1.1 (ii),

$$D = \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon\binom{\lambda_1,\dots,\lambda_n}{\mu_1,\dots,\mu_n}\, a_{\lambda_1\mu_1} \cdots a_{\lambda_n\mu_n},$$

as asserted.

THEOREM 1.2.2. The value of a determinant remains unaltered when the rows and columns are interchanged, i.e. $|a_{ij}|_n = |a_{ji}|_n$.

Write $b_{rs} = a_{sr}$ ($r, s = 1,\dots,n$). We have to show that $|a_{ij}|_n = |b_{ij}|_n$. Now, by Theorem 1.2.1 (ii) and Definition 1.2.2,

$$|b_{ij}|_n = \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon\binom{\lambda_1,\dots,\lambda_n}{1,\ \dots,\ n}\, b_{\lambda_1 1} \cdots b_{\lambda_n n} = \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, a_{1\lambda_1} \cdots a_{n\lambda_n} = |a_{ij}|_n,$$

and the theorem is therefore proved.

EXERCISE 1.2.1. Give a direct verification of Theorem 1.2.2 for 2-rowed and 3-rowed determinants.

Theorem 1.2.2 shows that there is symmetry between the rows and columns of a determinant. Hence every statement proved about the rows of a determinant is equally valid for columns, and conversely.

THEOREM 1.2.3. If two rows (or columns) of a determinant $D$ are interchanged, then the resulting determinant has the value $-D$.
Let $1 \le r < s \le n$, and denote by $D' = |a_{ij}'|_n$ the determinant obtained by interchanging the $r$-th and $s$-th rows in $D = |a_{ij}|_n$. Then

$$a_{ij}' = a_{sj} \quad (i = r), \qquad a_{ij}' = a_{rj} \quad (i = s), \qquad a_{ij}' = a_{ij} \quad (i \ne r;\ i \ne s).$$

Hence, by Definition 1.2.2,

$$D' = \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, a_{1\lambda_1} \cdots a_{s\lambda_r} \cdots a_{r\lambda_s} \cdots a_{n\lambda_n}.$$

But, by Theorem 1.1.3 (p. 4), $\epsilon(1,\dots,s,\dots,r,\dots,n) = -1$, and so

$$D' = -\sum_{(\lambda_1,\dots,\lambda_n)} \epsilon\binom{1,\dots,s,\dots,r,\dots,n}{\lambda_1,\ \dots\ \dots,\ \lambda_n}\, a_{1\lambda_1} \cdots a_{s\lambda_r} \cdots a_{r\lambda_s} \cdots a_{n\lambda_n}.$$

Hence, by Theorem 1.2.1 (i), $D' = -D$.

COROLLARY. If two rows (or two columns) of a determinant are identical, then the determinant vanishes.

Let $D$ be a determinant with two identical rows, and denote by $D'$ the determinant obtained from $D$ by interchanging these two rows. Then obviously $D' = D$. But, by Theorem 1.2.3, $D' = -D$, and therefore $D = 0$.

EXERCISE 1.2.2. Let $r_1 < \dots < r_k$. Show that, if the rows with suffixes $r_1, r_2,\dots,r_k$ of a determinant $D$ are moved into 1st, 2nd, ..., $k$-th place respectively, while the relative order of the remaining rows stays unchanged, then the resulting determinant is equal to $(-1)^{r_1+\dots+r_k-\frac{1}{2}k(k+1)} D$.

When every element of a particular row or column of a determinant is multiplied by a constant $k$, we say that the row or column in question is multiplied by $k$.

THEOREM 1.2.4. If a row (or column) of a determinant is multiplied by a constant $k$, then the value of the determinant is also multiplied by $k$.
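The elementary properties established in this section can be verified numerically from Definition 1.2.2 itself. The sketch below (Python, illustrative only; the sample array is ours) computes a determinant as the signed sum over all $n!$ arrangements, the sign being obtained by counting inversions, which agrees with the ε-symbol:

```python
from itertools import permutations
from math import prod

def det(a):
    # determinant via the arrangement sum (Definition 1.2.2); the sign of an
    # arrangement is (-1)^(number of inversions), matching the epsilon-symbol
    n = len(a)
    total = 0
    for lam in permutations(range(n)):
        inv = sum(lam[r] > lam[s] for r in range(n) for s in range(r + 1, n))
        total += (-1) ** inv * prod(a[i][lam[i]] for i in range(n))
    return total

d = [[9, 7, 3], [6, 3, 6], [15, 8, 7]]
swapped = [d[1], d[0], d[2]]                   # interchange rows 1 and 2
scaled = [[5 * x for x in d[0]], d[1], d[2]]   # multiply row 1 by 5

print(det(swapped) == -det(d))    # True  (Theorem 1.2.3)
print(det(scaled) == 5 * det(d))  # True  (Theorem 1.2.4)
# transposition leaves the value unchanged (Theorem 1.2.2)
print(det([list(r) for r in zip(*d)]) == det(d))  # True
```

Each check corresponds to one of Theorems 1.2.2–1.2.4; since the arithmetic is exact over the integers, the equalities hold identically rather than approximately.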
Let $D = |a_{ij}|_n$ be a given determinant and let $D'$ be the determinant obtained from it by multiplying the $r$-th row by $k$. Then

$$D' = \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, a_{1\lambda_1} \cdots (k a_{r\lambda_r}) \cdots a_{n\lambda_n} = k \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, a_{1\lambda_1} \cdots a_{n\lambda_n} = kD.$$

The next theorem provides a method for expressing any determinant as a sum of two determinants.

THEOREM 1.2.5.

$$\begin{vmatrix} a_{11} & \dots & a_{1r}+a_{1r}' & \dots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \dots & a_{nr}+a_{nr}' & \dots & a_{nn} \end{vmatrix} = \begin{vmatrix} a_{11} & \dots & a_{1r} & \dots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \dots & a_{nr} & \dots & a_{nn} \end{vmatrix} + \begin{vmatrix} a_{11} & \dots & a_{1r}' & \dots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \dots & a_{nr}' & \dots & a_{nn} \end{vmatrix}.$$

Denoting the determinant on the left-hand side by $|b_{ij}|_n$, we have

$$b_{ij} = a_{ij} \quad (j \ne r), \qquad b_{ir} = a_{ir} + a_{ir}' \quad (i = 1,\dots,n).$$

Hence, by Theorem 1.2.1 (ii) (p. 7),

$$|b_{ij}|_n = \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, b_{\lambda_1 1} \cdots b_{\lambda_r r} \cdots b_{\lambda_n n}$$
$$= \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, a_{\lambda_1 1} \cdots (a_{\lambda_r r} + a_{\lambda_r r}') \cdots a_{\lambda_n n}$$
$$= \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, a_{\lambda_1 1} \cdots a_{\lambda_r r} \cdots a_{\lambda_n n} + \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, a_{\lambda_1 1} \cdots a_{\lambda_r r}' \cdots a_{\lambda_n n},$$

which proves the theorem.
EXERCISE 1.2.3. State the analogous result for rows.

A useful corollary to Theorem 1.2.5 can now be easily proved by induction. It enables us to express a determinant, each of whose elements is the sum of $h$ terms, as the sum of $h^n$ determinants.

COROLLARY. If $a_{ij} = a_{ij}^{(1)} + a_{ij}^{(2)} + \dots + a_{ij}^{(h)}$ ($i, j = 1,\dots,n$), then $|a_{ij}|_n$ is equal to the sum of the $h^n$ determinants obtained by replacing, for $j = 1,\dots,n$, the $j$-th column of $|a_{ij}|_n$ by the column with elements $a_{1j}^{(k_j)},\dots,a_{nj}^{(k_j)}$, where each of $k_1,\dots,k_n$ runs independently over the values $1,\dots,h$.

THEOREM 1.2.6. The value of a determinant remains unchanged if to any row (or column) is added any multiple of another row (or column).

By saying that the $s$-th row of a determinant is added to the $r$-th row we mean, of course, that every element of the $s$-th row is added to the corresponding element of the $r$-th row. Similar terminology is used for columns.

Let $D = |a_{ij}|_n$ and suppose that $D'$ denotes the determinant obtained when $k$ times the $s$-th row is added to the $r$-th row in $D$. Assuming that $r < s$ we have

$$D' = \begin{vmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{r1}+ka_{s1} & \dots & a_{rn}+ka_{sn} \\ \vdots & & \vdots \\ a_{s1} & \dots & a_{sn} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{vmatrix}.$$

Hence, by Theorem 1.2.5 (as applied to rows),

$$D' = \begin{vmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{r1} & \dots & a_{rn} \\ \vdots & & \vdots \\ a_{s1} & \dots & a_{sn} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{vmatrix} + \begin{vmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ ka_{s1} & \dots & ka_{sn} \\ \vdots & & \vdots \\ a_{s1} & \dots & a_{sn} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{vmatrix}$$
and so, by Theorem 1.2.4 and the corollary to Theorem 1.2.3,

$$D' = D + k \begin{vmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{s1} & \dots & a_{sn} \\ \vdots & & \vdots \\ a_{s1} & \dots & a_{sn} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{vmatrix} = D.$$

1.3. Multiplication of determinants

We shall next prove that it is always possible to express the product of two determinants of the same order $n$ as a determinant of order $n$.

THEOREM 1.3.1. (Multiplication theorem for determinants) Let $A = |a_{ij}|_n$ and $B = |b_{ij}|_n$ be given determinants, and write $C = |c_{ij}|_n$, where

$$c_{rs} = \sum_{i=1}^n a_{ri} b_{is} \qquad (r, s = 1,\dots,n).$$

Then

$$AB = C. \tag{1.3.1}$$

We have

$$C = \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, c_{1\lambda_1} \cdots c_{n\lambda_n} = \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n) \Bigl(\sum_{\mu_1=1}^n a_{1\mu_1} b_{\mu_1\lambda_1}\Bigr) \cdots \Bigl(\sum_{\mu_n=1}^n a_{n\mu_n} b_{\mu_n\lambda_n}\Bigr)$$
$$= \sum_{\mu_1=1}^n \cdots \sum_{\mu_n=1}^n a_{1\mu_1} \cdots a_{n\mu_n} \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, b_{\mu_1\lambda_1} \cdots b_{\mu_n\lambda_n}. \tag{1.3.2}$$

By Definition 1.2.2 the inner sum in (1.3.2) is equal to the determinant

$$\begin{vmatrix} b_{\mu_1 1} & \dots & b_{\mu_1 n} \\ \vdots & & \vdots \\ b_{\mu_n 1} & \dots & b_{\mu_n n} \end{vmatrix},$$

whose $i$-th row is the $\mu_i$-th row of $B$. Hence, if any two $\mu$'s are equal, then, by the corollary to Theorem 1.2.3, the inner sum in (1.3.2) vanishes. It follows that in the $n$-fold summation in (1.3.2) we can omit all sets of $\mu$'s which contain at least two equal numbers. The summation then reduces to a simple summation over the $n!$ arrangements $(\mu_1,\dots,\mu_n)$, and we therefore have

$$C = \sum_{(\mu_1,\dots,\mu_n)} a_{1\mu_1} \cdots a_{n\mu_n} \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, b_{\mu_1\lambda_1} \cdots b_{\mu_n\lambda_n}$$
$$= \sum_{(\mu_1,\dots,\mu_n)} \epsilon(\mu_1,\dots,\mu_n)\, a_{1\mu_1} \cdots a_{n\mu_n} \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon\binom{\mu_1,\dots,\mu_n}{\lambda_1,\dots,\lambda_n}\, b_{\mu_1\lambda_1} \cdots b_{\mu_n\lambda_n}.$$
Hence, by Theorem 1.2.1 (i) (p. 7), the inner sum is equal to $B$ for every arrangement $(\mu_1,\dots,\mu_n)$, and so

$$C = B \sum_{(\mu_1,\dots,\mu_n)} \epsilon(\mu_1,\dots,\mu_n)\, a_{1\mu_1} \cdots a_{n\mu_n} = AB.$$

The theorem just proved shows how we may form a determinant which is equal to the product of two given determinants $A$ and $B$. We have, in fact, $AB = C$, where the element standing in the $r$-th row and $s$-th column of $C$ is obtained by multiplying together the corresponding elements in the $r$-th row of $A$ and the $s$-th column of $B$ and adding the products thus obtained. The determinant $C$ constructed in this way may be said to have been obtained by multiplying $A$ and $B$ 'rows by columns'. Now, by Theorem 1.2.2, the values of $A$ and $B$ are unaltered if rows and columns in either determinant or in both determinants are interchanged. Hence we can equally well form the product $AB$ by carrying out the multiplication rows by rows, or columns by columns, or columns by rows. These conclusions are expressed in the next theorem.

THEOREM 1.3.2. The equality (1.3.1) continues to hold if the determinant $C = |c_{ij}|_n$ is defined by any one of the following sets of relations:

$$c_{rs} = \sum_{i=1}^n a_{ri} b_{si} \qquad (r, s = 1,\dots,n);$$
$$c_{rs} = \sum_{i=1}^n a_{ir} b_{is} \qquad (r, s = 1,\dots,n);$$
$$c_{rs} = \sum_{i=1}^n a_{ir} b_{si} \qquad (r, s = 1,\dots,n).$$

An interesting application of Theorem 1.3.2 will be given in § 1.4.1 (p. 19).

EXERCISE 1.3.1. Use the definition of a determinant to show that

$$\begin{vmatrix} a_{11} & \dots & a_{1m} & 0 & \dots & 0 \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{m1} & \dots & a_{mm} & 0 & \dots & 0 \\ 0 & \dots & 0 & 1 & \dots & 0 \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \dots & 0 & 0 & \dots & 1 \end{vmatrix} = |a_{ij}|_m.$$
Deduce, by means of Theorem 1.3.1, that

$$\begin{vmatrix} a_{11} & \dots & a_{1m} & 0 & \dots & 0 \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{m1} & \dots & a_{mm} & 0 & \dots & 0 \\ 0 & \dots & 0 & b_{11} & \dots & b_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \dots & 0 & b_{n1} & \dots & b_{nn} \end{vmatrix} = |a_{ij}|_m\, |b_{ij}|_n.$$

1.4. Expansion theorems

1.4.1. We have already obtained a number of results which can be used in the evaluation of determinants. A procedure that is still more effective for this purpose consists in expressing a determinant in terms of other determinants of lower order. The object of the present section is to develop such a procedure.

DEFINITION 1.4.1. The COFACTOR $A_{rs}$ of the element $a_{rs}$ in the determinant $D = |a_{ij}|_n$ is defined as

$$A_{rs} = (-1)^{r+s} D_{rs} \qquad (r, s = 1,\dots,n),$$

where $D_{rs}$ is the determinant of order $n-1$ obtained when the $r$-th row and $s$-th column are deleted from $D$.

For example, if

$$D = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix},$$

then

$$A_{11} = (-1)^{1+1} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} = a_{22}a_{33} - a_{23}a_{32}$$

and

$$A_{23} = (-1)^{2+3} \begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} = a_{12}a_{31} - a_{11}a_{32}.$$

EXERCISE 1.4.1. Suppose that $|b_{ij}|_n$ is the determinant obtained when two adjacent rows (or columns) of a determinant $|a_{ij}|_n$ are interchanged. Show that if the element $a_{rs}$ of $|a_{ij}|_n$ becomes the element $b_{\rho\tau}$ of $|b_{ij}|_n$, then $B_{\rho\tau} = -A_{rs}$, where $A_{rs}$ denotes the cofactor of $a_{rs}$ in $|a_{ij}|_n$ and $B_{\rho\tau}$ the cofactor of $b_{\rho\tau}$ in $|b_{ij}|_n$.
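Returning for a moment to § 1.3, the multiplication theorem and the alternative product rules of Theorem 1.3.2 admit a quick numerical check. The following sketch (Python, illustrative only; the sample arrays are ours) forms the product determinant 'rows by columns' and 'rows by rows' and compares each with the product of the two values:

```python
from itertools import permutations
from math import prod

def det(a):
    # determinant via the arrangement sum of Definition 1.2.2
    n = len(a)
    return sum((-1) ** sum(lam[r] > lam[s] for r in range(n) for s in range(r + 1, n))
               * prod(a[i][lam[i]] for i in range(n))
               for lam in permutations(range(n)))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# 'rows by columns', as in Theorem 1.3.1: c_rs = sum_i a_ri * b_is
C = [[sum(A[r][i] * B[i][s] for i in range(2)) for s in range(2)]
     for r in range(2)]
print(det(C) == det(A) * det(B))   # True

# 'rows by rows', as in Theorem 1.3.2: c_rs = sum_i a_ri * b_si
C2 = [[sum(A[r][i] * B[s][i] for i in range(2)) for s in range(2)]
      for r in range(2)]
print(det(C2) == det(A) * det(B))  # True
```

Here $\det A = \det B = -2$, so both products come out to 4, in agreement with (1.3.1).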
THEOREM 1.4.1. (Expansion of determinants in terms of rows and columns) If the cofactor of $a_{pq}$ in $D = |a_{ij}|_n$ is denoted by $A_{pq}$, then

$$\sum_{k=1}^n a_{rk} A_{rk} = D \qquad (r = 1,\dots,n), \tag{1.4.1}$$
$$\sum_{k=1}^n a_{kr} A_{kr} = D \qquad (r = 1,\dots,n). \tag{1.4.2}$$

This theorem states, in fact, that we may obtain the value of a determinant by multiplying the elements of any one row or column by their cofactors and adding the products thus formed. The identity (1.4.1) is known as the expansion of the determinant $D$ in terms of the elements of the $r$-th row, or simply as the expansion of $D$ in terms of the $r$-th row. Similarly, (1.4.2) is known as the expansion of $D$ in terms of the $r$-th column. In view of Theorem 1.2.2 (p. 8) it is, of course, sufficient to prove (1.4.1).

We begin by showing that

$$\begin{vmatrix} 1 & 0 & \dots & 0 \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{vmatrix} = \begin{vmatrix} a_{22} & \dots & a_{2n} \\ \vdots & & \vdots \\ a_{n2} & \dots & a_{nn} \end{vmatrix}. \tag{1.4.3}$$

Let $B$, $B'$ denote the values of the determinants on the left-hand side and the right-hand side respectively of (1.4.3). We write $B = |b_{ij}|_n$, so that $b_{11} = 1$, $b_{12} = \dots = b_{1n} = 0$. Then

$$B = \sum_{(\lambda_1,\dots,\lambda_n)} \epsilon(\lambda_1,\dots,\lambda_n)\, b_{1\lambda_1} \cdots b_{n\lambda_n} = \sum_{(\lambda_2,\dots,\lambda_n)} \epsilon(1, \lambda_2,\dots,\lambda_n)\, b_{2\lambda_2} \cdots b_{n\lambda_n},$$

since the terms with $\lambda_1 \ne 1$ all vanish. But, for any arrangement $(\lambda_2,\dots,\lambda_n)$ of $(2,\dots,n)$, we clearly have $\epsilon(1, \lambda_2,\dots,\lambda_n) = \epsilon(\lambda_2,\dots,\lambda_n)$. Hence

$$B = \sum_{(\lambda_2,\dots,\lambda_n)} \epsilon(\lambda_2,\dots,\lambda_n)\, b_{2\lambda_2} \cdots b_{n\lambda_n} = B',$$

as asserted.
Next, by Theorems 1.2.4 and 1.2.5 (pp. 9–10), we have

$$D = \begin{vmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{r1} & \dots & a_{rn} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{vmatrix} = \sum_{k=1}^n a_{rk} \begin{vmatrix} a_{11} & \dots & a_{1k} & \dots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ 0 & \dots & 1 & \dots & 0 \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \dots & a_{nk} & \dots & a_{nn} \end{vmatrix} = \sum_{k=1}^n a_{rk}\, \Delta_{rk}, \tag{1.4.4}$$

where $\Delta_{rk}$ is the determinant obtained from $D$ when the $k$-th element in the $r$-th row is replaced by 1 and all other elements in the $r$-th row are replaced by 0. By repeated application of Theorem 1.2.3 (p. 8) we obtain

$$\Delta_{rk} = (-1)^{r-1} \begin{vmatrix} 0 & \dots & 0 & 1 & 0 & \dots & 0 \\ a_{11} & \dots & a_{1,k-1} & a_{1k} & a_{1,k+1} & \dots & a_{1n} \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ a_{r-1,1} & \dots & a_{r-1,k-1} & a_{r-1,k} & a_{r-1,k+1} & \dots & a_{r-1,n} \\ a_{r+1,1} & \dots & a_{r+1,k-1} & a_{r+1,k} & a_{r+1,k+1} & \dots & a_{r+1,n} \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & \dots & a_{n,k-1} & a_{nk} & a_{n,k+1} & \dots & a_{nn} \end{vmatrix}$$

$$= (-1)^{(r-1)+(k-1)} \begin{vmatrix} 1 & 0 & \dots & 0 & 0 & \dots & 0 \\ a_{1k} & a_{11} & \dots & a_{1,k-1} & a_{1,k+1} & \dots & a_{1n} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ a_{r-1,k} & a_{r-1,1} & \dots & a_{r-1,k-1} & a_{r-1,k+1} & \dots & a_{r-1,n} \\ a_{r+1,k} & a_{r+1,1} & \dots & a_{r+1,k-1} & a_{r+1,k+1} & \dots & a_{r+1,n} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ a_{nk} & a_{n1} & \dots & a_{n,k-1} & a_{n,k+1} & \dots & a_{nn} \end{vmatrix}.$$

Hence, by (1.4.3),

$$\Delta_{rk} = (-1)^{r+k} D_{rk} = A_{rk},$$

where $D_{rk}$ denotes the determinant obtained when the $r$-th row and $k$-th column are deleted from $D$. Hence, by (1.4.4), $D = \sum_{k=1}^n a_{rk} A_{rk}$, and the theorem is proved.
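Theorem 1.4.1 yields a recursive procedure for evaluating a determinant: expand in terms of the first row, each cofactor being $(-1)^{1+k}$ times a minor of order $n-1$. A minimal Python sketch (illustrative, not part of the original text; the sample arrays are ours):

```python
def det(a):
    # recursive evaluation by Theorem 1.4.1: expansion in terms of the
    # first row, with cofactors A_1k = (-1)^(1+k) * D_1k
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for k in range(n):
        # minor D_1,k+1: delete the first row and the (k+1)-th column
        minor = [row[:k] + row[k + 1:] for row in a[1:]]
        total += (-1) ** k * a[0][k] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 1], [1, 3, -1], [0, 5, 4]]))  # 39
```

The recursion terminates at order 1, where the determinant is the single element itself, exactly as remarked after Definition 1.2.2.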
We now possess a practical method for evaluating determinants. This consists in first using Theorem 1.2.6 (p. 11) to introduce a number of zeros into some row or column, and then expanding the determinant in terms of that row or column. Consider, for example, the determinant

$$D = \begin{vmatrix} 9 & 7 & 3 & -9 \\ 6 & 3 & 6 & -4 \\ 15 & 8 & 7 & -7 \\ -5 & -6 & 4 & 2 \end{vmatrix}.$$

Adding the last column to each of the first three we have

$$D = \begin{vmatrix} 0 & -2 & -6 & -9 \\ 2 & -1 & 2 & -4 \\ 8 & 1 & 0 & -7 \\ -3 & -4 & 6 & 2 \end{vmatrix}.$$

Next, we add once, twice, and four times the third row to the second row, first row, and fourth row respectively. This leads to the expression

$$D = \begin{vmatrix} 16 & 0 & -6 & -23 \\ 10 & 0 & 2 & -11 \\ 8 & 1 & 0 & -7 \\ 29 & 0 & 6 & -26 \end{vmatrix}.$$

Expanding $D$ in terms of the second column we obtain

$$D = -\begin{vmatrix} 16 & -6 & -23 \\ 10 & 2 & -11 \\ 29 & 6 & -26 \end{vmatrix},$$

and we can continue the process of reduction in a similar manner until $D$ is evaluated.

EXERCISE 1.4.2. Show that $D = -532$.

The expansion theorem (Theorem 1.4.1) can also be used to show that the value of the Vandermonde determinant

$$D = \begin{vmatrix} a_1^{n-1} & a_1^{n-2} & \dots & a_1 & 1 \\ a_2^{n-1} & a_2^{n-2} & \dots & a_2 & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ a_n^{n-1} & a_n^{n-2} & \dots & a_n & 1 \end{vmatrix}$$

is given by

$$D = \prod_{1 \le i < j \le n} (a_i - a_j). \tag{1.4.5}$$
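The reduction carried out above can be checked step by step. The sketch below (Python, illustrative only) confirms that adding the last column to each of the first three leaves the value unchanged (Theorem 1.2.6) and that the determinant of the worked example has the value required by Exercise 1.4.2:

```python
from itertools import permutations
from math import prod

def det(a):
    # determinant via the arrangement sum of Definition 1.2.2
    n = len(a)
    return sum((-1) ** sum(lam[r] > lam[s] for r in range(n) for s in range(r + 1, n))
               * prod(a[i][lam[i]] for i in range(n))
               for lam in permutations(range(n)))

D = [[9, 7, 3, -9],
     [6, 3, 6, -4],
     [15, 8, 7, -7],
     [-5, -6, 4, 2]]

# adding the last column to each of the first three (Theorem 1.2.6)
step1 = [[row[0] + row[3], row[1] + row[3], row[2] + row[3], row[3]]
         for row in D]
print(det(step1) == det(D))   # True: the value is unchanged

print(det(D))                 # -532, as in Exercise 1.4.2
```

The intermediate array `step1` reproduces the second displayed determinant of the worked example, so each stage of the reduction can be audited against the text.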
The assertion is obviously true for n = 2. We shall assume that it is true for n-1, where n ≥ 3, and deduce that it is true for n. We may clearly assume that all the a's are distinct, for otherwise (1.4.5) is true trivially. Consider the determinant

    f(x) = | x^{n-1}    x^{n-2}    ...  x    1 |
           | a_2^{n-1}  a_2^{n-2}  ...  a_2  1 |
           | ...                               |
           | a_n^{n-1}  a_n^{n-2}  ...  a_n  1 |

Expanding it in terms of the first row, we see that it is a polynomial in x, say f(x), of degree not greater than n-1. Moreover f(a_2) = ... = f(a_n) = 0, and so f(x) is divisible by each of the (distinct) factors x-a_2, ..., x-a_n. Thus

    f(x) = K(x-a_2) ... (x-a_n),

and here K is independent of x, as may be seen by comparing the degrees of the two sides of the equation. Now, by (1.4.1), the coefficient of x^{n-1} in f(x) is equal to the (n-1)-rowed Vandermonde determinant in a_2, ..., a_n, which, by the induction hypothesis, is equal to

    Π_{2≤i<j≤n} (a_i - a_j).

This, then, is the value of K; and we have

    f(x) = (x-a_2) ... (x-a_n) Π_{2≤i<j≤n} (a_i - a_j).

We now complete the proof of (1.4.5) by substituting x = a_1.

The result just obtained enables us to derive identities for the discriminants of algebraic equations. The discriminant Δ of the equation

    x^n + a_1 x^{n-1} + ... + a_{n-1} x + a_n = 0,      (1.4.6)

whose roots are θ_1, ..., θ_n, is defined as

    Δ = Π_{1≤i<j≤n} (θ_i - θ_j)².

It follows that Δ = 0 if and only if (1.4.6) has at least two equal roots. To express Δ in terms of the coefficients of (1.4.6) we observe that, in view of (1.4.5),

    Δ = | θ_1^{n-1}  θ_1^{n-2}  ...  1 |   | θ_1^{n-1}  θ_1^{n-2}  ...  1 |
        | ...                          | × | ...                          |
        | θ_n^{n-1}  θ_n^{n-2}  ...  1 |   | θ_n^{n-1}  θ_n^{n-2}  ...  1 |
Carrying out the multiplication columns by columns, we have

    Δ = | s_{2n-2}  s_{2n-3}  ...  s_{n-1} |
        | s_{2n-3}  s_{2n-4}  ...  s_{n-2} |
        | ...                              |
        | s_{n-1}   s_{n-2}   ...  s_0     |

where s_r = θ_1^r + ... + θ_n^r (r = 0, 1, 2, ...). Using Newton's formulae† we can express s_0, s_1, ..., s_{2n-2} in terms of the coefficients a_1, ..., a_n of (1.4.6), and hence obtain Δ in the desired form.

Consider, for example, the cubic equation x³ + px + q = 0. Here

    s_0 = 3,  s_1 = 0,  s_2 = -2p,  s_3 = -3q,  s_4 = 2p²,

and it is easily verified that

    Δ = | s_4  s_3  s_2 |   | 2p²  -3q  -2p |
        | s_3  s_2  s_1 | = | -3q  -2p   0  | = -(4p³ + 27q²).
        | s_2  s_1  s_0 |   | -2p   0    3  |

Hence Δ = -(4p³ + 27q²), and thus at least two roots of x³ + px + q = 0 are equal if and only if 4p³ + 27q² = 0.

EXERCISE 1.4.3. Show, by the method indicated above, that the discriminant of the quadratic equation x² + μx + ν = 0 is μ² - 4ν.

We now resume our discussion of the general theory of determinants.

THEOREM 1.4.2. With the same notation as in Theorem 1.4.1 we have, for r ≠ s,

    Σ_{k=1}^n a_rk A_sk = 0,      Σ_{k=1}^n a_kr A_ks = 0.

In other words, if each element of a row (or column) is multiplied by the cofactor of the corresponding element of another fixed row (or column), then the sum of the n products thus formed is equal to zero.

This result is an easy consequence of Theorem 1.4.1. We need, of course, prove only the first of the two stated identities. If D' = |a'_ij|_n denotes the determinant obtained from D = |a_ij|_n when the s-th row is replaced by the r-th row, then

    a'_ij = a_ij  (i ≠ s),      a'_ij = a_rj  (i = s).

Denoting by A'_ij the cofactor of the element a'_ij in D', we clearly have

    A'_sk = A_sk      (k = 1, ..., n).

Hence, by (1.4.1) (p. 15),

    D' = Σ_{k=1}^n a'_sk A'_sk = Σ_{k=1}^n a_rk A_sk.

But the r-th row and s-th row of D' are identical, and so D' = 0. This completes the proof.

† See Burnside and Panton, The Theory of Equations (10th edition), i. 115-7, or Perron, i. 150-1.

It is often convenient to combine Theorems 1.4.1 and 1.4.2 into a single statement. For this purpose we need a new and most useful notation.

DEFINITION 1.4.2. The symbol δ_rs, known as the KRONECKER DELTA, is defined as

    δ_rs = 1  (r = s),      δ_rs = 0  (r ≠ s).

With the aid of the Kronecker delta, Theorems 1.4.1 and 1.4.2 can be combined in the following single theorem.

THEOREM 1.4.3. If A_pq denotes the cofactor of a_pq in the determinant D = |a_ij|_n, then

    Σ_{k=1}^n a_rk A_sk = δ_rs D,      Σ_{k=1}^n a_kr A_ks = δ_rs D      (r, s = 1, ..., n).

1.4.2. Our next object is to obtain a generalization of the expansion theorem 1.4.1. We require some preliminary definitions.

DEFINITION 1.4.3. A k-rowed MINOR of an n-rowed determinant D is any k-rowed determinant obtained when n-k rows and n-k columns are deleted from D.

Alternatively, we may say that a k-rowed minor of D is obtained by retaining, with their relative order unchanged, only the elements common to k specified rows and k specified columns. For instance, the determinant D_ij, obtained from the n-rowed determinant D by deletion of the i-th row and j-th column, is an (n-1)-rowed minor of D. Each element of D is, of course, a 1-rowed minor of D.

EXERCISE 1.4.4. Let 1 ≤ k < n, and suppose that all k-rowed minors of a given n-rowed determinant D vanish. Show that all (k+1)-rowed minors of D vanish also.
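Theorem 1.4.3 asserts that the "matched" cofactor sums give D while the alien ones vanish; this is easily verified exhaustively on a concrete determinant. The sketch below uses our own helper names and an arbitrarily chosen matrix:

```python
def det(a):
    """Cofactor expansion along the first row (Theorem 1.4.1)."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k] * det([r[:k] + r[k + 1:] for r in a[1:]])
               for k in range(len(a)))

def cofactor(a, i, j):
    """A_ij = (-1)^(i+j) times the minor with row i and column j deleted (0-based)."""
    m = [r[:j] + r[j + 1:] for k, r in enumerate(a) if k != i]
    return (-1) ** (i + j) * det(m)

A = [[2, -1, 3], [0, 4, 1], [5, 2, -2]]
n = len(A)
D = det(A)
for r in range(n):
    for s in range(n):
        # sum_k a_rk A_sk should equal delta_rs * D (Theorem 1.4.3, row form)
        assert sum(A[r][k] * cofactor(A, s, k) for k in range(n)) == (D if r == s else 0)
```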
The k-rowed minor obtained from D by retaining only the elements belonging to rows with suffixes r_1, ..., r_k and columns with suffixes s_1, ..., s_k will be denoted by

    D(r_1, ..., r_k | s_1, ..., s_k).

Thus, for example, if

    D = | a_11  a_12  a_13 |
        | a_21  a_22  a_23 |
        | a_31  a_32  a_33 |

then

    D(1, 3 | 2, 3) = | a_12  a_13 |
                     | a_32  a_33 |

DEFINITION 1.4.4. The COFACTOR (or ALGEBRAIC COMPLEMENT) D̄(r_1, ..., r_k | s_1, ..., s_k) of the minor D(r_1, ..., r_k | s_1, ..., s_k) in a determinant D is defined as

    D̄(r_1, ..., r_k | s_1, ..., s_k) = (-1)^{r_1+...+r_k+s_1+...+s_k} D(r_{k+1}, ..., r_n | s_{k+1}, ..., s_n),

where r_{k+1}, ..., r_n are the n-k numbers among 1, ..., n other than r_1, ..., r_k, and s_{k+1}, ..., s_n are the n-k numbers among 1, ..., n other than s_1, ..., s_k.

We note that for k = 1 this definition reduces to that of a cofactor of an element (Definition 1.4.1, p. 14). If k = n, i.e. if a minor coincides with the entire determinant, it is convenient to define its cofactor as 1.

Consider, by way of illustration, the 4-rowed determinant D = |a_ij|_4. Here

    D̄(2, 3 | 2, 4) = (-1)^{2+3+2+4} D(1, 4 | 1, 3) = - | a_11  a_13 |
                                                        | a_41  a_43 |

THEOREM 1.4.4. (Laplace's expansion theorem.) Let D be an n-rowed determinant, and let k, r_1, ..., r_k be integers such that 1 ≤ k ≤ n and 1 ≤ r_1 < ... < r_k ≤ n. Then

    D = Σ_{1≤u_1<...<u_k≤n} D(r_1, ..., r_k | u_1, ..., u_k) D̄(r_1, ..., r_k | u_1, ..., u_k).

This theorem (which was obtained, in essence, by Laplace in 1772) furnishes us with an expansion of the determinant D in terms of k specified rows, namely, the rows with suffixes r_1, ..., r_k. We form all possible k-rowed minors of D involving all these rows and multiply each of them by its cofactor; the sum of the products thus formed
is then equal to D. An analogous expansion applies, of course, to columns. It should be noted that for k = 1 Theorem 1.4.4 reduces to the identity (1.4.1) on p. 15.

To prove the theorem, let the numbers r_{k+1}, ..., r_n be defined by the requirements

    r_{k+1} < ... < r_n,      (r_1, ..., r_n) = an arrangement of (1, ..., n).

Then, by Theorems 1.2.1(i) (p. 7) and 1.1.4 (p. 4), we have

    D = Σ_{(s_1,...,s_n)} ε(r_1, ..., r_n) ε(s_1, ..., s_n) a_{r_1 s_1} ... a_{r_n s_n}
      = (-1)^{r_1+...+r_k+1+...+k} Σ_{(s_1,...,s_n)} ε(s_1, ..., s_n) a_{r_1 s_1} ... a_{r_n s_n},      (1.4.7)

the summation extending over all arrangements (s_1, ..., s_n) of (1, ..., n). Now we can clearly obtain all arrangements (s_1, ..., s_n) of (1, ..., n), and each arrangement exactly once, by separating the numbers 1, ..., n in all possible ways into a set of k and a set of n-k numbers, and letting (s_1, ..., s_k) vary over all arrangements of the first and (s_{k+1}, ..., s_n) over all arrangements of the second set. Thus the condition of summation in (1.4.7) can be replaced by the conditions

    u_1 < ... < u_k;      u_{k+1} < ... < u_n;      (1.4.8)

    (s_1, ..., s_k) = an arrangement of (u_1, ..., u_k);      (s_{k+1}, ..., s_n) = an arrangement of (u_{k+1}, ..., u_n).      (1.4.9)

Moreover, ε(s_1, ..., s_n) = (-1)^{u_1+...+u_k+1+...+k} ε(s_1, ..., s_k) ε(s_{k+1}, ..., s_n), where the last two symbols denote the signs of the arrangements relative to (u_1, ..., u_k) and (u_{k+1}, ..., u_n) respectively. Indicating by an accent that the summation is to be taken over the integers u_1, ..., u_n and the arrangements satisfying (1.4.8) and (1.4.9), and observing that (-1)^{2(1+...+k)} = 1, we therefore have

    D = Σ' (-1)^{r_1+...+r_k+u_1+...+u_k} {Σ ε(s_1, ..., s_k) a_{r_1 s_1} ... a_{r_k s_k}} {Σ ε(s_{k+1}, ..., s_n) a_{r_{k+1} s_{k+1}} ... a_{r_n s_n}}
      = Σ' (-1)^{r_1+...+r_k+u_1+...+u_k} D(r_1, ..., r_k | u_1, ..., u_k) D(r_{k+1}, ..., r_n | u_{k+1}, ..., u_n)
      = Σ_{u_1<...<u_k} D(r_1, ..., r_k | u_1, ..., u_k) D̄(r_1, ..., r_k | u_1, ..., u_k) Σ_{u_{k+1},...,u_n} 1,

where the inner sum is extended over all integers u_{k+1}, ..., u_n satisfying (1.4.8). Now the integers u_{k+1}, ..., u_n are clearly determined uniquely for each set of u_1, ..., u_k. Hence the value of the inner sum is equal to 1, and the theorem is proved.

The natural way in which products of minors and their cofactors occur in the expansion of a determinant can be made intuitively clear as follows. To expand an n-rowed determinant in terms of the rows with suffixes r_1, ..., r_k, we write every element a_ij in each of these rows in the form a_ij + 0, and every element a_pq in each of the remaining rows in the form 0 + a_pq. Using the corollary to Theorem 1.2.5 (p. 11) we then obtain the given determinant as a sum of 2^n determinants. Each of these either vanishes or else may be expressed (by virtue of Exercise 1.3.1, p. 13, and after a preliminary rearrangement of rows and columns) as a product of a k-rowed minor and its cofactor. The reader will find it helpful actually to carry out the procedure described here, say for the case n = 4, k = 2, r_1 = 1, r_2 = 3.

As an illustration of the use of Laplace's expansion we shall evaluate the determinant

    D = |  0     0     a_13  a_14  a_15 |
        |  0     0     a_23  a_24   0   |
        |  0     0     a_33   0     0   |
        |  0    a_42   a_43  a_44  a_45 |
        | a_51  a_52   a_53  a_54  a_55 |

by expanding it in terms of the first three rows. The only 3-rowed minor which involves these three rows and does not necessarily vanish is

    D(1, 2, 3 | 3, 4, 5) = | a_13  a_14  a_15 |
                           | a_23  a_24   0   |
                           | a_33   0     0   |
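Theorem 1.4.4 is also straightforward to mechanize, which gives an independent check on hand computations such as this one. In the following sketch (helper names are ours) the expansion is taken along an arbitrary set of rows and compared with the ordinary first-row expansion:

```python
from itertools import combinations

def det(a):
    """Cofactor expansion along the first row; det of the empty array is 1."""
    if not a:
        return 1
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k] * det([r[:k] + r[k + 1:] for r in a[1:]])
               for k in range(len(a)))

def submatrix(a, rows, cols):
    """D(rows | cols): keep only the listed rows and columns, order preserved."""
    return [[a[i][j] for j in cols] for i in rows]

def laplace(a, rows):
    """Expand det(a) in terms of the given rows (0-based), as in Theorem 1.4.4."""
    n, k = len(a), len(rows)
    other_rows = [i for i in range(n) if i not in rows]
    total = 0
    for cols in combinations(range(n), k):
        other_cols = [j for j in range(n) if j not in cols]
        # With 0-based suffixes the sign (-1)^(r_1+...+r_k+u_1+...+u_k) is
        # unchanged, since switching to 1-based indices adds the even number 2k.
        sign = (-1) ** (sum(rows) + sum(cols))
        total += (sign * det(submatrix(a, rows, cols))
                       * det(submatrix(a, other_rows, other_cols)))
    return total

A = [[9, 7, 3, -9], [6, 3, 6, -4], [15, 8, 7, -7], [-5, -6, 4, 2]]
assert laplace(A, (0, 2)) == det(A) == -532
```

The test matrix is the one evaluated in Exercise 1.4.2, so the expansion along any choice of rows should again give -532.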
Expanding this minor in terms of the last column, we obtain

    D(1, 2, 3 | 3, 4, 5) = a_15 | a_23  a_24 | = -a_15 a_24 a_33.
                                | a_33   0   |

Furthermore,

    D̄(1, 2, 3 | 3, 4, 5) = (-1)^{1+2+3+3+4+5} D(4, 5 | 1, 2) = |  0    a_42 | = -a_42 a_51,
                                                                | a_51  a_52 |

and so, by Theorem 1.4.4,

    D = D(1, 2, 3 | 3, 4, 5) D̄(1, 2, 3 | 3, 4, 5) = a_15 a_24 a_33 a_42 a_51.

1.5. Jacobi's theorem

With every determinant may be associated a second determinant of the same order whose elements are the cofactors of the elements of the first. We propose now to investigate the relation between two such determinants.

DEFINITION 1.5.1. If A_rs denotes the cofactor of a_rs in D = |a_ij|_n, then D* = |A_ij|_n is known as the ADJUGATE (DETERMINANT) of D.

Our object is to express D* in terms of D and, more generally, to establish the relation between corresponding minors in D and D*. In discussing these questions we shall require an important general principle concerning polynomials in several variables. We recall that two polynomials, say f(x_1, ..., x_m) and g(x_1, ..., x_m), are said to be identically equal if f(x_1, ..., x_m) = g(x_1, ..., x_m) for all values of x_1, ..., x_m. Again, the two polynomials are said to be formally equal if the corresponding coefficients in f and g are equal. It is well known that identity and formal equality imply each other. We shall express this relation between the polynomials f and g by writing f = g.

THEOREM 1.5.1. Let f, g, h be polynomials in m variables. If fg = fh and f ≠ 0, then g = h.

When m = 1 this is a well-known elementary result. For the proof of the theorem for m > 1 we must refer the reader elsewhere.†

THEOREM 1.5.2. If D is an n-rowed determinant and D* its adjugate, then D* = D^{n-1}.

This formula was discovered by Cauchy in 1812. To prove it, we write D = |a_ij|_n, D* = |A_ij|_n and form the product DD* rows by rows.

† See, for example, van der Waerden, Modern Algebra (English edition), i. 47.
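Theorem 1.5.2, and the identity DD* = D^n on which its proof rests, can be confirmed numerically on a small example. This is a sketch with our own helper names; the matrix is an arbitrary choice:

```python
def det(a):
    """Cofactor expansion along the first row (Theorem 1.4.1)."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k] * det([r[:k] + r[k + 1:] for r in a[1:]])
               for k in range(len(a)))

def cofactor(a, i, j):
    """A_ij, with 0-based indices."""
    return (-1) ** (i + j) * det([r[:j] + r[j + 1:] for k, r in enumerate(a) if k != i])

A = [[1, 2, 0], [3, -1, 4], [2, 2, 5]]
n = len(A)
D = det(A)
adj = [[cofactor(A, i, j) for j in range(n)] for i in range(n)]  # elements A_ij

# Theorem 1.5.2: D* = D^(n-1).
assert det(adj) == D ** (n - 1)

# The rows-by-rows product used in the proof: sum_k a_ik A_jk = delta_ij * D.
for i in range(n):
    for j in range(n):
        assert sum(A[i][k] * adj[j][k] for k in range(n)) == (D if i == j else 0)
```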
Thus

    DD* = | Σ_k a_ik A_jk |_n = | δ_ij D |_n = | D  0  ...  0 |
                                               | 0  D  ...  0 |
                                               | ...          |
                                               | 0  0  ...  D |

and therefore

    DD* = D^n.      (1.5.1)

If now D ≠ 0, then, dividing both sides of (1.5.1) by D, we obtain the required result. If, however, D = 0, this obvious device fails, and we have recourse to Theorem 1.5.1. Let us regard D as a polynomial in its n² elements. The adjugate determinant D* is then a polynomial in the same n² elements, and (1.5.1) is a polynomial identity. But D is not an identically vanishing polynomial and so, by (1.5.1) and Theorem 1.5.1 (with f = D, g = D*, h = D^{n-1}), we obtain the required result.†

Our next result, the main result of the present section, was discovered by Jacobi in 1833.

THEOREM 1.5.3. (Jacobi's theorem.) If M is a k-rowed minor of a determinant D, M* the corresponding minor of the adjugate determinant D*, and M̄ the cofactor of M in D, then

    M* = D^{k-1} M̄.      (1.5.2)

Before proving this formula we point out a few special cases. The order of D is, as usual, denoted by n. (i) If k = 1, then (1.5.2) simply reduces to the definition of cofactors of elements of a determinant. (ii) If k = n, then (1.5.2) reduces to Theorem 1.5.2. (iii) For k = n-1 the formula (1.5.2) states that if D = |a_ij|_n, D* = |A_ij|_n, then the cofactor of A_rs in D* is equal to D^{n-2} a_rs. (iv) For k = 2, (1.5.2) implies that if D = 0, then every 2-rowed minor of D* vanishes.

To prove (1.5.2) we first consider the special case when M is situated in the top left-hand corner of D, so that

    M = | a_11  ...  a_1k |    M̄ = | a_{k+1,k+1}  ...  a_{k+1,n} |    M* = | A_11  ...  A_1k |
        | ...             |        | ...                         |         | ...             |
        | a_k1  ...  a_kk |        | a_{n,k+1}    ...  a_nn      |         | A_k1  ...  A_kk |

† Alternative proofs which do not depend on Theorem 1.5.1 will be found in § 1.6.3 and § 3.5.
Multiplying determinants rows by rows and using Theorem 1.4.3 (p. 20), we obtain

        | A_11  ...  A_1k  0  ...  0 |   | D  ...  0  a_{1,k+1}    ...  a_1n      |
        | ...                        |   | ...                                    |
    D × | A_k1  ...  A_kk  0  ...  0 | = | 0  ...  D  a_{k,k+1}    ...  a_kn      |
        | 0    ...   0     1  ...  0 |   | 0  ...  0  a_{k+1,k+1}  ...  a_{k+1,n} |
        | ...                        |   | ...                                    |
        | 0    ...   0     0  ...  1 |   | 0  ...  0  a_{n,k+1}    ...  a_nn      |

Now, by Laplace's expansion theorem, the second determinant on the left is equal to M*, while the determinant on the right is equal to D^k M̄. Thus

    D M* = D^k M̄.

Since this is a polynomial identity in the n² elements of D, and since D does not vanish identically, it follows by Theorem 1.5.1 that (1.5.2) is valid for the special case under consideration.

We next turn to the general case, and suppose the minor M to consist of those elements of D which belong to the rows with suffixes r_1, ..., r_k and to the columns with suffixes s_1, ..., s_k (where r_1 < ... < r_k and s_1 < ... < s_k). We write

    t = r_1 + ... + r_k + s_1 + ... + s_k.

Our aim is to reduce the general case to the special case considered above by rearranging the rows and columns of D in such a way that the minor M is moved to the top left-hand corner, while the relative order of the rows and columns not involved in M remains unchanged. We denote the new determinant thus obtained by 𝒟, the k-rowed minor in its top left-hand corner by ℳ, the cofactor of ℳ in 𝒟 by ℳ̄, and the k-rowed minor in the top left-hand corner of the adjugate determinant 𝒟* by ℳ*. In view of the special case already discussed we then have

    ℳ* = 𝒟^{k-1} ℳ̄.      (1.5.3)

Now obviously ℳ = M; and, by Exercise 1.2.2 (p. 9),

    𝒟 = (-1)^t D.      (1.5.4)

It is, moreover, clear that

    ℳ̄ = (-1)^t M̄.      (1.5.5)

In view of Exercise 1.4.1 (p. 14) it follows easily that the cofactor of a_ij in 𝒟 is equal to (-1)^t A_ij.† Hence

    ℳ* = (-1)^{tk} M*,      (1.5.6)

and we complete the proof of the theorem by substituting (1.5.4), (1.5.5), and (1.5.6) in (1.5.3).

† It must, of course, be remembered that a_ij does not necessarily stand in the i-th row and j-th column of 𝒟.

EXERCISE 1.5.1. Let A, B, C, F, G, H be the cofactors of the elements a, b, c, f, g, h in the determinant

    Δ = | a  h  g |
        | h  b  f |
        | g  f  c |

Show that aA + hH + gG = Δ and aH + hB + gF = 0, and also that the cofactors of the elements A, H, G, ... in the determinant

    | A  H  G |
    | H  B  F |
    | G  F  C |

are equal to aΔ, hΔ, gΔ, ... respectively.

1.6. Two special theorems on linear equations

We shall next prove two special theorems on linear equations and derive some of their consequences. The second theorem is needed for establishing the basis theorems (Theorems 2.3.2 and 2.3.3) in the next chapter. In touching on the subject of linear equations we do not at present seek to develop a general theory, a task which we defer till Chapter V.

1.6.1. THEOREM 1.6.1. Let n ≥ 1, and let D = |a_ij|_n be a given determinant. Then a necessary and sufficient condition for the existence of numbers t_1, ..., t_n, not all zero, satisfying the equations

    a_11 t_1 + a_12 t_2 + ... + a_1n t_n = 0,
    ...                                          (1.6.1)
    a_n1 t_1 + a_n2 t_2 + ... + a_nn t_n = 0,

is

    D = 0.      (1.6.2)

The sufficiency of the stated condition is established by induction with respect to n. For n = 1 the assertion is true trivially. Suppose that it holds for n-1, where n ≥ 2; we shall then show that it also holds for n.
Let (1.6.2) be satisfied. If a_11 = a_21 = ... = a_n1 = 0, then (1.6.1) is satisfied by t_1 = 1, t_2 = ... = t_n = 0, and the required assertion is seen to hold. If, on the other hand, the numbers a_11, ..., a_n1 do not all vanish, we may assume, without loss of generality, that a_11 ≠ 0. In that case we subtract, for i = 2, ..., n, a_i1/a_11 times the first row from the i-th row in D and obtain

    0 = D = | a_11  a_12  ...  a_1n |          | b_22  ...  b_2n |
            |  0    b_22  ...  b_2n |  = a_11  | ...             |
            | ...                   |          | b_n2  ...  b_nn |
            |  0    b_n2  ...  b_nn |

where

    b_ij = a_ij - (a_i1/a_11) a_1j      (i, j = 2, ..., n).

Hence the (n-1)-rowed determinant |b_ij| vanishes, and so, by the induction hypothesis, there exist numbers t_2, ..., t_n, not all zero, such that

    Σ_{j=2}^n (a_ij - (a_i1/a_11) a_1j) t_j = 0      (i = 2, ..., n).      (1.6.3)

Let t_1 now be defined by the equation

    t_1 = -(1/a_11) Σ_{j=2}^n a_1j t_j,      (1.6.4)

so that

    a_11 t_1 + a_12 t_2 + ... + a_1n t_n = 0.      (1.6.5)

By (1.6.3) and (1.6.4) we have

    a_i1 t_1 + a_i2 t_2 + ... + a_in t_n = 0      (i = 2, ..., n),      (1.6.6)

and (1.6.5) and (1.6.6) are together equivalent to (1.6.1). The sufficiency of (1.6.2) is therefore established.

To prove the necessity of (1.6.2) we again argue by induction. We have to show that if D ≠ 0 and the numbers t_1, ..., t_n satisfy (1.6.1), then t_1 = ... = t_n = 0. For n = 1 this assertion is true trivially. Suppose, next, that it holds for n-1, where n ≥ 2. The numbers a_11, ..., a_n1 are not all zero (since D ≠ 0), and we may, therefore, assume that a_11 ≠ 0. If t_1, ..., t_n satisfy (1.6.1), then (1.6.4) holds and therefore so does (1.6.3). But

    |b_ij| = D/a_11 ≠ 0.

Hence, by (1.6.3) and the induction hypothesis, t_2 = ... = t_n = 0. It follows, by (1.6.4), that t_1 = 0; and the proof is therefore complete.†

An alternative proof of the necessity of condition (1.6.2) can be based on Theorem 1.4.3 (p. 20). Suppose that there exist numbers t_1, ..., t_n, not all zero, satisfying (1.6.1), i.e.

    Σ_{j=1}^n a_ij t_j = 0      (i = 1, ..., n).

Denoting by A_ik the cofactor of a_ik in D, we therefore have

    Σ_{i=1}^n A_ik Σ_{j=1}^n a_ij t_j = 0      (k = 1, ..., n),

i.e.

    Σ_{j=1}^n t_j Σ_{i=1}^n a_ij A_ik = 0      (k = 1, ..., n).

Hence, by Theorem 1.4.3,

    Σ_{j=1}^n t_j δ_jk D = 0,  i.e.  t_k D = 0      (k = 1, ..., n).

But, by hypothesis, t_1, ..., t_n are not all equal to zero; and therefore D = 0.

An obvious but useful consequence of Theorem 1.6.1 is as follows.

THEOREM 1.6.2. Let a_ij (i = 1, ..., n-1; j = 1, ..., n) be given numbers, where n ≥ 2. Then there exists at least one set of numbers t_1, ..., t_n, not all zero, such that

    a_11 t_1 + ... + a_1n t_n = 0,
    ...                                          (1.6.7)
    a_{n-1,1} t_1 + ... + a_{n-1,n} t_n = 0.

To the n-1 equations comprising (1.6.7) we add the equation

    0·t_1 + 0·t_2 + ... + 0·t_n = 0,

† The reader should note that the proof just given depends essentially on the elementary device of reducing the number of unknowns from n to n-1 by the elimination of t_1.
which does not, of course, affect the choice of permissible sets of numbers t_1, ..., t_n. Since

    | a_11       ...  a_1n      |
    | ...                       |
    | a_{n-1,1}  ...  a_{n-1,n} |  = 0,
    | 0          ...  0         |

it follows by the previous theorem that there exist values of t_1, ..., t_n, not all zero, which satisfy (1.6.7).

It is interesting to observe that we can easily give a direct proof of Theorem 1.6.2, without appealing to the theory of determinants, by using essentially the same argument as in the proof of Theorem 1.6.1. For n = 2 the assertion is obviously true. Assume that it holds for n-1, where n ≥ 3. If now a_11 = ... = a_{n-1,1} = 0, then the equations (1.6.7) are satisfied by t_1 = 1, t_2 = ... = t_n = 0. If, however, a_11, ..., a_{n-1,1} do not all vanish, we may assume that a_11 ≠ 0. In that case we consider the equations

    a_11 t_1 + a_12 t_2 + ... + a_1n t_n = 0,
    b_22 t_2 + ... + b_2n t_n = 0,
    ...                                          (1.6.8)
    b_{n-1,2} t_2 + ... + b_{n-1,n} t_n = 0,

where the b_ij are defined as in the proof of Theorem 1.6.1. By the induction hypothesis there exist values of t_2, ..., t_n, not all 0, satisfying the last n-2 equations in (1.6.8); and, with a suitable choice of t_1, the first equation can be satisfied, too. But the values of t_1, ..., t_n which satisfy (1.6.8) also satisfy (1.6.7), and the theorem is therefore proved.

EXERCISE 1.6.1. Let 1 ≤ m < n, and let a_ij (i = 1, ..., m; j = 1, ..., n) be given numbers. Show that there exist numbers t_1, ..., t_n, not all 0, such that

    a_11 t_1 + ... + a_1n t_n = 0,
    ...
    a_m1 t_1 + ... + a_mn t_n = 0.

1.6.2. As a first application of Theorem 1.6.1 we shall prove a well-known result on polynomials, which will be useful in later chapters.

THEOREM 1.6.3. If the polynomial

    f(x) = c_0 x^n + c_1 x^{n-1} + ... + c_{n-1} x + c_n

vanishes for n+1 distinct values of x, then it vanishes identically.
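Theorem 1.6.1 lends itself to a concrete illustration: a determinant that vanishes because its rows are linearly dependent, together with an explicit non-trivial solution of the system (1.6.1). The matrix and the solution vector below are our own choices:

```python
def det(a):
    """Cofactor expansion along the first row (Theorem 1.4.1)."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k] * det([r[:k] + r[k + 1:] for r in a[1:]])
               for k in range(len(a)))

# Here row 3 = 2*row 2 - row 1, so D = 0 ...
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert det(A) == 0

# ... and, as Theorem 1.6.1 guarantees, the homogeneous system (1.6.1) has a
# non-trivial solution; t = (1, -2, 1) is one such:
t = [1, -2, 1]
assert all(sum(A[i][j] * t[j] for j in range(3)) == 0 for i in range(3))

# When D != 0 the necessity half of the theorem says the trivial solution is
# the only one; perturbing a single entry of A already rules out this t:
B = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
assert det(B) != 0
assert any(sum(B[i][j] * t[j] for j in range(3)) != 0 for i in range(3))
```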