Jim Hefferon
Linear Algebra
http://joshua.smcvt.edu/linearalgebra
Chapter One: Linear Systems

I Solving Linear Systems

Systems of linear equations are common in science and mathematics. These ...

must equal the number present afterward. Applying that in turn to the elements C, H, N, and O ...

1.4 Example To solve this system

    3x3 = 9
    x1 + 5x2 − 2x3 = 2
    (1/3)x1 + 2x2 = 3

we transform it ...
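The reduction in Example 1.4 can be carried out mechanically. Here is a minimal Python sketch (an illustration, not part of the text) of Gauss's Method with back-substitution, using exact rational arithmetic so that coefficients such as 1/3 cause no rounding; it assumes the system has a unique solution.

```python
from fractions import Fraction as F

def gauss_solve(A, b):
    """Solve A x = b by Gauss's Method plus back-substitution.
    Assumes a unique solution exists (a nonzero pivot is found in each column)."""
    n = len(A)
    # build the augmented matrix with exact rational entries
    M = [[F(x) for x in row] + [F(c)] for row, c in zip(A, b)]
    for col in range(n):
        # swap up a row with a nonzero entry in this column
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):            # eliminate below the pivot
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * p for a, p in zip(M[r], M[col])]
    x = [F(0)] * n
    for r in range(n - 1, -1, -1):             # back-substitute
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Example 1.4:  3z = 9,  x + 5y - 2z = 2,  (1/3)x + 2y = 3
A = [[0, 0, 3], [1, 5, -2], [F(1, 3), 2, 0]]
b = [9, 2, 3]
print(gauss_solve(A, b))  # [Fraction(3, 1), Fraction(1, 1), Fraction(3, 1)]
```

The first step swaps rows so the leading column has a nonzero pivot, exactly the row-swap operation the text allows.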
Each of the three Gauss's Method operations has a restriction. Multiplying a row by 0 is not a...

the third row.

    −ρ2+ρ3−→    x + y = 0
                −3y + 3z = 3
                −4z = 0

Now the system's solution is easy to ...

(Had there been more than one suitable row below the second then we could have swapped in any ...

Here the system is inconsistent: no pair of numbers satisfies all of the equations simult...

any solutions at all despite that in echelon form it has a ‘0 = 0’ row.

    2x − 2z = 6
    y + z = 1
    2x...
than Gauss's Method, since it involves more arithmetic operations, and is also more like...

(b) Can we derive the equation 5x − 3y = 2 with a sequence of Gaussian reduction steps from the...

then found that 14 members preferred B over C! The Board has not yet recovered from the...

Compared with (∗), the advantage of (∗∗) is that z can be any real number. This makes the job...

(The terms ‘parameter’ and ‘free variable’ do not mean the same thing. In the prior exa...

Matrices occur throughout this book. We shall use Mn×m to denote the collection of n×m matric...

arrow: a, b, . . . or α, β, . . . (boldface is also common: a or α). For instance, this...

2.12 Example

    (2, 3, 1) + (3, −1, 4) = (2 + 3, 3 − 1, 1 + 4) = (5, 2, 5)        7 · (1, 4, −1, −3) = ...
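The componentwise operations in Example 2.12 are easy to express in code. This short Python sketch (an illustration, not from the text) mirrors them, representing vectors as lists of components:

```python
def vec_add(u, v):
    """Componentwise vector addition; u and v must have the same length."""
    return [a + b for a, b in zip(u, v)]

def scalar_mult(k, v):
    """Multiply every component of v by the scalar k."""
    return [k * a for a in v]

print(vec_add([2, 3, 1], [3, -1, 4]))   # [5, 2, 5]
print(scalar_mult(7, [1, 4, -1, -3]))   # [7, 28, -7, -21]
```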
2.14 Example In the same way, this system

    x − y + z = 1
    3x + z = 3
    5x − 2y + 3z = 5

reduce...

the geometry of solution sets. After that, in this chapter's final section, we will settle the...

2.21 Decide if the vector is in the set.
    (a) (3, −1),  { k·(−6, 2) | k ∈ R }
    (b) (5, 4),  { j·(5, −4) | j ∈ R }
    (c) (2, 1...
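For exercises like 2.21, membership in a set of the form { k·v | k ∈ R } comes down to checking whether the given vector is a scalar multiple of v. A hypothetical Python helper (the name and approach are my own, not the text's) that decides this with exact arithmetic:

```python
from fractions import Fraction as F

def in_span_of_one(w, v):
    """Decide whether w is in { k*v : k in R }, i.e. a scalar multiple of v."""
    w = [F(x) for x in w]
    v = [F(x) for x in v]
    nz = [i for i, a in enumerate(v) if a != 0]
    if not nz:                      # v is the zero vector, so the set is just {0}
        return all(a == 0 for a in w)
    k = w[nz[0]] / v[nz[0]]         # the only candidate scalar
    return all(a == k * b for a, b in zip(w, v))

print(in_span_of_one([3, -1], [-6, 2]))   # True, with k = -1/2
print(in_span_of_one([5, 4], [5, -4]))    # False
```

The candidate scalar is forced by the first nonzero component of v; the rest of the components either agree with it or rule membership out.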
2.30 Make up a four equations/four unknowns system having (a) a one-parameter solution set; (b...

The combination is unrestricted in that w and u can be any real numbers — there is no c...

Obviously the two reductions go in the same way. We can study how to reduce a linear system ...

We will make two points before the proof. The first point is that the basic idea of the...

is trivially true) but because the system is homogeneous we cannot get any contradictory equa...
3.7 Lemma For a linear system, where p is any particular solution, the solution set equ...
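Lemma 3.7 says the solution set has the form { p + h | h solves the associated homogeneous system }. A quick numerical illustration in Python on a small made-up system (the system, the particular solution, and the helper `residual` are my own, not from the text):

```python
def residual(A, b, x):
    """b - A x, componentwise; all zeros exactly when x solves the system."""
    return [bi - sum(a * xj for a, xj in zip(row, x)) for row, bi in zip(A, b)]

# A hypothetical system:  x + y + z = 3,  x - y = 0
A = [[1, 1, 1], [1, -1, 0]]
b = [3, 0]
p = [1, 1, 1]     # one particular solution
h = [1, 1, -2]    # a solution of the associated homogeneous system A x = 0
assert residual(A, b, p) == [0, 0]
assert residual(A, [0, 0], h) == [0, 0]
# p + h is again a solution of the original system, as Lemma 3.7 promises
s = [pi + hi for pi, hi in zip(p, h)]
print(residual(A, b, s))  # [0, 0]
```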
That single vector is obviously a particular solution. The associated homogeneous system redu...

s, t ∈ R are unequal then sv ≠ tv because sv − tv = (s − t)v is non-0, since any non-0 ...

We have made the distinction in the definition because a system with the same number of equa...

with the same left sides but different right sides. The first has a solution while the se...

3.16 For the system

    2x − y − w = 3
    y + z + 2w = 2
    x − 2y − z = −1

which of these can be used as ...

(a) s + t  (b) 3s  (c) ks + mt for k, m ∈ R
What's wrong with this argument: “These three...
II Linear Geometry

If you have seen the elements of vectors before then this section is an opt...

An object comprised of a magnitude and a direction is a vector (we use the same word as in th...

vector (v1, v2) is in canonical position then it extends to the endpoint (v1, v2). We typically jus...

The above drawings show how vectors and vector operations behave in R2. We can extend to R3, ...

and lines in even higher-dimensional spaces work in the same way. In R3, a line uses one para...

is a line,

    { (0, 0, 0, 0) + t·(1, 1, 0, −1) + s·(2, 0, 1, 0) | t, s ∈ R }

is a plane, and

    { (3, 1...

(b) the vector from (3, 3) to (2, 5) in R2  (c) the vector from (1, 0, 6) to (5, 0, 3) in R3  (d...

II.2 Length and Angle Measures

We've translated the first section's results about solution set...

is the angle between the vectors. The left side gives

    (u1 − v1)² + (u2 − v2)² + (u3 − v3)² = (u2...
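The derivation sketched on this page leads to the familiar formula cos θ = u·v / (|u| |v|). A small Python sketch (illustrative only, not the text's notation) computing lengths and angles that way:

```python
import math

def dot(u, v):
    """Dot product of two same-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def length(v):
    """Euclidean length |v| = sqrt(v . v)."""
    return math.sqrt(dot(v, v))

def angle(u, v):
    """Angle in radians between nonzero vectors, via cos(theta) = u.v/(|u||v|)."""
    return math.acos(dot(u, v) / (length(u) * length(v)))

print(angle([1, 0], [0, 1]))   # pi/2: perpendicular axes
print(angle([1, 1], [1, 0]))   # pi/4: the diagonal against the x-axis
```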
Transcript of "Jim hefferon linear algebra"

Jim Hefferon
Linear Algebra
http://joshua.smcvt.edu/linearalgebra
Notation

R, R+, Rn: real numbers, reals greater than 0, n-tuples of reals
N, C: natural numbers {0, 1, 2, . . .}, complex numbers
(a .. b), [a .. b]: interval (open, closed) of reals between a and b
⟨. . .⟩: sequence; like a set but order matters
V, W, U: vector spaces
v, w, 0, 0V: vectors, zero vector, zero vector of V
B, D, β, δ: bases, basis vectors
En = ⟨e1, . . . , en⟩: standard basis for Rn
RepB(v): matrix representing the vector
Pn: set of degree n polynomials
Mn×m: set of n×m matrices
[S]: span of the set S
M ⊕ N: direct sum of subspaces
V ≅ W: isomorphic spaces
h, g: homomorphisms, linear maps
H, G: matrices
t, s: transformations; maps from a space to itself
T, S: square matrices
RepB,D(h): matrix representing the map h
hi,j: matrix entry from row i, column j
Zn×m, Z, In×n, I: zero matrix, identity matrix
|T|: determinant of the matrix T
R(h), N(h): range space and null space of the map h
R∞(h), N∞(h): generalized range space and null space

Lower case Greek alphabet, with pronunciation:
α alpha (AL-fuh), β beta (BAY-tuh), γ gamma (GAM-muh), δ delta (DEL-tuh), ε epsilon (EP-suh-lon), ζ zeta (ZAY-tuh), η eta (AY-tuh), θ theta (THAY-tuh), ι iota (eye-OH-tuh), κ kappa (KAP-uh), λ lambda (LAM-duh), µ mu (MEW), ν nu (NEW), ξ xi (KSIGH), o omicron (OM-uh-CRON), π pi (PIE), ρ rho (ROW), σ sigma (SIG-muh), τ tau (TOW, as in cow), υ upsilon (OOP-suh-LON), φ phi (FEE, or FI as in hi), χ chi (KI as in hi), ψ psi (SIGH, or PSIGH), ω omega (oh-MAY-guh)
Preface

This book helps students to master the material of a standard US undergraduate first course in Linear Algebra.

The material is standard in that the subjects covered are Gaussian reduction, vector spaces, linear maps, determinants, and eigenvalues and eigenvectors. Another standard is the book's audience: sophomores or juniors, usually with a background of at least one semester of calculus. The help that it gives to students comes from taking a developmental approach — this book's presentation emphasizes motivation and naturalness, using many examples as well as extensive and careful exercises.

The developmental approach is what most recommends this book so I will elaborate. Courses at the beginning of a mathematics program focus less on theory and more on calculating. Later courses ask for mathematical maturity: the ability to follow different types of arguments, a familiarity with the themes that underlie many mathematical investigations such as elementary set and function facts, and a capacity for some independent reading and thinking. Some programs have a separate course devoted to developing maturity and some do not. In either case, a Linear Algebra course is an ideal spot to work on this transition. It comes early in a program so that progress made here pays off later but also comes late enough that students are serious about mathematics. The material is accessible, coherent, and elegant. There are a variety of argument styles, including direct proofs, proofs by contradiction, and proofs by induction.
And, examples are plentiful.

Helping readers start the transition to being serious students of mathematics requires taking the mathematics seriously so all of the results here are proved. On the other hand, we cannot assume that students have already arrived and so in contrast with more advanced texts this book is filled with examples, often quite detailed.

Some books that assume a not-yet-sophisticated reader begin with extensive computations of linear systems, matrix multiplications, and determinants. Then, when vector spaces and linear maps finally appear and definitions and proofs start, the abrupt change can bring students to an abrupt stop. While this book
begins with linear reduction, from the start we do more than compute. The first chapter includes proofs showing that linear reduction gives a correct and complete solution set. Then, with the linear systems work as motivation so that the study of linear combinations is natural, the second chapter starts with the definition of a real vector space. In the schedule below this happens at the start of the third week.

Another example of this book's emphasis on motivation and naturalness is that the third chapter on linear maps does not begin with the definition of homomorphism. Instead it begins with the definition of isomorphism, which is natural: students themselves observe that some spaces are “the same” as others. After that, the next section takes the reasonable step of isolating the operation-preservation idea to define homomorphism. This approach loses mathematical slickness but it is a good trade because it gives to students a large gain in sensibility.

A student progresses most in mathematics while doing exercises. In this book problem sets start with simple checks and range up to reasonably involved proofs. Since instructors usually assign about a dozen exercises I have tried to put two dozen in each set, thereby giving a selection. There are even a few that are puzzles taken from various journals, competitions, or problems collections. These are marked with a ‘?’ and as part of the fun I have retained the original wording as much as possible.

That is, as with the rest of the book the exercises are aimed to both build an ability at, and help students experience the pleasure of, doing mathematics. Students should see how the ideas arise and should be able to picture themselves doing the same type of work.

Applications and computers. The point of view taken here, that students should think of Linear Algebra as about vector spaces and linear maps, is not taken to the complete exclusion of others. Applications and computing are interesting and vital aspects of the subject.
Consequently each of this book's chapters closes with a few topics in those areas. They are brief enough that an instructor can do one in a day's class or can assign them as independent or small-group projects. Most simply give a reader a taste of the subject, discuss how Linear Algebra comes in, point to some further reading, and give a few exercises. Whether they figure formally in a course or not these help readers see for themselves that Linear Algebra is a tool that a professional must master.

Availability. This book is freely available. In particular, instructors can print copies for students and sell them out of a college bookstore. See http://joshua.smcvt.edu/linearalgebra for the license details. That page also has the latest version, exercise answers, beamer slides, and LaTeX source.

A text is a large and complex project. One of the lessons of software development is that such a project will have errors. I welcome bug reports and I periodically issue revisions. My contact information is on the web page.
Acknowledgements. I am grateful to all of the people who have helped with this work, from class-testers and adopters to people who sent bug reports. I am also very grateful to Saint Michael's College, for supporting my work on this text over many years.

If you are reading this on your own. This book's emphasis on motivation and development, and its availability, make it widely used for self-study. If you are an independent student then good for you; I admire your industry. However, you may find some advice useful.

While an experienced instructor knows what subjects and pace suit their class, a suggested semester's timetable may help you estimate how long sections typically take to cover. This one was graciously shared by George Ashline.

week  Monday         Wednesday      Friday
1     One.I.1        One.I.1, 2     One.I.2, 3
2     One.I.3        One.III.1      One.III.2
3     Two.I.1        Two.I.1, 2     Two.I.2
4     Two.II.1       Two.III.1      Two.III.2
5     Two.III.2      Two.III.2, 3   Two.III.3
6     exam           Three.I.1      Three.I.1
7     Three.I.2      Three.I.2      Three.II.1
8     Three.II.1     Three.II.2     Three.II.2
9     Three.III.1    Three.III.2    Three.IV.1, 2
10    Three.IV.2, 3  Three.IV.4     Three.V.1
11    Three.V.1      Three.V.2      Four.I.1
12    exam           Four.I.2       Four.III.1
13    Five.II.1      –Thanksgiving break–
14    Five.II.1, 2   Five.II.2      Five.II.3

This schedule supposes that you already know Section One.II, the elements of vectors. In the above course, in addition to the shown exams and to the final exam that is not shown, students must do take-home problem sets that include proofs. That is, the computations are important but so are the proofs.

In the table of contents I have marked subsections as optional if some instructors will pass over them in favor of spending more time elsewhere. You might pick one or two topics that appeal to you from the end of each chapter. You'll get more from these if you have access to software for calculations. I recommend Sage, freely available from http://sagemath.org.

My main advice is: do many exercises. I have marked a good sample in the margin.
Do not simply read some answers — you must actually try the problems and quite possibly struggle with some of them. For all of the exercises you must justify your answer either with a computation or with a proof. Be aware that few people can write correct proofs without training. Try to find a knowledgeable person to work with you on these.
Finally, a caution for all students, independent or not: I cannot overemphasize that the statement, “I understand the material but it is only that I have trouble with the problems” shows a misconception. Being able to do things with the ideas is their entire point. The quotes below express this sentiment admirably. They capture the essence of both the beauty and the power of mathematics and science in general, and of Linear Algebra in particular. (I took the liberty of formatting them as poetry.)

I know of no better tactic
than the illustration of exciting principles
by well-chosen particulars.
    – Stephen Jay Gould

If you really wish to learn
then you must mount the machine
and become acquainted with its tricks
by actual trial.
    – Wilbur Wright

Jim Hefferon
Mathematics, Saint Michael's College
Colchester, Vermont USA 05439
http://joshua.smcvt.edu/linearalgebra
2012-Feb-29

Author's Note. Inventing a good exercise, one that enlightens as well as tests, is a creative act and hard work. The inventor deserves recognition. But texts have traditionally not given attributions for questions. I have changed that here where I was sure of the source. I would be glad to hear from anyone who can help me to correctly attribute others of the questions.
  7. 7. ContentsChapter One: Linear SystemsI Solving Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . 1I.1 Gauss’s Method . . . . . . . . . . . . . . . . . . . . . . . . . 2I.2 Describing the Solution Set . . . . . . . . . . . . . . . . . . . 11I.3 General = Particular + Homogeneous . . . . . . . . . . . . . . 20II Linear Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . 32II.1 Vectors in Space* . . . . . . . . . . . . . . . . . . . . . . . . 32II.2 Length and Angle Measures* . . . . . . . . . . . . . . . . . . 39III Reduced Echelon Form . . . . . . . . . . . . . . . . . . . . . . . . 46III.1 Gauss-Jordan Reduction . . . . . . . . . . . . . . . . . . . . . 46III.2 The Linear Combination Lemma . . . . . . . . . . . . . . . . 51Topic: Computer Algebra Systems . . . . . . . . . . . . . . . . . . . 59Topic: Input-Output Analysis . . . . . . . . . . . . . . . . . . . . . . 61Topic: Accuracy of Computations . . . . . . . . . . . . . . . . . . . . 65Topic: Analyzing Networks . . . . . . . . . . . . . . . . . . . . . . . . 69Chapter Two: Vector SpacesI Definition of Vector Space . . . . . . . . . . . . . . . . . . . . . . 76I.1 Definition and Examples . . . . . . . . . . . . . . . . . . . . 76I.2 Subspaces and Spanning Sets . . . . . . . . . . . . . . . . . . 87II Linear Independence . . . . . . . . . . . . . . . . . . . . . . . . . 97II.1 Definition and Examples . . . . . . . . . . . . . . . . . . . . 97III Basis and Dimension . . . . . . . . . . . . . . . . . . . . . . . . . 109III.1 Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109III.2 Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115III.3 Vector Spaces and Linear Systems . . . . . . . . . . . . . . . 121III.4 Combining Subspaces* . . . . . . . . . . . . . . . . . . . . . . 128Topic: Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137Topic: Crystals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
139Topic: Voting Paradoxes . . . . . . . . . . . . . . . . . . . . . . . . . 143Topic: Dimensional Analysis . . . . . . . . . . . . . . . . . . . . . . . 149
Chapter Three: Maps Between Spaces
I Isomorphisms . . . 157
I.1 Definition and Examples . . . 157
I.2 Dimension Characterizes Isomorphism . . . 166
II Homomorphisms . . . 173
II.1 Definition . . . 173
II.2 Range space and Null space . . . 180
III Computing Linear Maps . . . 191
III.1 Representing Linear Maps with Matrices . . . 191
III.2 Any Matrix Represents a Linear Map* . . . 201
IV Matrix Operations . . . 209
IV.1 Sums and Scalar Products . . . 209
IV.2 Matrix Multiplication . . . 212
IV.3 Mechanics of Matrix Multiplication . . . 220
IV.4 Inverses . . . 229
V Change of Basis . . . 237
V.1 Changing Representations of Vectors . . . 237
V.2 Changing Map Representations . . . 241
VI Projection . . . 249
VI.1 Orthogonal Projection Into a Line* . . . 249
VI.2 Gram-Schmidt Orthogonalization* . . . 253
VI.3 Projection Into a Subspace* . . . 258
Topic: Line of Best Fit . . . 267
Topic: Geometry of Linear Maps . . . 272
Topic: Magic Squares . . . 278
Topic: Markov Chains . . . 283
Topic: Orthonormal Matrices . . . 289

Chapter Four: Determinants
I Definition . . . 296
I.1 Exploration* . . . 296
I.2 Properties of Determinants . . . 301
I.3 The Permutation Expansion . . . 305
I.4 Determinants Exist* . . . 313
II Geometry of Determinants . . . 320
II.1 Determinants as Size Functions . . . 320
III Laplace's Expansion . . . 326
III.1 Laplace's Expansion Formula* . . . 326
Topic: Cramer's Rule . . . 331
Topic: Speed of Calculating Determinants . . . 334
Topic: Chiò's Method . . . 337
Topic: Projective Geometry . . . 341
Chapter Five: Similarity
I Complex Vector Spaces . . . 353
I.1 Review of Factoring and Complex Numbers* . . . 354
I.2 Complex Representations . . . 356
II Similarity . . . 358
II.1 Definition and Examples . . . 358
II.2 Diagonalizability . . . 360
II.3 Eigenvalues and Eigenvectors . . . 364
III Nilpotence . . . 373
III.1 Self-Composition* . . . 373
III.2 Strings* . . . 377
IV Jordan Form . . . 388
IV.1 Polynomials of Maps and Matrices* . . . 388
IV.2 Jordan Canonical Form* . . . 396
Topic: Method of Powers . . . 410
Topic: Stable Populations . . . 414
Topic: Page Ranking . . . 416
Topic: Linear Recurrences . . . 420

Appendix
Propositions . . . A-1
Quantifiers . . . A-3
Techniques of Proof . . . A-4
Sets, Functions, and Relations . . . A-6

* Starred subsections are optional.
Chapter One
Linear Systems

I Solving Linear Systems

Systems of linear equations are common in science and mathematics. These two examples from high school science [Onan] give a sense of how they arise.

The first example is from Statics. Suppose that we have three objects, one with a mass known to be 2 kg, and we want to find the unknown masses. Suppose further that experimentation with a meter stick produces these two balances.

[figure: two meter-stick balance diagrams with the masses c, h, and the 2 kg object placed at distances 15, 40, and 50, and at 25, 25, and 50, from the balance points]

For the masses to balance we must have that the sum of moments on the left equals the sum of moments on the right, where the moment of an object is its mass times its distance from the balance point. That gives a system of two equations.

40h + 15c = 100
25c = 50 + 50h

The second example of a linear system is from Chemistry. We can mix, under controlled conditions, toluene C7H8 and nitric acid HNO3 to produce trinitrotoluene C7H5O6N3 along with the byproduct water (conditions have to be very well controlled — trinitrotoluene is better known as TNT). In what proportion should we mix them? The number of atoms of each element present before the reaction

x C7H8 + y HNO3 −→ z C7H5O6N3 + w H2O
must equal the number present afterward. Applying that in turn to the elements C, H, N, and O gives this system.

7x = 7z
8x + 1y = 5z + 2w
1y = 3z
3y = 6z + 1w

Both examples come down to solving a system of equations. In each system, the equations involve only the first power of each variable. This chapter shows how to solve any such system.

I.1 Gauss's Method

1.1 Definition A linear combination of x1, . . . , xn has the form

a1x1 + a2x2 + a3x3 + · · · + anxn

where the numbers a1, . . . , an ∈ R are the combination's coefficients. A linear equation in the variables x1, . . . , xn has the form a1x1 + a2x2 + a3x3 + · · · + anxn = d where d ∈ R is the constant.

An n-tuple (s1, s2, . . . , sn) ∈ Rⁿ is a solution of, or satisfies, that equation if substituting the numbers s1, . . . , sn for the variables gives a true statement: a1s1 + a2s2 + · · · + ansn = d. A system of linear equations

a1,1x1 + a1,2x2 + · · · + a1,nxn = d1
a2,1x1 + a2,2x2 + · · · + a2,nxn = d2
...
am,1x1 + am,2x2 + · · · + am,nxn = dm

has the solution (s1, s2, . . . , sn) if that n-tuple is a solution of all of the equations in the system.

1.2 Example The combination 3x1 + 2x2 of x1 and x2 is linear. The combination 3x1² + 2 sin(x2) is not linear, nor is 3x1² + 2x2.

1.3 Example The ordered pair (−1, 5) is a solution of this system.

3x1 + 2x2 = 7
−x1 + x2 = 6

In contrast, (5, −1) is not a solution.

Finding the set of all solutions is solving the system. We don't need guesswork or good luck; there is an algorithm that always works. This algorithm is Gauss's Method (or Gaussian elimination or linear elimination).
1.4 Example To solve this system

3x3 = 9
x1 + 5x2 − 2x3 = 2
(1/3)x1 + 2x2 = 3

we transform it, step by step, until it is in a form that we can easily solve.

The first transformation rewrites the system by interchanging the first and third row.

swap row 1 with row 3 −→
(1/3)x1 + 2x2 = 3
x1 + 5x2 − 2x3 = 2
3x3 = 9

The second transformation rescales the first row by multiplying both sides of the equation by 3.

multiply row 1 by 3 −→
x1 + 6x2 = 9
x1 + 5x2 − 2x3 = 2
3x3 = 9

The third transformation is the only nontrivial one in this example. We mentally multiply both sides of the first row by −1, mentally add that to the second row, and write the result in as the new second row.

add −1 times row 1 to row 2 −→
x1 + 6x2 = 9
−x2 − 2x3 = −7
3x3 = 9

The point of these steps is that we've brought the system to a form where we can easily find the value of each variable. The bottom equation shows that x3 = 3. Substituting 3 for x3 in the middle equation shows that x2 = 1. Substituting those two into the top equation gives that x1 = 3. Thus the system has a unique solution; the solution set is {(3, 1, 3)}.

Most of this subsection and the next one consists of examples of solving linear systems by Gauss's Method. We will use it throughout the book. It is fast and easy. But before we do those examples we will first show that this Method is also safe, in that it never loses solutions or picks up extraneous solutions.

1.5 Theorem (Gauss's Method) If a linear system is changed to another by one of these operations
(1) an equation is swapped with another
(2) an equation has both sides multiplied by a nonzero constant
(3) an equation is replaced by the sum of itself and a multiple of another
then the two systems have the same set of solutions.
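As a computational aside, the transformations of Example 1.4, and the safety that Theorem 1.5 promises, can be spot-checked mechanically. This is a minimal sketch (Python, using the standard-library `fractions` module for exact arithmetic; the variable names are ours, not the book's): each equation becomes a row of coefficients and its constant, the three steps are applied in order, and back-substitution recovers the variables.

```python
from fractions import Fraction as F

# rows [a1, a2, a3, d] for the system of Example 1.4:
#   3x3 = 9,  x1 + 5x2 - 2x3 = 2,  (1/3)x1 + 2x2 = 3
rows = [[F(0), F(0), F(3), F(9)],
        [F(1), F(5), F(-2), F(2)],
        [F(1, 3), F(2), F(0), F(3)]]

rows[0], rows[2] = rows[2], rows[0]                  # swap row 1 with row 3
rows[0] = [3 * a for a in rows[0]]                   # multiply row 1 by 3
rows[1] = [b - a for a, b in zip(rows[0], rows[1])]  # add -1 times row 1 to row 2

# back-substitution, from the bottom equation up
x3 = rows[2][3] / rows[2][2]
x2 = (rows[1][3] - rows[1][2] * x3) / rows[1][1]
x1 = (rows[0][3] - rows[0][1] * x2 - rows[0][2] * x3) / rows[0][0]
```

As in the text this yields the unique solution (3, 1, 3), and substituting it back into the original three equations confirms that the operations lost no solutions and gained none.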
Each of the three Gauss's Method operations has a restriction. Multiplying a row by 0 is not allowed because obviously that can change the solution set of the system. Similarly, adding a multiple of a row to itself is not allowed because adding −1 times the row to itself has the effect of multiplying the row by 0. Finally, we disallow swapping a row with itself to make some results in the fourth chapter easier to state and remember, and also because it's pointless.

Proof We will cover the equation swap operation here. The other two cases are Exercise 31.

Consider the swap of row i with row j. The tuple (s1, . . . , sn) satisfies the system before the swap if and only if substituting the values for the variables, the s's for the x's, gives a conjunction of true statements: a1,1s1 + a1,2s2 + · · · + a1,nsn = d1 and . . . ai,1s1 + ai,2s2 + · · · + ai,nsn = di and . . . aj,1s1 + aj,2s2 + · · · + aj,nsn = dj and . . . am,1s1 + am,2s2 + · · · + am,nsn = dm.

In a list of statements joined with 'and' we can rearrange the order of the statements. Thus this requirement is met if and only if a1,1s1 + a1,2s2 + · · · + a1,nsn = d1 and . . . aj,1s1 + aj,2s2 + · · · + aj,nsn = dj and . . . ai,1s1 + ai,2s2 + · · · + ai,nsn = di and . . . am,1s1 + am,2s2 + · · · + am,nsn = dm. This is exactly the requirement that (s1, . . . , sn) solves the system after the row swap. QED

1.6 Definition The three operations from Theorem 1.5 are the elementary reduction operations, or row operations, or Gaussian operations. They are swapping, multiplying by a scalar (or rescaling), and row combination.

When writing out the calculations, we will abbreviate 'row i' by 'ρi'. For instance, we will denote a row combination operation by kρi + ρj, with the row that changes written second. To save writing we will often combine addition steps when they use the same ρi; see the next example.

1.7 Example Gauss's Method systematically applies the row operations to solve a system.
Here is a typical case.

x + y = 0
2x − y + 3z = 3
x − 2y − z = 3

We begin by using the first row to eliminate the 2x in the second row and the x in the third. To get rid of the 2x, we mentally multiply the entire first row by −2, add that to the second row, and write the result in as the new second row. To eliminate the x leading the third row, we multiply the first row by −1, add that to the third row, and write the result in as the new third row.

−2ρ1+ρ2, −ρ1+ρ3 −→
x + y = 0
−3y + 3z = 3
−3y − z = 3

To finish we transform the second system into a third system, where the last equation involves only one unknown. We use the second row to eliminate y from
the third row.

−ρ2+ρ3 −→
x + y = 0
−3y + 3z = 3
−4z = 0

Now the system's solution is easy to find. The third row shows that z = 0. Substitute that back into the second row to get y = −1 and then substitute back into the first row to get x = 1.

1.8 Example For the Physics problem from the start of this chapter, Gauss's Method gives this.

40h + 15c = 100
−50h + 25c = 50

(5/4)ρ1+ρ2 −→
40h + 15c = 100
(175/4)c = 175

So c = 4, and back-substitution gives that h = 1. (We will solve the Chemistry problem later.)

1.9 Example The reduction

x + y + z = 9
2x + 4y − 3z = 1
3x + 6y − 5z = 0

−2ρ1+ρ2, −3ρ1+ρ3 −→
x + y + z = 9
2y − 5z = −17
3y − 8z = −27

−(3/2)ρ2+ρ3 −→
x + y + z = 9
2y − 5z = −17
−(1/2)z = −(3/2)

shows that z = 3, y = −1, and x = 7.

As illustrated above, the point of Gauss's Method is to use the elementary reduction operations to set up back-substitution.

1.10 Definition In each row of a system, the first variable with a nonzero coefficient is the row's leading variable. A system is in echelon form if each leading variable is to the right of the leading variable in the row above it (except for the leading variable in the first row).

1.11 Example The prior three examples only used the operation of row combination. This linear system requires the swap operation to get it into echelon form because after the first combination

x − y = 0
2x − 2y + z + 2w = 4
y + w = 0
2z + w = 5

−2ρ1+ρ2 −→
x − y = 0
z + 2w = 4
y + w = 0
2z + w = 5

the second equation has no leading y. To get one, we put in place a lower-down row that has a leading y.

ρ2↔ρ3 −→
x − y = 0
y + w = 0
z + 2w = 4
2z + w = 5
(Had there been more than one suitable row below the second then we could have swapped in any one.) With that, Gauss's Method proceeds as before.

−2ρ3+ρ4 −→
x − y = 0
y + w = 0
z + 2w = 4
−3w = −3

Back-substitution gives w = 1, z = 2, y = −1, and x = −1.

The row rescaling operation is not needed, strictly speaking, to solve linear systems. But we will use it later in this chapter as part of a variation on Gauss's Method, the Gauss-Jordan Method.

All of the systems seen so far have the same number of equations as unknowns. All of them have a solution, and for all of them there is only one solution. We finish this subsection by seeing some other things that can happen.

1.12 Example This system has more equations than variables.

x + 3y = 1
2x + y = −3
2x + 2y = −2

Gauss's Method helps us understand this system also, since this

−2ρ1+ρ2, −2ρ1+ρ3 −→
x + 3y = 1
−5y = −5
−4y = −4

shows that one of the equations is redundant. Echelon form

−(4/5)ρ2+ρ3 −→
x + 3y = 1
−5y = −5
0 = 0

gives that y = 1 and x = −2. The '0 = 0' reflects the redundancy.

Gauss's Method is also useful on systems with more variables than equations. Many examples are in the next subsection.

Another way that linear systems can differ from the examples shown earlier is that some linear systems do not have a unique solution. This can happen in two ways.

The first is that a system can fail to have any solution at all.

1.13 Example Contrast the system in the last example with this one.

x + 3y = 1
2x + y = −3
2x + 2y = 0

−2ρ1+ρ2, −2ρ1+ρ3 −→
x + 3y = 1
−5y = −5
−4y = −2
Here the system is inconsistent: no pair of numbers satisfies all of the equations simultaneously. Echelon form makes this inconsistency obvious.

−(4/5)ρ2+ρ3 −→
x + 3y = 1
−5y = −5
0 = 2

The solution set is empty.

1.14 Example The prior system has more equations than unknowns, but that is not what causes the inconsistency — Example 1.12 has more equations than unknowns and yet is consistent. Nor is having more equations than unknowns necessary for inconsistency, as we see with this inconsistent system that has the same number of equations as unknowns.

x + 2y = 8
2x + 4y = 8

−2ρ1+ρ2 −→
x + 2y = 8
0 = −8

The other way that a linear system can fail to have a unique solution, besides having no solutions, is to have many solutions.

1.15 Example In this system

x + y = 4
2x + 2y = 8

any pair of real numbers (s1, s2) satisfying the first equation also satisfies the second. The solution set {(x, y) | x + y = 4} is infinite; some of its members are (0, 4), (−1, 5), and (2.5, 1.5).

The result of applying Gauss's Method here contrasts with the prior example because we do not get a contradictory equation.

−2ρ1+ρ2 −→
x + y = 4
0 = 0

Don't be fooled by the final system in that example. A '0 = 0' equation is not the signal that a system has many solutions.

1.16 Example The absence of a '0 = 0' does not keep a system from having many different solutions. This system is in echelon form and has no '0 = 0', but has infinitely many solutions.

x + y + z = 0
y + z = 0

Some solutions are: (0, 1, −1), (0, 1/2, −1/2), (0, 0, 0), and (0, −π, π). There are infinitely many solutions because any triple whose first component is 0 and whose second component is the negative of the third is a solution.

Nor does the presence of a '0 = 0' mean that the system must have many solutions. Example 1.12 shows that. So does this system, which does not have
any solutions at all despite that in echelon form it has a '0 = 0' row.

2x − 2z = 6
y + z = 1
2x + y − z = 7
3y + 3z = 0

−ρ1+ρ3 −→
2x − 2z = 6
y + z = 1
y + z = 1
3y + 3z = 0

−ρ2+ρ3, −3ρ2+ρ4 −→
2x − 2z = 6
y + z = 1
0 = 0
0 = −3

We will finish this subsection with a summary of what we've seen so far about Gauss's Method.

Gauss's Method uses the three row operations to set a system up for back-substitution. If any step shows a contradictory equation then we can stop with the conclusion that the system has no solutions. If we reach echelon form without a contradictory equation, and each variable is a leading variable in its row, then the system has a unique solution and we find it by back-substitution. Finally, if we reach echelon form without a contradictory equation, and there is not a unique solution — that is, at least one variable is not a leading variable — then the system has many solutions.

The next subsection deals with the third case. We will see that such a system must have infinitely many solutions and we will describe the solution set.

Note For all exercises, you must justify your answer. For instance, if a question asks whether a system has a solution then you must justify a yes response by producing the solution and must justify a no response by showing that no solution exists.

Exercises

1.17 Use Gauss's Method to find the unique solution for each system.
(a) 2x + 3y = 13
    x − y = −1
(b) x − z = 0
    3x + y = 1
    −x + y + z = 4

1.18 Use Gauss's Method to solve each system or conclude 'many solutions' or 'no solutions'.
(a) 2x + 2y = 5
    x − 4y = 0
(b) −x + y = 1
    x + y = 2
(c) x − 3y + z = 1
    x + y + 2z = 14
(d) −x − y = 1
    −3x − 3y = 2
(e) 4y + z = 20
    2x − 2y + z = 0
    x + z = 5
    x + y − z = 10
(f) 2x + z + w = 5
    y − w = −1
    3x − z − w = 0
    4x + y + 2z + w = 9

1.19 We can solve linear systems by methods other than Gauss's. One often taught in high school is to solve one of the equations for a variable, then substitute the resulting expression into other equations.
Then we repeat that step until thereis an equation with only one variable. From that we get the first number in thesolution and then we get the rest with back-substitution. This method takes longer
  18. 18. Section I. Solving Linear Systems 9than Gauss’s Method, since it involves more arithmetic operations, and is alsomore likely to lead to errors. To illustrate how it can lead to wrong conclusions,we will use the systemx + 3y = 12x + y = −32x + 2y = 0from Example 1.13.(a) Solve the first equation for x and substitute that expression into the secondequation. Find the resulting y.(b) Again solve the first equation for x, but this time substitute that expressioninto the third equation. Find this y.What extra step must a user of this method take to avoid erroneously concluding asystem has a solution?1.20 For which values of k are there no solutions, many solutions, or a uniquesolution to this system?x − y = 13x − 3y = k1.21 This system is not linear, in some sense,2 sin α − cos β + 3 tan γ = 34 sin α + 2 cos β − 2 tan γ = 106 sin α − 3 cos β + tan γ = 9and yet we can nonetheless apply Gauss’s Method. Do so. Does the system have asolution?1.22 [Anton] What conditions must the constants, the b’s, satisfy so that each ofthese systems has a solution? Hint. Apply Gauss’s Method and see what happensto the right side.(a) x − 3y = b13x + y = b2x + 7y = b32x + 4y = b4(b) x1 + 2x2 + 3x3 = b12x1 + 5x2 + 3x3 = b2x1 + 8x3 = b31.23 True or false: a system with more unknowns than equations has at least onesolution. (As always, to say ‘true’ you must prove it, while to say ‘false’ you mustproduce a counterexample.)1.24 Must any Chemistry problem like the one that starts this subsection — a balancethe reaction problem — have infinitely many solutions?1.25 Find the coefficients a, b, and c so that the graph of f(x) = ax2+ bx + c passesthrough the points (1, 2), (−1, 6), and (2, 3).1.26 After Theorem 1.5 we note that multiplying a row by 0 is not allowed becausethat could change a solution set. Give an example of a system with solution set S0where after multiplying a row by 0 the new system has a solution set S1 and S0 isa proper subset of S1. 
Give an example where S0 = S1.1.27 Gauss’s Method works by combining the equations in a system to make newequations.(a) Can we derive the equation 3x − 2y = 5 by a sequence of Gaussian reductionsteps from the equations in this system?x + y = 14x − y = 6
  19. 19. 10 Chapter One. Linear Systems(b) Can we derive the equation 5x−3y = 2 with a sequence of Gaussian reductionsteps from the equations in this system?2x + 2y = 53x + y = 4(c) Can we derive 6x − 9y + 5z = −2 by a sequence of Gaussian reduction stepsfrom the equations in the system?2x + y − z = 46x − 3y + z = 51.28 Prove that, where a, b, . . . , e are real numbers and a = 0, ifax + by = chas the same solution set asax + dy = ethen they are the same equation. What if a = 0?1.29 Show that if ad − bc = 0 thenax + by = jcx + dy = khas a unique solution.1.30 In the systemax + by = cdx + ey = feach of the equations describes a line in the xy-plane. By geometrical reasoning,show that there are three possibilities: there is a unique solution, there is nosolution, and there are infinitely many solutions.1.31 Finish the proof of Theorem 1.5.1.32 Is there a two-unknowns linear system whose solution set is all of R2?1.33 Are any of the operations used in Gauss’s Method redundant? That is, can wemake any of the operations from a combination of the others?1.34 Prove that each operation of Gauss’s Method is reversible. That is, show thatif two systems are related by a row operation S1 → S2 then there is a row operationto go back S2 → S1.? 1.35 [Anton] A box holding pennies, nickels and dimes contains thirteen coins witha total value of 83 cents. How many coins of each type are in the box?? 1.36 [Con. Prob. 1955] Four positive integers are given. Select any three of theintegers, find their arithmetic average, and add this result to the fourth integer.Thus the numbers 29, 23, 21, and 17 are obtained. One of the original integersis:(a) 19 (b) 21 (c) 23 (d) 29 (e) 17? 1.37 [Am. Math. Mon., Jan. 1935] Laugh at this: AHAHA + TEHE = TEHAW. Itresulted from substituting a code letter for each digit of a simple example inaddition, and it is required to identify the letters and prove the solution unique.? 1.38 [Wohascum no. 
2] The Wohascum County Board of Commissioners, which has20 members, recently had to elect a President. There were three candidates (A, B,and C); on each ballot the three candidates were to be listed in order of preference,with no abstentions. It was found that 11 members, a majority, preferred A overB (thus the other 9 preferred B over A). Similarly, it was found that 12 memberspreferred C over A. Given these results, it was suggested that B should withdraw,to enable a runoff election between A and C. However, B protested, and it was
  20. 20. Section I. Solving Linear Systems 11then found that 14 members preferred B over C! The Board has not yet recoveredfrom the resulting confusion. Given that every possible order of A, B, C appearedon at least one ballot, how many members voted for B as their first choice?? 1.39 [Am. Math. Mon., Jan. 1963] “This system of n linear equations with n un-knowns,” said the Great Mathematician, “has a curious property.”“Good heavens!” said the Poor Nut, “What is it?”“Note,” said the Great Mathematician, “that the constants are in arithmeticprogression.”“It’s all so clear when you explain it!” said the Poor Nut. “Do you mean like6x + 9y = 12 and 15x + 18y = 21?”“Quite so,” said the Great Mathematician, pulling out his bassoon. “Indeed,the system has a unique solution. Can you find it?”“Good heavens!” cried the Poor Nut, “I am baffled.”Are you?I.2 Describing the Solution SetA linear system with a unique solution has a solution set with one element. Alinear system with no solution has a solution set that is empty. In these casesthe solution set is easy to describe. Solution sets are a challenge to describe onlywhen they contain many elements.2.1 Example This system has many solutions because in echelon form2x + z = 3x − y − z = 13x − y = 4−(1/2)ρ1+ρ2−→−(3/2)ρ1+ρ32x + z = 3−y − (3/2)z = −1/2−y − (3/2)z = −1/2−ρ2+ρ3−→2x + z = 3−y − (3/2)z = −1/20 = 0not all of the variables are leading variables. Theorem 1.5 shows that an (x, y, z)satisfies the first system if and only if it satisfies the third. So we can describethe solution set {(x, y, z) 2x + z = 3 and x − y − z = 1 and 3x − y = 4} in thisway.{(x, y, z) 2x + z = 3 and −y − 3z/2 = −1/2} (∗)This description is better because it has two equations instead of three but it isnot optimal because it still has some hard to understand interactions among thevariables.To improve it, use the variable that does not lead any equation, z, to describethe variables that do lead, x and y. 
The second equation gives y = (1/2)−(3/2)zand the first equation gives x = (3/2)−(1/2)z. Thus we can describe the solutionset in this way.{(x, y, z) = ((3/2) − (1/2)z, (1/2) − (3/2)z, z) z ∈ R} (∗∗)
  21. 21. 12 Chapter One. Linear SystemsCompared with (∗), the advantage of (∗∗) is that z can be any real number.This makes the job of deciding which tuples are in the solution set much easier.For instance, taking z = 2 shows that (1/2, −5/2, 2) is a solution.2.2 Definition In an echelon form linear system the variables that are not leadingare free.2.3 Example Reduction of a linear system can end with more than one variablefree. On this system Gauss’s Methodx + y + z − w = 1y − z + w = −13x + 6z − 6w = 6−y + z − w = 1−3ρ1+ρ3−→x + y + z − w = 1y − z + w = −1−3y + 3z − 3w = 3−y + z − w = 13ρ2+ρ3−→ρ2+ρ4x + y + z − w = 1y − z + w = −10 = 00 = 0leaves x and y leading, and both z and w free. To get the description that weprefer we work from the bottom. We first express the leading variable y in termsof z and w, with y = −1 + z − w. Moving up to the top equation, substitutingfor y gives x + (−1 + z − w) + z − w = 1 and solving for x leaves x = 2 − 2z + 2w.The solution set{(2 − 2z + 2w, −1 + z − w, z, w) z, w ∈ R} (∗∗)has the leading variables in terms of the free variables.2.4 Example The list of leading variables may skip over some columns. Afterthis reduction2x − 2y = 0z + 3w = 23x − 3y = 0x − y + 2z + 6w = 4−(3/2)ρ1+ρ3−→−(1/2)ρ1+ρ42x − 2y = 0z + 3w = 20 = 02z + 6w = 4−2ρ2+ρ4−→2x − 2y = 0z + 3w = 20 = 00 = 0x and z are the leading variables, not x and y. The free variables are y and wand so we can describe the solution set as {(y, y, 2 − 3w, w) y, w ∈ R}. Forinstance, (1, 1, 2, 0) satisfies the system — take y = 1 and w = 0. The four-tuple(1, 0, 5, 4) is not a solution since its first coordinate does not equal its second.A variable that we use to describe a family of solutions is a parameter. Wesay that the solution set in the prior example is parametrized with y and w.
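A parametrized description such as the one in Example 2.3 is easy to test: every choice of parameter values must satisfy every one of the original equations. A small sketch (Python; the function names and the sampling range are our own choices, not the book's):

```python
def solution(z, w):
    # Example 2.3's description: (x, y, z, w) = (2 - 2z + 2w, -1 + z - w, z, w)
    return (2 - 2 * z + 2 * w, -1 + z - w, z, w)

def satisfies(x, y, z, w):
    # the four original equations of Example 2.3
    return (x + y + z - w == 1 and y - z + w == -1
            and 3 * x + 6 * z - 6 * w == 6 and -y + z - w == 1)

# every sampled parameter pair gives a genuine solution of the system
ok = all(satisfies(*solution(z, w)) for z in range(-5, 6) for w in range(-5, 6))
```

Passing a grid of sample values is evidence, not proof, but the algebraic check in the example is exactly the computation these two functions perform.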
(The terms 'parameter' and 'free variable' do not mean the same thing. In the prior example y and w are free because in the echelon form system they do not lead, while they are parameters because of how we used them to describe the set of solutions. Had we instead rewritten the second equation as w = 2/3 − (1/3)z then the free variables would still be y and w but the parameters would be y and z.)

In the rest of this book we will solve linear systems by bringing them to echelon form and then using the free variables as parameters in the description of the solution set.

2.5 Example This is another system with infinitely many solutions.

x + 2y = 1
2x + z = 2
3x + 2y + z − w = 4

−2ρ1+ρ2, −3ρ1+ρ3 −→
x + 2y = 1
−4y + z = 0
−4y + z − w = 1

−ρ2+ρ3 −→
x + 2y = 1
−4y + z = 0
−w = 1

The leading variables are x, y, and w. The variable z is free. Notice that, although there are infinitely many solutions, the value of w doesn't vary but is constant at −1. To parametrize, write w in terms of z with w = −1 + 0z. Then y = (1/4)z. Substitute for y in the first equation to get x = 1 − (1/2)z. The solution set is {(1 − (1/2)z, (1/4)z, z, −1) | z ∈ R}.

Parametrizing solution sets shows that systems with free variables have infinitely many solutions. In the prior example, z takes on all real number values, each associated with an element of the solution set, and so there are infinitely many such elements.

We finish this subsection by developing a streamlined notation for linear systems and their solution sets.

2.6 Definition An m×n matrix is a rectangular array of numbers with m rows and n columns. Each number in the matrix is an entry.

Matrices are usually named by upper case roman letters such as A. For instance,

A = ( 1  2.2   5 )
    ( 3   4   −7 )

has 2 rows and 3 columns and so is a 2×3 matrix. Read that aloud as "two-by-three"; the number of rows is always first. We denote entries with the corresponding lower-case letter so that ai,j is the number in row i and column j of the array. The entry in the second row and first column is a2,1 = 3. Note that the order of the subscripts matters: a1,2 ≠ a2,1 since a1,2 = 2.2. (The parentheses around the array are so that when two matrices are adjacent then we can tell where one ends and the next one begins.)
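In code, a matrix is naturally a list of rows. One caveat: the book's subscripts are 1-based while most languages index from 0, so a tiny accessor keeps the conventions straight. A sketch mirroring Definition 2.6's example (Python; the helper name is ours):

```python
# the 2x3 matrix A from the text, stored as a list of rows
A = [[1, 2.2, 5],
     [3, 4, -7]]

def entry(M, i, j):
    """Return a_i,j using the book's 1-based row/column convention."""
    return M[i - 1][j - 1]

rows, cols = len(A), len(A[0])   # size: the number of rows is always first
```

Here entry(A, 2, 1) is 3 while entry(A, 1, 2) is 2.2, illustrating that the order of the subscripts matters.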
  23. 23. 14 Chapter One. Linear SystemsMatrices occur throughout this book. We shall use Mn×m to denote thecollection of n×m matrices.2.7 Example We can abbreviate this linear systemx + 2y = 4y − z = 0x + 2z = 4with this matrix. 1 2 0 40 1 −1 01 0 2 4The vertical bar just reminds a reader of the difference between the coefficientson the system’s left hand side and the constants on the right. With a bar, thisis an augmented matrix. In this notation the clerical load of Gauss’s Method —the copying of variables, the writing of +’s and =’s — is lighter.1 2 0 40 1 −1 01 0 2 4−ρ1+ρ3−→1 2 0 40 1 −1 00 −2 2 02ρ2+ρ3−→1 2 0 40 1 −1 00 0 0 0The second row stands for y − z = 0 and the first row stands for x + 2y = 4 sothe solution set is {(4 − 2z, z, z) z ∈ R}.We will also use the matrix notation to clarify the descriptions of solution sets.The description {(2 − 2z + 2w, −1 + z − w, z, w) z, w ∈ R} from Example 2.3is hard to read. We will rewrite it to group all the constants together, all thecoefficients of z together, and all the coefficients of w together. We will writethem vertically, in one-column matrices.{2−100+−2110· z +2−101· w z, w ∈ R}For instance, the top line says that x = 2 − 2z + 2w and the second line saysthat y = −1 + z − w. The next section gives a geometric interpretation that willhelp us picture the solution sets.2.8 Definition A vector (or column vector) is a matrix with a single column.A matrix with a single row is a row vector. The entries of a vector are itscomponents. A column or row vector whose components are all zeros is a zerovector.Vectors are an exception to the convention of representing matrices withcapital roman letters. We use lower-case roman or greek letters overlined with an
  24. 24. Section I. Solving Linear Systems 15arrow: a, b, . . . or α, β, . . . (boldface is also common: a or α). For instance,this is a column vector with a third component of 7.v =137A zero vector is denoted 0. There are many different zero vectors, e.g., theone-tall zero vector, the two-tall zero vector, etc. Nonetheless we will usuallysay “the” zero vector, expecting that the size will be clear from the context.2.9 Definition The linear equation a1x1 +a2x2 + · · · +anxn = d with unknownsx1, . . . , xn is satisfied bys =s1...snif a1s1 + a2s2 + · · · + ansn = d. A vector satisfies a linear system if it satisfieseach equation in the system.The style of description of solution sets that we use involves adding thevectors, and also multiplying them by real numbers, such as the z and w. Weneed to define these operations.2.10 Definition The vector sum of u and v is the vector of the sums.u + v =u1...un +v1...vn =u1 + v1...un + vnNote that the vectors must have the same number of entries for the additionto be defined. This entry-by-entry addition works for any pair of matrices, notjust vectors, provided that they have the same number of rows and columns.2.11 Definition The scalar multiplication of the real number r and the vector vis the vector of the multiples.r · v = r ·v1...vn =rv1...rvnAs with the addition operation, the entry-by-entry scalar multiplicationoperation extends beyond just vectors to any matrix.We write scalar multiplication in either order, as r · v or v · r, or without the‘·’ symbol: rv. (Do not refer to scalar multiplication as ‘scalar product’ becausewe use that name for a different operation.)
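Definitions 2.10 and 2.11 are entry-by-entry operations, so in code each is a one-line comprehension. A sketch (Python; the function names are ours, not the book's):

```python
def vec_add(u, v):
    # vector sum: the vectors must have the same number of entries
    assert len(u) == len(v)
    return [a + b for a, b in zip(u, v)]

def scalar_mult(r, v):
    # multiply every entry of the vector v by the real number r
    return [r * a for a in v]
```

These reproduce the computations of Example 2.12, and the overlap noted there, v + v = 2v, holds entry by entry.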
  25. 25. 16 Chapter One. Linear Systems2.12 Example231 +3−14 =2 + 33 − 11 + 4 =525 7 ·14−1−3=728−7−21Notice that the definitions of vector addition and scalar multiplication agreewhere they overlap, for instance, v + v = 2v.With the notation defined, we can now solve systems in the way that we willuse from now on.2.13 Example This system2x + y − w = 4y + w + u = 4x − z + 2w = 0reduces in this way.2 1 0 −1 0 40 1 0 1 1 41 0 −1 2 0 0−(1/2)ρ1+ρ3−→2 1 0 −1 0 40 1 0 1 1 40 −1/2 −1 5/2 0 −2(1/2)ρ2+ρ3−→2 1 0 −1 0 40 1 0 1 1 40 0 −1 3 1/2 0The solution set is {(w + (1/2)u, 4 − w − u, 3w + (1/2)u, w, u) w, u ∈ R}. Wewrite that in vector form.{xyzwu=04000+1−1310w +1/2−11/201u w, u ∈ R}Note how well vector notation sets off the coefficients of each parameter. Forinstance, the third row of the vector form shows plainly that if u is fixed then zincreases three times as fast as w. Another thing shown plainly is that settingboth w and u to zero gives that this vectorxyzwu=04000is a particular solution of the linear system.
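The vector-form answer of Example 2.13, a particular solution plus one coefficient vector per parameter, can be checked by substituting back into the system for a range of parameter values. A sketch (Python; the helper names and the sampled range are our own choices):

```python
def solution(w, u):
    # particular solution plus w and u times their coefficient vectors,
    # as in the vector form of Example 2.13
    particular = [0, 4, 0, 0, 0]
    v_w = [1, -1, 3, 1, 0]
    v_u = [0.5, -1, 0.5, 0, 1]
    return [p + w * a + u * b for p, a, b in zip(particular, v_w, v_u)]

def satisfies(x, y, z, w, u):
    # the three equations of Example 2.13
    return (2 * x + y - w == 4 and y + w + u == 4 and x - z + 2 * w == 0)

checked = all(satisfies(*solution(w, u)) for w in range(-4, 5) for u in range(-4, 5))
```

Setting both parameters to zero returns exactly the particular solution, matching the observation at the end of the example.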
2.14 Example In the same way, this system

     x −  y +  z = 1
    3x      +  z = 3
    5x − 2y + 3z = 5

reduces

    (1 −1 1 | 1)               (1 −1  1 | 1)             (1 −1  1 | 1)
    (3  0 1 | 3)  −3ρ1+ρ2 →    (0  3 −2 | 0)  −ρ2+ρ3 →   (0  3 −2 | 0)
    (5 −2 3 | 5)  −5ρ1+ρ3      (0  3 −2 | 0)             (0  0  0 | 0)

to a one-parameter solution set.

      (1)   (−1/3)
    { (0) + ( 2/3) z | z ∈ R }
      (0)   (   1)

As in the prior example, the vector not associated with the parameter

    (1)
    (0)
    (0)

is a particular solution of the system.

Before the exercises, we will consider what we have accomplished and what we have yet to do.

So far we have done the mechanics of Gauss's Method. Except for one result, Theorem 1.5 — which we did because it says that the method gives the right answers — we have not stopped to consider any of the interesting questions that arise.

For example, can we prove that we can always describe solution sets as above, with a particular solution vector added to an unrestricted linear combination of some other vectors? We've noted that the solution sets we described in this way have infinitely many solutions so an answer to this question would tell us about the size of solution sets. It will also help us understand the geometry of the solution sets.

Many questions arise from our observation that we can do Gauss's Method in more than one way (for instance, when swapping rows we may have a choice of more than one row). Theorem 1.5 says that we must get the same solution set no matter how we proceed but if we do Gauss's Method in two ways must we get the same number of free variables in each echelon form system? Must those be the same variables, that is, is solving a problem one way to get y and w free and solving it another way to get y and z free impossible?

In the rest of this chapter we will answer these questions. The answer to each is 'yes'. We do the first one, the proof about the description of solution sets, in the next subsection. Then, in the chapter's second section, we will describe
  27. 27. 18 Chapter One. Linear Systemsthe geometry of solution sets. After that, in this chapter’s final section, we willsettle the questions about the parameters. When we are done we will not onlyhave a solid grounding in the practice of Gauss’s Method, we will also havea solid grounding in the theory. We will know exactly what can and cannothappen in a reduction.Exercises2.15 Find the indicated entry of the matrix, if it is defined.A =1 3 12 −1 4(a) a2,1 (b) a1,2 (c) a2,2 (d) a3,12.16 Give the size of each matrix.(a)1 0 42 1 5(b)1 1−1 13 −1 (c)5 1010 52.17 Do the indicated vector operation, if it is defined.(a)211 +304 (b) 54−1(c)151 −311 (d) 721+ 935(e)12+123 (f) 6311 − 4203 + 21152.18 Solve each system using matrix notation. Express the solution using vec-tors.(a) 3x + 6y = 18x + 2y = 6(b) x + y = 1x − y = −1(c) x1 + x3 = 4x1 − x2 + 2x3 = 54x1 − x2 + 5x3 = 17(d) 2a + b − c = 22a + c = 3a − b = 0(e) x + 2y − z = 32x + y + w = 4x − y + z + w = 1(f) x + z + w = 42x + y − w = 23x + y + z = 72.19 Solve each system using matrix notation. Give each solution set in vectornotation.(a) 2x + y − z = 14x − y = 3(b) x − z = 1y + 2z − w = 3x + 2y + 3z − w = 7(c) x − y + z = 0y + w = 03x − 2y + 3z + w = 0−y − w = 0(d) a + 2b + 3c + d − e = 13a − b + c + d + e = 32.20 The vector is in the set. What value of the parameters produces that vec-tor?(a)5−5, {1−1k k ∈ R}(b)−121, {−210 i +301 j i, j ∈ R}(c)0−42, {110 m +201 n m, n ∈ R}
  28. 28. Section I. Solving Linear Systems 192.21 Decide if the vector is in the set.(a)3−1, {−62k k ∈ R}(b)54, {5−4j j ∈ R}(c)21−1, {03−7 +1−13 r r ∈ R}(d)101, {201 j +−3−11 k j, k ∈ R}2.22 [Cleary] A farmer with 1200 acres is considering planting three different crops,corn, soybeans, and oats. The farmer wants to use all 1200 acres. Seed corn costs$20 per acre, while soybean and oat seed cost $50 and $12 per acre respectively.The farmer has $40 000 available to buy seed and intends to spend it all.(a) Use the information above to formulate two linear equations with threeunknowns and solve it.(b) Solutions to the system are choices that the farmer can make. Write downtwo reasonable solutions.(c) Suppose that in the fall when the crops mature, the farmer can bring inrevenue of $100 per acre for corn, $300 per acre for soybeans and $80 per acrefor oats. Which of your two solutions in the prior part would have resulted in alarger revenue?2.23 Parametrize the solution set of this one-equation system.x1 + x2 + · · · + xn = 02.24 (a) Apply Gauss’s Method to the left-hand side to solvex + 2y − w = a2x + z = bx + y + 2w = cfor x, y, z, and w, in terms of the constants a, b, and c.(b) Use your answer from the prior part to solve this.x + 2y − w = 32x + z = 1x + y + 2w = −22.25 Why is the comma needed in the notation ‘ai,j’ for matrix entries?2.26 Give the 4×4 matrix whose i, j-th entry is(a) i + j; (b) −1 to the i + j power.2.27 For any matrix A, the transpose of A, written AT, is the matrix whose columnsare the rows of A. Find the transpose of each of these.(a)1 2 34 5 6(b)2 −31 1(c)5 1010 5(d)1102.28 (a) Describe all functions f(x) = ax2+bx+c such that f(1) = 2 and f(−1) = 6.(b) Describe all functions f(x) = ax2+ bx + c such that f(1) = 2.2.29 Show that any set of five points from the plane R2lie on a common conic section,that is, they all satisfy some equation of the form ax2+ by2+ cxy + dx + ey + f = 0where some of a, . . . , f are nonzero.
  29. 29. 20 Chapter One. Linear Systems2.30 Make up a four equations/four unknowns system having(a) a one-parameter solution set;(b) a two-parameter solution set;(c) a three-parameter solution set.? 2.31 [Shepelev] This puzzle is from a Russian web-site http://www.arbuz.uz/ andthere are many solutions to it, but mine uses linear algebra and is very naive.There’s a planet inhabited by arbuzoids (watermeloners, to translate from Russian).Those creatures are found in three colors: red, green and blue. There are 13 redarbuzoids, 15 blue ones, and 17 green. When two differently colored arbuzoidsmeet, they both change to the third color.The question is, can it ever happen that all of them assume the same color?? 2.32 [USSR Olympiad no. 174](a) Solve the system of equations.ax + y = a2x + ay = 1For what values of a does the system fail to have solutions, and for what valuesof a are there infinitely many solutions?(b) Answer the above question for the system.ax + y = a3x + ay = 1? 2.33 [Math. Mag., Sept. 1952] In air a gold-surfaced sphere weighs 7588 grams. Itis known that it may contain one or more of the metals aluminum, copper, silver,or lead. When weighed successively under standard conditions in water, benzene,alcohol, and glycerin its respective weights are 6588, 6688, 6778, and 6328 grams.How much, if any, of the forenamed metals does it contain if the specific gravitiesof the designated substances are taken to be as follows?Aluminum 2.7 Alcohol 0.81Copper 8.9 Benzene 0.90Gold 19.3 Glycerin 1.26Lead 11.3 Water 1.00Silver 10.8I.3 General = Particular + HomogeneousIn the prior subsection the descriptions of solution sets all fit a pattern. Theyhave a vector that is a particular solution of the system added to an unre-stricted combination of some other vectors. The solution set from Example 2.13illustrates.{04000particularsolution+ w1−1310+ u1/2−11/201unrestrictedcombinationw, u ∈ R}
The combination is unrestricted in that w and u can be any real numbers — there is no condition like "such that 2w − u = 0" that would restrict which pairs w, u we can use.

That example shows an infinite solution set fitting the pattern. The other two kinds of solution sets also fit. A one-element solution set fits because it has a particular solution and the unrestricted combination part is trivial. (That is, instead of being a combination of two vectors or of one vector, it is a combination of no vectors. We are using the convention that the sum of an empty set of vectors is the vector of all zeros.) A zero-element solution set fits the pattern because there is no particular solution and so there are no sums of that form.

3.1 Theorem Any linear system's solution set has the form

    { p + c1·β1 + · · · + ck·βk | c1, . . . , ck ∈ R }

where p is any particular solution and where the number of vectors β1, . . . , βk equals the number of free variables that the system has after a Gaussian reduction.

The solution description has two parts, the particular solution p and the unrestricted linear combination of the β's. We shall prove the theorem in two corresponding parts, with two lemmas.

We will focus first on the unrestricted combination. For that we consider systems that have the vector of zeroes as a particular solution so that we can shorten p + c1β1 + · · · + ckβk to c1β1 + · · · + ckβk.

3.2 Definition A linear equation is homogeneous if it has a constant of zero, so that it can be written as a1x1 + a2x2 + · · · + anxn = 0.

3.3 Example With any linear system like

    3x + 4y = 3
    2x −  y = 1

we associate a system of homogeneous equations by setting the right side to zeros.

    3x + 4y = 0
    2x −  y = 0

Our interest in this comes from comparing the reduction of the original system

    3x + 4y = 3   −(2/3)ρ1+ρ2 →   3x +       4y =  3
    2x −  y = 1                        − (11/3)y = −1

with the reduction of the associated homogeneous system.

    3x + 4y = 0   −(2/3)ρ1+ρ2 →   3x +       4y = 0
    2x −  y = 0                        − (11/3)y = 0
Obviously the two reductions go in the same way. We can study how to reduce a linear system by instead studying how to reduce the associated homogeneous system.

Studying the associated homogeneous system has a great advantage over studying the original system. Nonhomogeneous systems can be inconsistent. But a homogeneous system must be consistent since there is always at least one solution, the zero vector.

3.4 Example Some homogeneous systems have the zero vector as their only solution.

    3x + 2y + z = 0               3x + 2y +  z = 0             3x + 2y +  z = 0
    6x + 4y     = 0   −2ρ1+ρ2 →           − 2z = 0   ρ2↔ρ3 →        y +  z = 0
         y + z = 0                      y +  z = 0                      − 2z = 0

3.5 Example Some homogeneous systems have many solutions. One example is the Chemistry problem from the first page of this book.

    7x      − 7z      = 0
    8x + y − 5z − 2w = 0
         y − 3z      = 0
        3y − 6z −  w = 0

                     7x           − 7z      = 0
    −(8/7)ρ1+ρ2 →         y + 3z − 2w = 0
                          y − 3z          = 0
                         3y − 6z −  w    = 0

                7x           − 7z      = 0
    −ρ2+ρ3 →         y + 3z − 2w = 0
    −3ρ2+ρ4             − 6z + 2w = 0
                       − 15z + 5w = 0

                     7x           − 7z      = 0
    −(5/2)ρ3+ρ4 →         y + 3z − 2w = 0
                             − 6z + 2w = 0
                                      0 = 0

The solution set

      (1/3)
    { (  1) w | w ∈ R }
      (1/3)
      (  1)

has many vectors besides the zero vector (if we interpret w as a number of molecules then solutions make sense only when w is a nonnegative multiple of 3).

3.6 Lemma For any homogeneous linear system there exist vectors β1, . . . , βk such that the solution set of the system is

    { c1β1 + · · · + ckβk | c1, . . . , ck ∈ R }

where k is the number of free variables in an echelon form version of the system.
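Before the proof, a quick numeric sanity check of Example 3.5 (an illustrative sketch of ours, not part of the text's development): every vector of the form (1/3, 1, 1/3, 1)·w should satisfy all four homogeneous equations. Exact rational arithmetic avoids any rounding questions.

```python
from fractions import Fraction  # exact arithmetic, no rounding error

def chem_solution(w):
    # (x, y, z, w) = (1/3, 1, 1/3, 1) * w, the family from Example 3.5
    w = Fraction(w)
    return (w / 3, w, w / 3, w)

def is_homogeneous_solution(x, y, z, w):
    # the four homogeneous equations of the Chemistry problem
    return (7*x - 7*z == 0
            and 8*x + y - 5*z - 2*w == 0
            and y - 3*z == 0
            and 3*y - 6*z - w == 0)

for w in range(-6, 7, 3):
    assert is_homogeneous_solution(*chem_solution(w))
```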
We will make two points before the proof. The first point is that the basic idea of the proof is straightforward. Consider a system of homogeneous equations in echelon form.

    x + y + 2z + s + t = 0
        y +  z + s − t = 0
                 s + t = 0

Start at the bottom, expressing its leading variable in terms of the free variables with s = −t. For the next row up, substitute the expression giving s as a combination of free variables y + z + (−t) − t = 0 and solve for its leading variable y = −z + 2t. Iterate: on the next row up, substitute expressions derived from prior rows x + (−z + 2t) + 2z + (−t) + t = 0 and solve for the leading variable x = −z − 2t. Now to finish, write the solution in vector notation

      (x)   (−1)     (−2)
      (y)   (−1)     ( 2)
    { (z) = ( 1) z + ( 0) t | z, t ∈ R }
      (s)   ( 0)     (−1)
      (t)   ( 0)     ( 1)

and recognize that the β1 and β2 of the lemma are the vectors associated with the free variables z and t.

The prior paragraph is a sketch, not a proof; for instance, a proof would have to hold no matter how many equations are in the system.

The second point we will make about the proof concerns its style. The above sketch moves row-by-row up the system, using the equations derived for the earlier rows to do the next row. This suggests a proof by mathematical induction.∗ Induction is an important and non-obvious proof technique that we shall use a number of times in this book.

We prove a statement by mathematical induction using two steps, a base step and an inductive step. In the base step we establish that the statement is true for some first instance, here that for the bottom equation we can write the leading variable in terms of the free variables. In the inductive step we must verify an implication, that if the statement is true for all prior cases then it follows for the present case also.
Here we will argue that if we can express the leading variables from the bottom-most t rows in terms of the free variables then we can express the leading variable of the next row up — the t + 1-th row from the bottom — in terms of the free variables. Those two steps together prove the statement because by the base step it is true for the bottom equation, and by the inductive step the fact that it is true for the bottom equation shows that it is true for the next one up. Then another application of the inductive step implies that it is true for the third equation up, etc.

Proof Apply Gauss's Method to get to echelon form. We may get some 0 = 0 equations (if the entire system consists of such equations then the result is trivially true) but because the system is homogeneous we cannot get any contradictory equations. We will use induction to show this statement: each leading variable can be expressed in terms of free variables. That will finish the proof because we can then use the free variables as the parameters and the β's are the vectors of coefficients of those free variables.

For the base step, consider the bottommost equation that is not 0 = 0. Call it equation m so we have

    a_{m,ℓ_m} x_{ℓ_m} + a_{m,ℓ_m+1} x_{ℓ_m+1} + · · · + a_{m,n} x_n = 0

where a_{m,ℓ_m} ≠ 0. (The 'ℓ' means "leading" so that x_{ℓ_m} is the leading variable in row m.) This is the bottom row so any variables x_{ℓ_m+1}, . . . after the leading variable in this equation must be free variables. Move these to the right side and divide by a_{m,ℓ_m}

    x_{ℓ_m} = (−a_{m,ℓ_m+1}/a_{m,ℓ_m}) x_{ℓ_m+1} + · · · + (−a_{m,n}/a_{m,ℓ_m}) x_n

to express the leading variable in terms of free variables. (If there are no variables to the right of x_{ℓ_m} then x_{ℓ_m} = 0; see the "tricky point" following this proof.)

For the inductive step assume that for the m-th equation, and the (m − 1)-th equation, etc., up to and including the (m − t)-th equation (where 0 ≤ t < m), we can express the leading variable in terms of free variables. We must verify that this statement also holds for the next equation up, the (m − (t + 1))-th equation. As in the earlier sketch, take each variable that leads in a lower-down equation x_{ℓ_m}, . . . , x_{ℓ_{m−t}} and substitute its expression in terms of free variables. (We only need do this for the leading variables from lower-down equations because the system is in echelon form and so in this equation none of the variables leading higher up equations appear.) The result has the form

    a_{m−(t+1),ℓ_{m−(t+1)}} x_{ℓ_{m−(t+1)}} + a linear combination of free variables = 0

with a_{m−(t+1),ℓ_{m−(t+1)}} ≠ 0. Move the free variables to the right side and divide by a_{m−(t+1),ℓ_{m−(t+1)}} to end with x_{ℓ_{m−(t+1)}} expressed in terms of free variables.

Because we have shown both the base step and the inductive step, by the principle of mathematical induction the proposition is true. QED

We say that the set { c1β1 + · · · + ckβk | c1, . . . , ck ∈ R } is generated by or spanned by the set of vectors {β1, . . . , βk}.

There is a tricky point to this. We rely on the convention that the sum of an empty set of vectors is the zero vector. In particular, we need this in the case where a homogeneous system has a unique solution. Then the homogeneous case fits the pattern of the other solution sets: in the proof above, we derive the solution set by taking the c's to be the free variables and if there is a unique solution then there are no free variables.

Note that the proof shows, as discussed after Example 2.4, that we can always parametrize solution sets using the free variables.

The next lemma finishes the proof of Theorem 3.1 by considering the particular solution part of the solution set's description.

∗ More information on mathematical induction is in the appendix.
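The back-substitution recipe in the proof of Lemma 3.6 can be run mechanically. As an illustration (ours, using exact rational arithmetic), here it is applied to the echelon form system from the sketch preceding the proof.

```python
from fractions import Fraction

def solution(z, t):
    # back-substitution, bottom row up, exactly as in the sketch:
    # x + y + 2z + s + t = 0,  y + z + s - t = 0,  s + t = 0
    z, t = Fraction(z), Fraction(t)
    s = -t                 # from s + t = 0
    y = -z + 2*t           # from y + z + s - t = 0
    x = -z - 2*t           # from x + y + 2z + s + t = 0
    return x, y, z, s, t

# every choice of the free variables z and t produces a solution
for z0 in (-1, 0, 2):
    for t0 in (-2, 0, 3):
        x, y, z, s, t = solution(z0, t0)
        assert x + y + 2*z + s + t == 0
        assert y + z + s - t == 0
        assert s + t == 0
```

Reading off the coefficients of z and t in the return value recovers the two vectors β1 = (−1, −1, 1, 0, 0) and β2 = (−2, 2, 0, −1, 1) of the sketch.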
3.7 Lemma For a linear system, where p is any particular solution, the solution set equals this set.

    { p + h | h satisfies the associated homogeneous system }

Proof We will show mutual set inclusion, that any solution to the system is in the above set and that anything in the set is a solution to the system.∗

For set inclusion the first way, that if a vector solves the system then it is in the set described above, assume that s solves the system. Then s − p solves the associated homogeneous system since for each equation index i,

    a_{i,1}(s1 − p1) + · · · + a_{i,n}(sn − pn)
      = (a_{i,1}s1 + · · · + a_{i,n}sn) − (a_{i,1}p1 + · · · + a_{i,n}pn) = di − di = 0

where pj and sj are the j-th components of p and s. Express s in the required p + h form by writing s − p as h.

For set inclusion the other way, take a vector of the form p + h, where p solves the system and h solves the associated homogeneous system, and note that p + h solves the given system: for any equation index i,

    a_{i,1}(p1 + h1) + · · · + a_{i,n}(pn + hn)
      = (a_{i,1}p1 + · · · + a_{i,n}pn) + (a_{i,1}h1 + · · · + a_{i,n}hn) = di + 0 = di

where pj and hj are the j-th components of p and h. QED

The two lemmas above together establish Theorem 3.1. We remember that theorem with the slogan, "General = Particular + Homogeneous".

3.8 Example This system illustrates Theorem 3.1.

     x + 2y −  z = 1
    2x + 4y      = 2
         y − 3z = 0

Gauss's Method

    −2ρ1+ρ2 →    x + 2y −  z = 1     ρ2↔ρ3 →    x + 2y −  z = 1
                          2z = 0                     y − 3z = 0
                      y − 3z = 0                         2z = 0

shows that the general solution is a singleton set.

      (1)
    { (0) }
      (0)

∗ More information on equality of sets is in the appendix.
That single vector is obviously a particular solution. The associated homogeneous system reduces via the same row operations

     x + 2y −  z = 0                           x + 2y −  z = 0
    2x + 4y      = 0   −2ρ1+ρ2 → ρ2↔ρ3 →           y − 3z = 0
         y − 3z = 0                                     2z = 0

to also give a singleton set.

      (0)
    { (0) }
      (0)

So, as discussed at the start of this subsection, in this single-solution case the general solution results from taking the particular solution and adding to it the unique solution of the associated homogeneous system.

3.9 Example Also discussed at the start of this subsection is that the case where the general solution set is empty also fits the 'General = Particular + Homogeneous' pattern. This system illustrates. Gauss's Method

     x      +  z +  w = −1                           x      +  z +  w = −1
    2x −  y      +  w =  3   −2ρ1+ρ2, −ρ1+ρ3 →          −y − 2z −  w =  5
     x +  y + 3z + 2w =  1                               y + 2z +  w =  2

shows that it has no solutions because the final two equations are in conflict.

The associated homogeneous system has a solution, because all homogeneous systems have at least one solution.

     x      +  z +  w = 0                                    x      +  z +  w = 0
    2x −  y      +  w = 0   −2ρ1+ρ2, −ρ1+ρ3, ρ2+ρ3 →            −y − 2z −  w = 0
     x +  y + 3z + 2w = 0                                                   0 = 0

In fact the solution set of this homogeneous system is infinite.

      (−1)     (−1)
    { (−2) z + (−1) w | z, w ∈ R }
      ( 1)     ( 0)
      ( 0)     ( 1)

However, because no particular solution of the original system exists, the general solution set is empty — there are no vectors of the form p + h because there are no p's.

3.10 Corollary Solution sets of linear systems are either empty, have one element, or have infinitely many elements.

Proof We've seen examples of all three happening so we need only prove that there are no other possibilities.

First, notice that a homogeneous system with at least one non-0 solution v has infinitely many solutions. This is because the set of multiples of v is infinite — if
s, t ∈ R are unequal then sv ≠ tv, because sv − tv = (s − t)v is non-0 since any non-0 component of v when rescaled by the non-0 factor s − t will give a non-0 value.

Now apply Lemma 3.7 to conclude that a solution set

    { p + h | h solves the associated homogeneous system }

is either empty (if there is no particular solution p), or has one element (if there is a p and the homogeneous system has the unique solution 0), or is infinite (if there is a p and the homogeneous system has a non-0 solution, and thus by the prior paragraph has infinitely many solutions). QED

This table summarizes the factors affecting the size of a general solution.

                              number of solutions of the
                                 homogeneous system
                              one              infinitely many
    particular     yes     unique solution    infinitely many solutions
    solution
    exists?        no      no solutions       no solutions

The dimension on the top of the table is the simpler one. When we perform Gauss's Method on a linear system, ignoring the constants on the right side and so paying attention only to the coefficients on the left-hand side, we either end with every variable leading some row or else we find that some variable does not lead a row, that is, we find that some variable is free. (We formalize "ignoring the constants on the right" by considering the associated homogeneous system.)

A notable special case is systems having the same number of equations as unknowns. Such a system will have a solution, and that solution will be unique, if and only if it reduces to an echelon form system where every variable leads its row (since there are the same number of variables as rows), which will happen if and only if the associated homogeneous system has a unique solution.

3.11 Definition A square matrix is nonsingular if it is the matrix of coefficients of a homogeneous system with a unique solution.
It is singular otherwise, that is, if it is the matrix of coefficients of a homogeneous system with infinitely many solutions.

3.12 Example The first of these matrices is nonsingular while the second is singular

    (1 2)      (1 2)
    (3 4)      (3 6)

because the first of these homogeneous systems has a unique solution while the second has infinitely many solutions.

     x + 2y = 0        x + 2y = 0
    3x + 4y = 0       3x + 6y = 0
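Whether a square matrix is nonsingular can be checked by running Gauss's Method on it and seeing whether every variable ends up leading a row. The following sketch (ours; the text does not present code) does this with exact rational arithmetic, using only the row operations of the first section.

```python
from fractions import Fraction

def is_nonsingular(matrix):
    # Row-reduce a square matrix; it is nonsingular exactly when every
    # column gets a leading entry, i.e. no variable is left free.
    m = [[Fraction(e) for e in row] for row in matrix]
    n = len(m)
    for col in range(n):
        # find a row at or below position `col` with a nonzero entry here
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return False                       # this variable is free: singular
        m[col], m[pivot] = m[pivot], m[col]    # row swap
        for r in range(col + 1, n):            # eliminate below the leading entry
            factor = m[r][col] / m[col][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[col])]
    return True

assert is_nonsingular([[1, 2], [3, 4]])        # Example 3.12, first matrix
assert not is_nonsingular([[1, 2], [3, 6]])    # second matrix: singular
```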
  37. 37. 28 Chapter One. Linear SystemsWe have made the distinction in the definition because a system with the samenumber of equations as variables behaves in one of two ways, depending onwhether its matrix of coefficients is nonsingular or singular. A system where thematrix of coefficients is nonsingular has a unique solution for any constants onthe right side: for instance, Gauss’s Method shows that this systemx + 2y = a3x + 4y = bhas the unique solution x = b − 2a and y = (3a − b)/2. On the other hand, asystem where the matrix of coefficients is singular never has a unique solution —it has either no solutions or else has infinitely many, as with these.x + 2y = 13x + 6y = 2x + 2y = 13x + 6y = 3We use the word singular because it means “departing from general expecta-tion” and people often, naively, expect that systems with the same number ofvariables as equations will have a unique solution. Thus, we can think of theword as connoting “troublesome,” or at least “not ideal.” (That ‘singular’ appliesto those systems that do not have one solution is ironic, but it is the standardterm.)3.13 Example The systems from Example 3.3, Example 3.4, and Example 3.8each have an associated homogeneous system with a unique solution. Thus thesematrices are nonsingular.3 42 −13 2 16 −4 00 1 11 2 −12 4 00 1 −3The Chemistry problem from Example 3.5 is a homogeneous system with morethan one solution so its matrix is singular.7 0 −7 08 1 −5 −20 1 −3 00 3 −6 −1The above table has two dimensions. We have considered the one on top: wecan tell into which column a given linear system goes solely by considering thesystem’s left-hand side — the constants on the right-hand side play no role inthis factor.The table’s other dimension, determining whether a particular solution exists,is tougher. Consider these two3x + 2y = 53x + 2y = 53x + 2y = 53x + 2y = 4
with the same left sides but different right sides. The first has a solution while the second does not, so here the constants on the right side decide if the system has a solution. We could conjecture that the left side of a linear system determines the number of solutions while the right side determines if solutions exist but that guess is not correct. Compare these two systems

    3x + 2y = 5        3x + 2y = 5
    4x + 2y = 4        3x + 2y = 4

with the same right sides but different left sides. The first has a solution but the second does not. Thus the constants on the right side of a system don't decide alone whether a solution exists; rather, it depends on some interaction between the left and right sides.

For some intuition about that interaction, consider this system with one of the coefficients left as the parameter c.

     x + 2y + 3z = 1
     x +  y +  z = 1
    cx + 3y + 4z = 0

If c = 2 then this system has no solution because the left-hand side has the third row as a sum of the first two, while the right-hand side does not. If c ≠ 2 then this system has a unique solution (try it with c = 1). For a system to have a solution, if one row of the matrix of coefficients on the left is a linear combination of other rows, then on the right the constant from that row must be the same combination of constants from the same rows.

More intuition about the interaction comes from studying linear combinations. That will be our focus in the second chapter, after we finish the study of Gauss's Method itself in the rest of this chapter.

Exercises

3.14 Solve each system. Express the solution set using vectors. Identify the particular solution and the solution set of the homogeneous system.
(These systems alsoappear in Exercise 18.)(a) 3x + 6y = 18x + 2y = 6(b) x + y = 1x − y = −1(c) x1 + x3 = 4x1 − x2 + 2x3 = 54x1 − x2 + 5x3 = 17(d) 2a + b − c = 22a + c = 3a − b = 0(e) x + 2y − z = 32x + y + w = 4x − y + z + w = 1(f) x + z + w = 42x + y − w = 23x + y + z = 73.15 Solve each system, giving the solution set in vector notation. Identify theparticular solution and the solution of the homogeneous system.(a) 2x + y − z = 14x − y = 3(b) x − z = 1y + 2z − w = 3x + 2y + 3z − w = 7(c) x − y + z = 0y + w = 03x − 2y + 3z + w = 0−y − w = 0(d) a + 2b + 3c + d − e = 13a − b + c + d + e = 3
  39. 39. 30 Chapter One. Linear Systems3.16 For the system2x − y − w = 3y + z + 2w = 2x − 2y − z = −1which of these can be used as the particular solution part of some general solu-tion?(a)0−350 (b)2110 (c)−1−48−13.17 Lemma 3.7 says that we can use any particular solution for p. Find, if possible,a general solution to this systemx − y + w = 42x + 3y − z = 0y + z + w = 4that uses the given vector as its particular solution.(a)0004 (b)−51−710 (c)2−1113.18 One is nonsingular while the other is singular. Which is which?(a)1 34 −12(b)1 34 123.19 Singular or nonsingular?(a)1 21 3(b)1 2−3 −6(c)1 2 11 3 1(Careful!)(d)1 2 11 1 33 4 7 (e)2 2 11 0 5−1 1 43.20 Is the given vector in the set generated by the given set?(a)23, {14,15}(b)−101 , {210 ,101}(c)130 , {104 ,215 ,330 ,421}(d)1011 , {2101 ,3002}3.21 Prove that any linear system with a nonsingular matrix of coefficients has asolution, and that the solution is unique.3.22 In the proof of Lemma 3.6, what happens if there are no non-‘0 = 0’ equations?3.23 Prove that if s and t satisfy a homogeneous system then so do these vec-tors.
  40. 40. Section I. Solving Linear Systems 31(a) s + t (b) 3s (c) ks + mt for k, m ∈ RWhat’s wrong with this argument: “These three show that if a homogeneous systemhas one solution then it has many solutions — any multiple of a solution is anothersolution, and any sum of solutions is a solution also — so there are no homogeneoussystems with exactly one solution.”?3.24 Prove that if a system with only rational coefficients and constants has asolution then it has at least one all-rational solution. Must it have infinitely many?
  41. 41. 32 Chapter One. Linear SystemsII Linear GeometryIf you have seen the elements of vectors before then this section is anoptional review. However, later work will refer to this material so if this isnot a review then it is not optional.In the first section, we had to do a bit of work to show that there are onlythree types of solution sets — singleton, empty, and infinite. But in the specialcase of systems with two equations and two unknowns this is easy to see with apicture. Draw each two-unknowns equation as a line in the plane and then thetwo lines could have a unique intersection, be parallel, or be the same line.Unique solution3x + 2y = 7x − y = −1No solutions3x + 2y = 73x + 2y = 4Infinitely manysolutions3x + 2y = 76x + 4y = 14These pictures aren’t a short way to prove the results from the prior section,because those apply to any number of linear equations and any number ofunknowns. But they do help us understand those results. This section developsthe ideas that we need to express our results geometrically. In particular, whilethe two-dimensional case is familiar, to extend to systems with more than twounknowns we shall need some higher-dimensional geometry.II.1 Vectors in Space“Higher-dimensional geometry” sounds exotic. It is exotic — interesting andeye-opening. But it isn’t distant or unreachable.We begin by defining one-dimensional space to be R1. To see that thedefinition is reasonable, we picture a one-dimensional spaceand make a correspondence with R by picking a point to label 0 and another tolabel 1.0 1Now, with a scale and a direction, finding the point corresponding to, say, +2.17,is easy — start at 0 and head in the direction of 1, but don’t stop there, go 2.17times as far.The basic idea here, combining magnitude with direction, is the key toextending to higher dimensions.
  42. 42. Section II. Linear Geometry 33An object comprised of a magnitude and a direction is a vector (we use thesame word as in the prior section because we shall show below how to describesuch an object with a column vector). We can draw a vector as having somelength, and pointing somewhere.There is a subtlety here — these vectorsare equal, even though they start in different places, because they have equallengths and equal directions. Again: those vectors are not just alike, they areequal.How can things that are in different places be equal? Think of a vector asrepresenting a displacement (the word vector is Latin for “carrier” or “traveler”).These two squares undergo equal displacements, despite that those displacementsstart in different places.Sometimes, to emphasize this property vectors have of not being anchored, wecan refer to them as free vectors. Thus, these free vectors are equal as each is adisplacement of one over and two up.More generally, vectors in the plane are the same if and only if they have thesame change in first components and the same change in second components: thevector extending from (a1, a2) to (b1, b2) equals the vector from (c1, c2) to(d1, d2) if and only if b1 − a1 = d1 − c1 and b2 − a2 = d2 − c2.Saying ‘the vector that, were it to start at (a1, a2), would extend to (b1, b2)’would be unwieldy. We instead describe that vector asb1 − a1b2 − a2so that the ‘one over and two up’ arrows shown above picture this vector.12We often draw the arrow as starting at the origin, and we then say it is in thecanonical position (or natural position or standard position). When the
  43. 43. 34 Chapter One. Linear Systemsvectorv1v2is in canonical position then it extends to the endpoint (v1, v2).We typically just refer to “the point12”rather than “the endpoint of the canonical position of” that vector. Thus, wewill call each of these R2.{(x1, x2) x1, x2 ∈ R} {x1x2x1, x2 ∈ R}In the prior section we defined vectors and vector operations with an algebraicmotivation;r ·v1v2=rv1rv2v1v2+w1w2=v1 + w1v2 + w2we can now understand those operations geometrically. For instance, ifv represents a displacement then 3v represents a displacement in the samedirection but three times as far, and −1v represents a displacement of the samedistance as v but in the opposite direction.v−v3vAnd, where v and w represent displacements, v+w represents those displacementscombined.vwv + wThe long arrow is the combined displacement in this sense: if, in one minute, aship’s motion gives it the displacement relative to the earth of v and a passenger’smotion gives a displacement relative to the ship’s deck of w, then v + w is thedisplacement of the passenger relative to the earth.Another way to understand the vector sum is with the parallelogram rule.Draw the parallelogram formed by the vectors v and w. Then the sum v + wextends along the diagonal to the far corner.v + wvw
The above drawings show how vectors and vector operations behave in $\mathbb{R}^2$. We can extend to $\mathbb{R}^3$, or to even higher-dimensional spaces where we have no pictures, with the obvious generalization: the free vector that, if it starts at $(a_1, \ldots, a_n)$, ends at $(b_1, \ldots, b_n)$, is represented by this column.

    $\begin{pmatrix} b_1 - a_1 \\ \vdots \\ b_n - a_n \end{pmatrix}$

Vectors are equal if they have the same representation. We aren't too careful about distinguishing between a point and the vector whose canonical representation ends at that point.

    $\mathbb{R}^n = \{\, \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} \mid v_1, \ldots, v_n \in \mathbb{R} \,\}$

And, we do addition and scalar multiplication component-wise.

Having considered points, we now turn to lines. In $\mathbb{R}^2$, the line through $(1, 2)$ and $(3, 1)$ is comprised of (the endpoints of) the vectors in this set.

    $\{\, \begin{pmatrix} 1 \\ 2 \end{pmatrix} + t \begin{pmatrix} 2 \\ -1 \end{pmatrix} \mid t \in \mathbb{R} \,\}$

The vector associated with the parameter $t$

    $\begin{pmatrix} 2 \\ -1 \end{pmatrix} = \begin{pmatrix} 3 \\ 1 \end{pmatrix} - \begin{pmatrix} 1 \\ 2 \end{pmatrix}$

has its whole body in the line; it is a direction vector for the line. Note that points on the line to the left of $x = 1$ are described using negative values of $t$.

In $\mathbb{R}^3$, the line through $(1, 2, 1)$ and $(2, 3, 2)$ is the set of (endpoints of) vectors of this form

    $\{\, \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} + t \cdot \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \mid t \in \mathbb{R} \,\}$
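A short sketch (in Python; the function name is ours) shows the parametric description of a line through two points, using the book's $\mathbb{R}^3$ example: $t = 0$ gives one point, $t = 1$ gives the other, and negative $t$ continues the line in the opposite direction.

```python
# Parametric line through points p and q: the endpoint of p + t*(q - p).
def line_point(p, q, t):
    """Return the point on the line through p and q at parameter t."""
    return tuple(pi + t * (qi - pi) for pi, qi in zip(p, q))

p, q = (1, 2, 1), (2, 3, 2)
print(line_point(p, q, 0))   # (1, 2, 1): t = 0 gives the first point
print(line_point(p, q, 1))   # (2, 3, 2): t = 1 gives the second point
print(line_point(p, q, -1))  # (0, 1, 0): negative t goes the other way
```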
and lines in even higher-dimensional spaces work in the same way.

In $\mathbb{R}^3$, a line uses one parameter so that a particle on that line is free to move back and forth in one dimension, and a plane involves two parameters. For example, the plane through the points $(1, 0, 5)$, $(2, 1, -3)$, and $(-2, 4, 0.5)$ consists of (endpoints of) the vectors in

    $\{\, \begin{pmatrix} 1 \\ 0 \\ 5 \end{pmatrix} + t \begin{pmatrix} 1 \\ 1 \\ -8 \end{pmatrix} + s \begin{pmatrix} -3 \\ 4 \\ -4.5 \end{pmatrix} \mid t, s \in \mathbb{R} \,\}$

(the column vectors associated with the parameters

    $\begin{pmatrix} 1 \\ 1 \\ -8 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ -3 \end{pmatrix} - \begin{pmatrix} 1 \\ 0 \\ 5 \end{pmatrix} \qquad \begin{pmatrix} -3 \\ 4 \\ -4.5 \end{pmatrix} = \begin{pmatrix} -2 \\ 4 \\ 0.5 \end{pmatrix} - \begin{pmatrix} 1 \\ 0 \\ 5 \end{pmatrix}$

are two vectors whose whole bodies lie in the plane). As with the line, note that we describe some points in this plane with negative $t$'s or negative $s$'s or both.

In algebra and calculus we often use a description of planes involving a single equation as the condition that describes the relationship among the first, second, and third coordinates of points in a plane.

    $P = \{\, \begin{pmatrix} x \\ y \\ z \end{pmatrix} \mid 2x + y + z = 4 \,\}$

The translation from such a description to the vector description that we favor in this book is to think of the condition as a one-equation linear system and parametrize $x = 2 - y/2 - z/2$.

    $P = \{\, \begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix} + y \cdot \begin{pmatrix} -1/2 \\ 1 \\ 0 \end{pmatrix} + z \cdot \begin{pmatrix} -1/2 \\ 0 \\ 1 \end{pmatrix} \mid y, z \in \mathbb{R} \,\}$

Generalizing, a set of the form $\{\, \vec{p} + t_1\vec{v}_1 + t_2\vec{v}_2 + \cdots + t_k\vec{v}_k \mid t_1, \ldots, t_k \in \mathbb{R} \,\}$ where $\vec{v}_1, \ldots, \vec{v}_k \in \mathbb{R}^n$ and $k \leq n$ is a $k$-dimensional linear surface (or $k$-flat). For example, in $\mathbb{R}^4$

    $\{\, \begin{pmatrix} 2 \\ \pi \\ 3 \\ -0.5 \end{pmatrix} + t \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} \mid t \in \mathbb{R} \,\}$
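The translation from the single-equation description to the vector description can be checked numerically; this sketch (in Python; the function name is ours) uses the book's plane $2x + y + z = 4$ and verifies that every parametrized point satisfies the equation.

```python
# The book's parametrization of P = {(x, y, z) : 2x + y + z = 4}:
# points (2, 0, 0) + y*(-1/2, 1, 0) + z*(-1/2, 0, 1), i.e. x = 2 - y/2 - z/2.
def plane_point(y, z):
    """Return the point of P with the given free-variable values y and z."""
    return (2 - y / 2 - z / 2, y, z)

for y, z in [(0, 0), (1, 3), (-2, 5)]:
    x, yy, zz = plane_point(y, z)
    assert 2 * x + yy + zz == 4  # the parametrized point satisfies the equation
print("all sampled points satisfy 2x + y + z = 4")
```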
is a line,

    $\{\, \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} + t \begin{pmatrix} 1 \\ 1 \\ 0 \\ -1 \end{pmatrix} + s \begin{pmatrix} 2 \\ 0 \\ 1 \\ 0 \end{pmatrix} \mid t, s \in \mathbb{R} \,\}$

is a plane, and

    $\{\, \begin{pmatrix} 3 \\ 1 \\ -2 \\ 0.5 \end{pmatrix} + r \begin{pmatrix} 0 \\ 0 \\ 0 \\ -1 \end{pmatrix} + s \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix} + t \begin{pmatrix} 2 \\ 0 \\ 1 \\ 0 \end{pmatrix} \mid r, s, t \in \mathbb{R} \,\}$

is a three-dimensional linear surface. Again, the intuition is that a line permits motion in one direction, a plane permits motion in combinations of two directions, etc. (When the dimension of the linear surface is one less than the dimension of the space, that is, when we have an $(n-1)$-flat in $\mathbb{R}^n$, then the surface is called a hyperplane.)

A description of a linear surface can be misleading about the dimension. For example, this

    $L = \{\, \begin{pmatrix} 1 \\ 0 \\ -1 \\ -2 \end{pmatrix} + t \begin{pmatrix} 1 \\ 1 \\ 0 \\ -1 \end{pmatrix} + s \begin{pmatrix} 2 \\ 2 \\ 0 \\ -2 \end{pmatrix} \mid t, s \in \mathbb{R} \,\}$

is a degenerate plane because it is actually a line: the two direction vectors are multiples of each other, so we can merge them into one.

    $L = \{\, \begin{pmatrix} 1 \\ 0 \\ -1 \\ -2 \end{pmatrix} + r \begin{pmatrix} 1 \\ 1 \\ 0 \\ -1 \end{pmatrix} \mid r \in \mathbb{R} \,\}$

We shall see in the Linear Independence section of Chapter Two what relationships among vectors cause the linear surface they generate to be degenerate.

We finish this subsection by restating our conclusions from earlier in geometric terms. First, the solution set of a linear system with $n$ unknowns is a linear surface in $\mathbb{R}^n$. Specifically, it is a $k$-dimensional linear surface, where $k$ is the number of free variables in an echelon form version of the system. Second, the solution set of a homogeneous linear system is a linear surface passing through the origin. Finally, we can view the general solution set of any linear system as the solution set of its associated homogeneous system, offset from the origin by a vector, namely by any particular solution.

Exercises
1.1 Find the canonical name for each vector.
(a) the vector from $(2, 1)$ to $(4, 2)$ in $\mathbb{R}^2$
(b) the vector from $(3, 3)$ to $(2, 5)$ in $\mathbb{R}^2$
(c) the vector from $(1, 0, 6)$ to $(5, 0, 3)$ in $\mathbb{R}^3$
(d) the vector from $(6, 8, 8)$ to $(6, 8, 8)$ in $\mathbb{R}^3$
1.2 Decide if the two vectors are equal.
(a) the vector from $(5, 3)$ to $(6, 2)$ and the vector from $(1, -2)$ to $(1, 1)$
(b) the vector from $(2, 1, 1)$ to $(3, 0, 4)$ and the vector from $(5, 1, 4)$ to $(6, 0, 7)$
1.3 Does $(1, 0, 2, 1)$ lie on the line through $(-2, 1, 1, 0)$ and $(5, 10, -1, 4)$?
1.4 (a) Describe the plane through $(1, 1, 5, -1)$, $(2, 2, 2, 0)$, and $(3, 1, 0, 4)$.
(b) Is the origin in that plane?
1.5 Describe the plane that contains this point and line.

    $\begin{pmatrix} 2 \\ 0 \\ 3 \end{pmatrix} \qquad \{\, \begin{pmatrix} -1 \\ 0 \\ -4 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} t \mid t \in \mathbb{R} \,\}$

1.6 Intersect these planes.

    $\{\, \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} t + \begin{pmatrix} 0 \\ 1 \\ 3 \end{pmatrix} s \mid t, s \in \mathbb{R} \,\} \qquad \{\, \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 3 \\ 0 \end{pmatrix} k + \begin{pmatrix} 2 \\ 0 \\ 4 \end{pmatrix} m \mid k, m \in \mathbb{R} \,\}$

1.7 Intersect each pair, if possible.
(a) $\{\, \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} + t \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} \mid t \in \mathbb{R} \,\}$, $\{\, \begin{pmatrix} 1 \\ 3 \\ -2 \end{pmatrix} + s \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix} \mid s \in \mathbb{R} \,\}$
(b) $\{\, \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} + t \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix} \mid t \in \mathbb{R} \,\}$, $\{\, s \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix} + w \begin{pmatrix} 0 \\ 4 \\ 1 \end{pmatrix} \mid s, w \in \mathbb{R} \,\}$
1.8 When a plane does not pass through the origin, performing operations on vectors whose bodies lie in it is more complicated than when the plane passes through the origin. Consider the picture in this subsection of the plane

    $\{\, \begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} -0.5 \\ 1 \\ 0 \end{pmatrix} y + \begin{pmatrix} -0.5 \\ 0 \\ 1 \end{pmatrix} z \mid y, z \in \mathbb{R} \,\}$

and the three vectors with endpoints $(2, 0, 0)$, $(1.5, 1, 0)$, and $(1.5, 0, 1)$.
(a) Redraw the picture, including the vector in the plane that is twice as long as the one with endpoint $(1.5, 1, 0)$. The endpoint of your vector is not $(3, 2, 0)$; what is it?
(b) Redraw the picture, including the parallelogram in the plane that shows the sum of the vectors ending at $(1.5, 0, 1)$ and $(1.5, 1, 0)$. The endpoint of the sum, on the diagonal, is not $(3, 1, 1)$; what is it?
1.9 Show that the line segments $(a_1, a_2)(b_1, b_2)$ and $(c_1, c_2)(d_1, d_2)$ have the same lengths and slopes if $b_1 - a_1 = d_1 - c_1$ and $b_2 - a_2 = d_2 - c_2$. Is that only if?
1.10 How should we define $\mathbb{R}^0$?
? 1.11 [Math. Mag., Jan. 1957] A person traveling eastward at a rate of 3 miles per hour finds that the wind appears to blow directly from the north.
On doubling his speed it appears to come from the northeast. What was the wind's velocity?
1.12 Euclid describes a plane as "a surface which lies evenly with the straight lines on itself". Commentators such as Heron have interpreted this to mean, "(A plane surface is) such that, if a straight line pass through two points on it, the line coincides wholly with it at every spot, all ways". (Translations from [Heath], pp. 171-172.) Do planes, as described in this section, have that property? Does this description adequately define planes?
II.2 Length and Angle Measures

We've translated the first section's results about solution sets into geometric terms, to better understand those sets. But we must be careful not to be misled by our own terms: labeling subsets of $\mathbb{R}^k$ of the forms $\{\, \vec{p} + t\vec{v} \mid t \in \mathbb{R} \,\}$ and $\{\, \vec{p} + t\vec{v} + s\vec{w} \mid t, s \in \mathbb{R} \,\}$ as 'lines' and 'planes' doesn't make them act like the lines and planes of our past experience. Rather, we must ensure that the names suit the sets. While we can't prove that the sets satisfy our intuition (we can't prove anything about intuition), in this subsection we'll observe that a result familiar from $\mathbb{R}^2$ and $\mathbb{R}^3$, when generalized to arbitrary $\mathbb{R}^n$, supports the idea that a line is straight and a plane is flat. Specifically, we'll see how to do Euclidean geometry in a "plane" by giving a definition of the angle between two $\mathbb{R}^n$ vectors in the plane that they generate.

2.1 Definition The length (or norm) of a vector $\vec{v} \in \mathbb{R}^n$ is the square root of the sum of the squares of its components.

    $\|\vec{v}\| = \sqrt{v_1^2 + \cdots + v_n^2}$

2.2 Remark This is a natural generalization of the Pythagorean Theorem. A classic discussion is in [Polya].

Note that for any nonzero $\vec{v}$, the vector $\vec{v}/\|\vec{v}\|$ has length one. We say that the second vector normalizes $\vec{v}$ to length one.

We can use that to get a formula for the angle between two vectors. Consider two vectors in $\mathbb{R}^3$ where neither is a multiple of the other (the special case of multiples will prove below not to be an exception). They determine a two-dimensional plane; for instance, put them in canonical position and take the plane formed by the origin and the endpoints. In that plane consider the triangle with sides $\vec{u}$, $\vec{v}$, and $\vec{u} - \vec{v}$.

Apply the Law of Cosines: $\|\vec{u} - \vec{v}\|^2 = \|\vec{u}\|^2 + \|\vec{v}\|^2 - 2\,\|\vec{u}\|\,\|\vec{v}\|\cos\theta$, where $\theta$
is the angle between the vectors. The left side gives

    $(u_1 - v_1)^2 + (u_2 - v_2)^2 + (u_3 - v_3)^2 = (u_1^2 - 2u_1v_1 + v_1^2) + (u_2^2 - 2u_2v_2 + v_2^2) + (u_3^2 - 2u_3v_3 + v_3^2)$

while the right side gives this.

    $(u_1^2 + u_2^2 + u_3^2) + (v_1^2 + v_2^2 + v_3^2) - 2\,\|\vec{u}\|\,\|\vec{v}\|\cos\theta$

Canceling squares $u_1^2, \ldots, v_3^2$ and dividing by 2 gives the formula.

    $\theta = \arccos\left( \frac{u_1v_1 + u_2v_2 + u_3v_3}{\|\vec{u}\|\,\|\vec{v}\|} \right)$

To give a definition of angle that works in higher dimensions we cannot draw pictures but we can make the argument analytically. First, the form of the numerator is clear: it comes from the middle terms of $(u_i - v_i)^2$.

2.3 Definition The dot product (or inner product or scalar product) of two $n$-component real vectors is the linear combination of their components.

    $\vec{u} \cdot \vec{v} = u_1v_1 + u_2v_2 + \cdots + u_nv_n$

Note that the dot product of two vectors is a real number, not a vector, and that the dot product of a vector from $\mathbb{R}^n$ with a vector from $\mathbb{R}^m$ is not defined unless $n$ equals $m$. Note also this relationship between dot product and length: $\vec{u} \cdot \vec{u} = u_1u_1 + \cdots + u_nu_n = \|\vec{u}\|^2$.

2.4 Remark Some authors require that the first vector be a row vector and that the second vector be a column vector. We shall not be that strict and will allow the dot product operation between two column vectors.

Still reasoning with letters but guided by the pictures, we use the next theorem to argue that the triangle formed by $\vec{u}$, $\vec{v}$, and $\vec{u} - \vec{v}$ in $\mathbb{R}^n$ lies in the planar subset of $\mathbb{R}^n$ generated by $\vec{u}$ and $\vec{v}$.

2.5 Theorem (Triangle Inequality) For any $\vec{u}, \vec{v} \in \mathbb{R}^n$,

    $\|\vec{u} + \vec{v}\| \leq \|\vec{u}\| + \|\vec{v}\|$

with equality if and only if one of the vectors is a nonnegative scalar multiple of the other one.

This is the source of the familiar saying, "The shortest distance between two points is in a straight line."
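The dot product, the norm, and the angle formula can be sketched together in Python (a minimal illustration; the helper names are ours, not the book's), along with a spot-check of the Triangle Inequality.

```python
import math

# Dot product, norm, and angle, as defined in this subsection.
def dot(u, v):
    """u . v = u1*v1 + ... + un*vn"""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    """||v|| = sqrt(v . v)"""
    return math.sqrt(dot(v, v))

def angle(u, v):
    """theta = arccos( (u . v) / (||u|| ||v||) ), in radians."""
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

u, v = (1, 0, 0), (1, 1, 0)
print(math.degrees(angle(u, v)))  # approximately 45 degrees

# Triangle Inequality: ||u + v|| <= ||u|| + ||v||
s = tuple(ui + vi for ui, vi in zip(u, v))
print(norm(s) <= norm(u) + norm(v))  # True
```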