Chapter 3 Vector Spaces
• 3.1 Vectors in R^n
• 3.2 Vector Spaces
• 3.3 Subspaces of Vector Spaces
• 3.4 Spanning Sets and Linear Independence
• 3.5 Basis and Dimension
• 3.6 Rank of a Matrix and Systems of Linear Equations
• 3.7 Coordinates and Change of Basis
The idea of vectors dates back to the early 1800s, but the generality of the
concept waited until Peano's work in 1888. It took many years to understand
the importance and extent of the ideas involved.
The underlying idea can be used to describe the forces and accelerations in
Newtonian mechanics, the potential functions of electromagnetism, the
states of systems in quantum mechanics, the least-squares fitting of
experimental data, and much more.
2.
3.1 Vectors in R^n
The idea of a vector is far more general than the picture of a line with
an arrowhead attached to its end.
A short answer is: "A vector is an element of a vector space."
A vector in R^n is denoted as an ordered n-tuple (x1, x2, ..., xn),
a sequence of n real numbers.
• n-space: R^n is defined to be the set of all ordered n-tuples.
(1) An n-tuple (x1, x2, ..., xn) can be viewed as a point in R^n with
the xi's as its coordinates.
(2) An n-tuple (x1, x2, ..., xn) can be viewed as a vector
x = (x1, x2, ..., xn) in R^n with the xi's as its components.
• Ex: the same n-tuple can be pictured either as a point or as a vector.
3.
Note:
A vector space is some set of things for which the operation of
addition and the operation of multiplication by a scalar are defined.
You don't necessarily have to be able to multiply two vectors by each
other, or even to be able to define the length of a vector, though those are
very useful operations.
The common example of directed line segments (arrows) in 2D or 3D
fits this idea, because you can add such arrows by the parallelogram
law and you can multiply them by numbers, changing their
length (and reversing direction for negative numbers).
4.
• A complete definition of a vector space requires
pinning down these properties of the operators and
making the concept of vector space less vague.
A vector space is a set whose elements are called "vectors",
and such that there are two operations defined on them:
you can add vectors to each other and you can multiply them
by scalars (numbers). These operations must obey certain
simple rules, the axioms for a vector space.
5.
Let u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) be two vectors in R^n.
• Equal:
u = v if and only if u1 = v1, u2 = v2, ..., un = vn.
• Vector addition (the sum of u and v):
u + v = (u1 + v1, u2 + v2, ..., un + vn)
• Scalar multiplication (the scalar multiple of u by c):
cu = (cu1, cu2, ..., cun)
6.
• Negative:
-u = (-1)u = (-u1, -u2, ..., -un)
• Difference:
u - v = u + (-1)v = (u1 - v1, u2 - v2, ..., un - vn)
• Zero vector:
0 = (0, 0, ..., 0)
Notes:
(1) The zero vector 0 in R^n is called the additive
identity in R^n.
(2) The vector -v is called the additive inverse
of v.
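These componentwise operations can be sketched with NumPy; the two sample vectors below are an assumption for illustration, not part of the notes:

```python
import numpy as np

u = np.array([2.0, -1.0, 5.0, 0.0])
v = np.array([4.0, 3.0, 1.0, -1.0])

s = u + v            # vector addition: (u1+v1, ..., un+vn)
m = 3 * u            # scalar multiplication: (3u1, ..., 3un)
neg = -u             # additive inverse, i.e. (-1)*u
zero = np.zeros(4)   # additive identity

assert np.allclose(u + zero, u)    # 0 is the additive identity
assert np.allclose(u + neg, zero)  # -u is the additive inverse of u
```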
7.
Thm 3.1: (The axioms for a vector space)
Let v1, v2, and v3 be vectors in R^n, and let α and β be scalars.
1. There is a function, addition of vectors, denoted +, so that v1 + v2 is another vector.
2. There is a function, multiplication by scalars, denoted by juxtaposition, so that αv1 is a vector.
3. (v1 + v2) + v3 = v1 + (v2 + v3) (the associative law).
4. There is a zero vector, so that for each v, v + 0 = v.
5. There is an additive inverse for each vector, so that for each v there is another vector v' with v + v' = 0.
6. The commutative law of addition holds: v1 + v2 = v2 + v1.
7. (α + β)v = αv + βv.
8. (αβ)v = α(βv).
9. α(v1 + v2) = αv1 + αv2.
10. 1v = v.
8.
• Ex: (Vector operations in R^4)
Let u = (2, -1, 5, 0), v = (4, 3, 1, -1), and w = (-6, 2, 0, 3) be
vectors in R^4.
Solve for x in 3(x + w) = 2u - v + x.
Sol:
3(x + w) = 2u - v + x
3x + 3w = 2u - v + x
3x - x = 2u - v - 3w
2x = 2u - v - 3w
x = u - (1/2)v - (3/2)w
  = (2, -1, 5, 0) + (-2, -3/2, -1/2, 1/2) + (9, -3, 0, -9/2)
  = (9, -11/2, 9/2, -4)
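The algebra above can be checked numerically; this NumPy sketch just verifies that the solved x satisfies the original equation:

```python
import numpy as np

u = np.array([2.0, -1.0, 5.0, 0.0])
v = np.array([4.0, 3.0, 1.0, -1.0])
w = np.array([-6.0, 2.0, 0.0, 3.0])

# 3(x + w) = 2u - v + x  =>  2x = 2u - v - 3w
x = u - v / 2 - 3 * w / 2

assert np.allclose(3 * (x + w), 2 * u - v + x)  # the original equation holds
```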
9.
• Thm 3.2: (Properties of additive identity and additive inverse)
Let v be a vector in R^n and c a scalar. Then the following are true.
(1) The additive identity is unique. That is, if u + v = v, then u = 0.
(2) The additive inverse of v is unique. That is, if v + u = 0, then u = -v.
10.
• Thm 3.3: (Properties of scalar multiplication)
Let v be any element of a vector space V, and let c be
any scalar. Then the following properties are true.
(1) 0v = 0
(2) c0 = 0
(3) If cv = 0, then c = 0 or v = 0
(4) (-1)v = -v and -(-v) = v
11.
Notes:
A vector u = (u1, u2, ..., un) in R^n can be viewed as:
an n×1 column matrix (column vector): u = [u1; u2; ...; un]
or
a 1×n row matrix (row vector): u = [u1, u2, ..., un]
(The matrix operations of addition and scalar multiplication
give the same results as the corresponding vector operations.)
12.
Vector addition:
u + v = (u1, u2, ..., un) + (v1, v2, ..., vn) = (u1 + v1, u2 + v2, ..., un + vn)
Scalar multiplication:
cu = c(u1, u2, ..., un) = (cu1, cu2, ..., cun)
Matrix algebra (row-vector form):
u + v = [u1, u2, ..., un] + [v1, v2, ..., vn] = [u1 + v1, u2 + v2, ..., un + vn]
cu = c[u1, u2, ..., un] = [cu1, cu2, ..., cun]
The same holds in the column-vector form.
13.
Notes:
(1) A vector space consists of four entities:
a set of vectors, a set of scalars, and two operations.
V: a nonempty set of vectors
c: a scalar
vector addition: +(u, v) = u + v
scalar multiplication: •(c, u) = cu
(V, +, •) is called a vector space.
(2) V = {0}: the zero vector space, containing only the additive identity.
14.
• Examples of vector spaces:
(1) n-tuple space: R^n
(u1, u2, ..., un) + (v1, v2, ..., vn) = (u1 + v1, u2 + v2, ..., un + vn)  (vector addition)
α(u1, u2, ..., un) = (αu1, αu2, ..., αun)  (scalar multiplication)
(2) Matrix space: V = Mmxn (the set of all m×n matrices with real entries),
with the usual matrix operations. For example, in M2x2:
[u11 u12; u21 u22] + [v11 v12; v21 v22] = [u11+v11 u12+v12; u21+v21 u22+v22]  (vector addition)
α[u11 u12; u21 u22] = [αu11 αu12; αu21 αu22]  (scalar multiplication)
15.
(3) n-th degree polynomial space: V = Pn(x)
(the set of all real polynomials of degree n or less)
p(x) + q(x) = (a0 + b0) + (a1 + b1)x + ... + (an + bn)x^n
(4) Function space: the set of square-integrable real-valued functions of a
real variable on the domain [a ≤ x ≤ b], that is, those functions with
∫ |f(x)|^2 dx < ∞.
If f and g are square-integrable, simply note the combination
|f(x) + g(x)|^2 ≤ 2|f(x)|^2 + 2|g(x)|^2,
so f + g is square-integrable too and axiom 1 is satisfied. You can verify
that the other 9 axioms are also satisfied.
16.
• Function spaces:
The set of real-valued functions of a real variable, defined on the domain
[a ≤ x ≤ b]. Addition is defined pointwise: if f1 and f2 are functions, then
the value of the function f1 + f2 at the point x is the number f1(x) + f2(x).
That is, f3 = f1 + f2 means f3(x) = f1(x) + f2(x). Similarly, multiplication
by a scalar is defined as (αf)(x) = α(f(x)). Notice a small confusion of
notation in this last expression: the first multiplication, (αf), multiplies
the scalar α by the vector f; the second multiplies the scalar α by the
number f(x).
Is this a vector space? How can a function be a vector? This comes down to
your understanding of the word "function." Is f(x) a function or is f(x) a
number? Answer: it's a number. This is a confusion caused by the
conventional notation for functions. We routinely call f(x) a function, but
it is really the result of feeding the particular value x to the function f
in order to get the number f(x).
Think of the function f as the whole graph relating input to output; the
pair (x, f(x)) is then just one point on the graph. Adding two functions is
adding their graphs.
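The pointwise definitions above can be sketched in plain Python; the helper names `add` and `scale` are assumptions for illustration, not part of the notes:

```python
# Treat functions as vectors: addition and scalar multiplication are pointwise.
def add(f, g):
    """(f + g)(x) = f(x) + g(x): the sum of two 'function vectors'."""
    return lambda x: f(x) + g(x)

def scale(a, f):
    """(a f)(x) = a * f(x): a scalar multiple of a function."""
    return lambda x: a * f(x)

f1 = lambda x: x * x
f2 = lambda x: 3 * x + 1

h = add(f1, scale(2.0, f2))        # h(x) = x^2 + 2(3x + 1)
assert h(2.0) == 4.0 + 2.0 * 7.0   # evaluating the sum is pointwise
```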
17.
Notes: To show that a set is not a vector space, you
need only find one axiom that is not satisfied.
• Ex 1: The set of all integers is not a vector space.
Pf: 1 ∈ Z and 1/2 is a scalar, but (1/2)(1) = 1/2 ∉ Z
(it is not closed under scalar multiplication).
• Ex 2: The set of all second-degree polynomials is not a vector space.
Pf: Let p(x) = x^2 and q(x) = -x^2 + x + 1.
Then p(x) + q(x) = x + 1 is not of second degree
(it is not closed under vector addition).
18.
3.3 Subspaces of Vector Spaces
• Subspace:
(V, +, •): a vector space
W ⊆ V, W ≠ ∅: a nonempty subset
(W, +, •): a vector space (under the operations of
addition and scalar multiplication defined in V)
⟹ W is a subspace of V
• Trivial subspaces:
Every vector space V has at least two subspaces.
(1) The zero vector space {0} is a subspace of V.
(2) V is a subspace of itself.
19.
• Thm 3.4: (Test for a subspace)
If W is a nonempty subset of a vector space V, then W is
a subspace of V if and only if the following conditions hold.
(1) If u and v are in W, then u + v is in W.  (Axiom 1)
(2) If u is in W and c is any scalar, then cu is in W.  (Axiom 2)
Theorem: If a subset of a vector space is closed under addition and multiplication by scalars, then it is
itself a vector space. This means that if you add two elements of this subset to each other they remain in the
subset, and multiplying any element of the subset by a scalar leaves it in the subset. It is a "subspace."
Proof: The assumption of the theorem is that axioms 1 and 2 are satisfied as regards the subset. That axioms 3
through 10 hold follows because the elements of the subset inherit their properties from the larger vector space of
which they are a part.
20.
• Ex: (A subspace of M2x2)
Let W be the set of all 2×2 symmetric matrices. Show
that W is a subspace of the vector space M2x2 with the
standard operations of matrix addition and scalar
multiplication.
Sol:
W ⊆ M2x2, and M2x2 is a vector space.
Let A1, A2 ∈ W (i.e., A1^T = A1, A2^T = A2). Then
(A1 + A2)^T = A1^T + A2^T = A1 + A2  ⟹  A1 + A2 ∈ W
For k ∈ R and A ∈ W: (kA)^T = kA^T = kA  ⟹  kA ∈ W
∴ W is a subspace of M2x2.
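The two closure conditions of Thm 3.4 can be spot-checked numerically; the two symmetric matrices below are assumed examples:

```python
import numpy as np

A1 = np.array([[1.0, 2.0], [2.0, 3.0]])    # symmetric
A2 = np.array([[0.0, -1.0], [-1.0, 5.0]])  # symmetric

S = A1 + A2   # closure under addition
K = 4.0 * A1  # closure under scalar multiplication

# the sum and any scalar multiple of symmetric matrices stay symmetric
assert np.array_equal(S, S.T)
assert np.array_equal(K, K.T)
```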
21.
• Ex: (Determining subspaces of R^3)
Which of the following subsets is a subspace of R^3?
(a) W = {(x1, x2, 1) | x1, x2 ∈ R}
(b) W = {(x1, x1 + x3, x3) | x1, x3 ∈ R}
Sol:
(a) Let v = (0, 0, 1) ∈ W. Then 2v = (0, 0, 2) ∉ W
(not closed under scalar multiplication).
∴ W is not a subspace of R^3.
(b) Let v = (v1, v1 + v3, v3) ∈ W and u = (u1, u1 + u3, u3) ∈ W. Then
v + u = (v1 + u1, (v1 + u1) + (v3 + u3), v3 + u3) ∈ W
and kv = (kv1, kv1 + kv3, kv3) ∈ W.
∴ W is a subspace of R^3.
22.
• Thm 3.5: (The intersection of two subspaces is a subspace)
If U and W are both subspaces of a vector
space V, then the intersection of U and W
(denoted by U ∩ W) is also a subspace of V.
Proof: follows directly from Thm 3.4.
23.
3.4 Spanning Sets and Linear Independence
• Linear combination:
A vector v in a vector space V is called a linear combination
of the vectors u1, u2, ..., uk in V if v can be written in the form
v = c1u1 + c2u2 + ... + ckuk,   c1, c2, ..., ck: scalars
• Ex:
Given v = (-1, -2, -2), u1 = (0, 1, 4), u2 = (-1, 1, 2),
and u3 = (3, 1, 2) in R^3, find a, b, and c such that v = au1 + bu2 + cu3.
Sol:
     -b + 3c = -1
 a +  b +  c = -2
4a + 2b + 2c = -2
⟹ a = 1, b = -2, c = -1
Thus v = u1 - 2u2 - u3.
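Finding the coefficients of a linear combination is just solving a linear system; a NumPy sketch of the example above:

```python
import numpy as np

# columns are u1, u2, u3; solve U @ (a, b, c) = v
U = np.array([[0.0, -1.0, 3.0],
              [1.0,  1.0, 1.0],
              [4.0,  2.0, 2.0]])
v = np.array([-1.0, -2.0, -2.0])

coeffs = np.linalg.solve(U, v)
assert np.allclose(coeffs, [1.0, -2.0, -1.0])  # v = u1 - 2u2 - u3
```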
24.
Ex: (Finding a linear combination)
v1 = (1, 2, 3), v2 = (0, 1, 2), v3 = (-1, 0, 1)
Prove that w = (1, 1, 1) is a linear combination of v1, v2, v3.
Sol: w = c1v1 + c2v2 + c3v3
(1, 1, 1) = c1(1, 2, 3) + c2(0, 1, 2) + c3(-1, 0, 1)
⟹  c1      -  c3 = 1
   2c1 + c2      = 1
   3c1 + 2c2 + c3 = 1
Gauss–Jordan elimination gives
[ 1  0 -1 | 1
  0  1  2 | -1
  0  0  0 | 0 ]
⟹ c1 = 1 + t, c2 = -1 - 2t, c3 = t
(this system has infinitely many solutions)
Taking t = 1 gives, for example, w = 2v1 - 3v2 + v3.
26.
• The span of a set: span(S)
If S = {v1, v2, ..., vk} is a set of vectors in a vector
space V, then the span of S is the set of all
linear combinations of the vectors in S:
span(S) = {c1v1 + c2v2 + ... + ckvk | ci ∈ R}
(the set of all linear combinations of vectors in S)
• A spanning set of a vector space:
If every vector in a given vector space V can be written as a
linear combination of vectors in a given set S, then S is
called a spanning set of the vector space V.
27.
• Notes:
S spans (generates) V ⟺ V is spanned (generated) by S ⟺ S is a spanning set of V
• Notes:
(1) span(∅) = {0}
(2) S ⊆ span(S)
(3) S1, S2 ⊆ V and S1 ⊆ S2 ⟹ span(S1) ⊆ span(S2)
28.
• Ex: (A spanning set for R^3)
Show that the set S = {v1, v2, v3} = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)}
spans R^3.
Sol:
We must determine whether an arbitrary vector u = (u1, u2, u3)
in R^3 can be written as a linear combination of v1, v2, and v3:
u = c1v1 + c2v2 + c3v3  ⟹
 c1       - 2c3 = u1
2c1 +  c2       = u2
3c1 + 2c2 +  c3 = u3
The problem thus reduces to determining whether this system
is consistent for all values of u1, u2, and u3.
∵ the coefficient matrix
A = [ 1  0 -2
      2  1  0
      3  2  1 ]
has det(A) ≠ 0,
∴ Ac = u has exactly one solution c for every u, and
span(S) = R^3.
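The determinant argument can be confirmed numerically; the target vector u below is an arbitrary assumed example:

```python
import numpy as np

# columns are v1, v2, v3
A = np.array([[1.0, 0.0, -2.0],
              [2.0, 1.0,  0.0],
              [3.0, 2.0,  1.0]])

# invertible coefficient matrix => Ac = u is solvable for every u => S spans R^3
assert abs(np.linalg.det(A)) > 1e-12
assert np.linalg.matrix_rank(A) == 3

u = np.array([4.0, -1.0, 7.0])  # an arbitrary target vector
c = np.linalg.solve(A, u)
assert np.allclose(A @ c, u)
```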
30.
• Thm 3.6: (span(S) is a subspace of V)
If S = {v1, v2, ..., vk} is a set of vectors in a vector space V,
then
(a) span(S) is a subspace of V.
(b) span(S) is the smallest subspace of V that contains S,
i.e., every other subspace of V that contains S must contain span(S).
31.
• Linearly independent (L.I.) and linearly dependent (L.D.):
Let S = {v1, v2, ..., vk} be a set of vectors in a vector space V,
and consider the equation c1v1 + c2v2 + ... + ckvk = 0.
(1) If the equation has only the trivial solution (c1 = c2 = ... = ck = 0),
then S is called linearly independent.
(2) If the equation has a nontrivial solution (i.e., not all
zeros), then S is called linearly dependent.
32.
• Notes:
(1) ∅ is linearly independent.
(2) 0 ∈ S ⟹ S is linearly dependent.
(3) v ≠ 0 ⟹ the single nonzero vector set {v} is linearly independent.
(4) If S1 ⊆ S2:
S1 is linearly dependent ⟹ S2 is linearly dependent
S2 is linearly independent ⟹ S1 is linearly independent
33.
• Ex: (Testing for linear independence)
Determine whether the following set of vectors in R^3 is L.I. or L.D.
S = {v1, v2, v3} = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)}
Sol: c1v1 + c2v2 + c3v3 = 0  ⟹
 c1       - 2c3 = 0
2c1 +  c2       = 0
3c1 + 2c2 +  c3 = 0
Gauss–Jordan elimination:
[ 1  0 -2        [ 1  0  0
  2  1  0    →     0  1  0
  3  2  1 ]        0  0  1 ]
⟹ c1 = c2 = c3 = 0 (only the trivial solution)
∴ S is linearly independent.
34.
• Ex: (Testing for linear independence)
Determine whether the following set of vectors in P2 is L.I. or L.D.
S = {1 + x - 2x^2, 2 + 5x - x^2, x + x^2}   (v1, v2, v3)
Sol: c1v1 + c2v2 + c3v3 = 0
i.e., c1(1 + x - 2x^2) + c2(2 + 5x - x^2) + c3(x + x^2) = 0 + 0x + 0x^2
⟹   c1 + 2c2       = 0
     c1 + 5c2 + c3 = 0
   -2c1 -  c2 + c3 = 0
This system has infinitely many solutions
(i.e., this system has nontrivial solutions).
∴ S is linearly dependent. (Ex: c1 = 2, c2 = -1, c3 = 3)
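Polynomials can be tested for independence through their coefficient vectors; a NumPy sketch of the example above:

```python
import numpy as np

# coefficient vectors (constant, x, x^2) of 1+x-2x^2, 2+5x-x^2, x+x^2
P = np.array([[ 1.0,  2.0, 0.0],
              [ 1.0,  5.0, 1.0],
              [-2.0, -1.0, 1.0]])  # columns are the three polynomials

# rank < number of vectors => a nontrivial dependence exists
assert np.linalg.matrix_rank(P) == 2

c = np.array([2.0, -1.0, 3.0])       # the dependence from the example
assert np.allclose(P @ c, np.zeros(3))
```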
35.
• Ex: (Testing for linear independence)
Determine whether the following set of vectors in the
2×2 matrix space is L.I. or L.D.
S = {v1, v2, v3} = {[2 1; 0 1], [3 0; 2 1], [1 0; 2 0]}
Sol: c1v1 + c2v2 + c3v3 = 0  ⟹  (entry by entry)
2c1 + 3c2 + c3 = 0
 c1            = 0
      2c2 + 2c3 = 0
 c1 +  c2      = 0
Gauss–Jordan elimination:
[ 2  3  1 | 0        [ 1  0  0 | 0
  1  0  0 | 0    →     0  1  0 | 0
  0  2  2 | 0          0  0  1 | 0
  1  1  0 | 0 ]        0  0  0 | 0 ]
⟹ c1 = c2 = c3 = 0 (this system has only the trivial solution)
∴ S is linearly independent.
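Matrices can be tested the same way as polynomials: flatten each one into a vector and check the rank, as in this NumPy sketch:

```python
import numpy as np

v1 = np.array([[2.0, 1.0], [0.0, 1.0]])
v2 = np.array([[3.0, 0.0], [2.0, 1.0]])
v3 = np.array([[1.0, 0.0], [2.0, 0.0]])

# flatten each 2x2 matrix into a vector in R^4; full column rank => L.I.
M = np.column_stack([v.ravel() for v in (v1, v2, v3)])
assert np.linalg.matrix_rank(M) == 3
```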
37.
• Thm 3.7: (A property of linearly dependent sets)
A set S = {v1, v2, ..., vk}, k ≥ 2, is linearly dependent if and only if at least
one of the vectors vi in S can be written as a linear combination of the other
vectors in S.
Pf: (⟹) Since S is linearly dependent,
c1v1 + c2v2 + ... + ckvk = 0 with ci ≠ 0 for some i. Then
vi = -(c1/ci)v1 - ... - (c(i-1)/ci)v(i-1) - (c(i+1)/ci)v(i+1) - ... - (ck/ci)vk.
38.
(⟸) Let vi = d1v1 + ... + d(i-1)v(i-1) + d(i+1)v(i+1) + ... + dkvk.
⟹ d1v1 + ... + d(i-1)v(i-1) - vi + d(i+1)v(i+1) + ... + dkvk = 0
⟹ c1 = d1, c2 = d2, ..., ci = -1, ..., ck = dk (a nontrivial solution)
⟹ S is linearly dependent.
• Corollary to Theorem 3.7:
Two vectors u and v in a vector space V are linearly dependent
if and only if one is a scalar multiple of the other.
39.
3.5 Basis and Dimension
• Basis:
V: a vector space, S ⊆ V
S spans V (i.e., span(S) = V) and S is linearly independent
⟹ S is called a basis for V
Bases and dimension:
A basis for a vector space V is a linearly independent spanning set of the
vector space V, i.e.,
any vector in the space can be written as a linear combination of
elements of this set. The dimension of the space is the number of
elements in this basis.
40.
Note:
Beginning with the most elementary problems in physics and mathematics, it
is clear that the choice of an appropriate coordinate system can provide
great computational advantages.
For example:
1. For the usual two- and three-dimensional vectors it is useful to express
an arbitrary vector as a sum of unit vectors.
2. Similarly, the use of Fourier series for the analysis of functions is a
very powerful tool in analysis.
These two ideas are essentially the same thing when you look at them as
aspects of vector spaces.
• Notes:
(1) ∅ is a basis for {0}.
(2) The standard basis for R^3:
{i, j, k}, i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
41.
(3) The standard basis for R^n:
{e1, e2, ..., en}, e1 = (1, 0, ..., 0), e2 = (0, 1, ..., 0), ..., en = (0, 0, ..., 1)
Ex: R^4: {(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)}
(4) The standard basis for the m×n matrix space:
{Eij}, where Eij has a 1 in entry (i, j) and 0 elsewhere.
Ex: 2×2 matrix space: {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}
(5) The standard basis for Pn(x): {1, x, x^2, ..., x^n}
42.
• Thm 3.8: (Uniqueness of basis representation)
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every
vector in V can be written as a linear combination of vectors in
S in one and only one way.
Pf:
Note S is a basis, so
1. span(S) = V
2. S is linearly independent.
span(S) = V ⟹ every u ∈ V can be written as u = c1v1 + c2v2 + ... + cnvn.
Suppose also u = b1v1 + b2v2 + ... + bnvn. Subtracting,
(c1 - b1)v1 + (c2 - b2)v2 + ... + (cn - bn)vn = 0.
∵ S is linearly independent
⟹ c1 = b1, c2 = b2, ..., cn = bn (i.e., uniqueness).
43.
• Thm 3.9: (Bases and linear dependence)
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every
set containing more than n vectors in V is linearly dependent.
Pf: Let S1 = {u1, u2, ..., um}, m > n.
∵ span(S) = V, each ui can be written as
ui = c1i v1 + c2i v2 + ... + cni vn.
44.
Let k1u1 + k2u2 + ... + kmum = 0. Substituting gives
d1v1 + d2v2 + ... + dnvn = 0 with di = ci1 k1 + ci2 k2 + ... + cim km.
∵ S is L.I., each di = 0:
c11 k1 + c12 k2 + ... + c1m km = 0
c21 k1 + c22 k2 + ... + c2m km = 0
...
cn1 k1 + cn2 k2 + ... + cnm km = 0
If a homogeneous system has fewer equations than variables,
then it must have infinitely many solutions.
m > n ⟹ k1u1 + k2u2 + ... + kmum = 0 has a
nontrivial solution ⟹ S1 is linearly dependent.
45.
• Notes:
(1) dim({0}) = 0 = #(∅)
(2) If dim(V) = n and S ⊆ V:
S: a spanning set ⟹ #(S) ≥ n
S: a L.I. set ⟹ #(S) ≤ n
S: a basis ⟹ #(S) = n
(3) dim(V) = n, W is a subspace of V ⟹ dim(W) ≤ n
46.
• Thm 3.10: (Number of vectors in a basis)
If a vector space V has one basis with n vectors, then
every basis for V has n vectors. (i.e.,
all bases for a finite-dimensional vector space have the same
number of vectors.)
Pf: Let S1 and S2 be two bases for V with #(S1) = n and #(S2) = m.
Each basis spans V and each is L.I., so by Thm 3.9, n ≤ m and m ≤ n,
hence n = m.
47.
• Finite dimensional:
A vector space V is called finite dimensional
if it has a basis consisting of a finite number of elements.
• Infinite dimensional:
If a vector space V is not finite dimensional,
then it is called infinite dimensional.
• Dimension:
The dimension of a finite dimensional vector
space V is defined to be the number of vectors in
a basis for V.
48.
• Ex: (Finding the dimension of a subspace)
(a) W1 = {(d, c - d, c): c and d are real numbers}
(b) W2 = {(2b, b, 0): b is a real number}
Sol: Find a set of L.I. vectors that spans the subspace.
(a) (d, c - d, c) = c(0, 1, 1) + d(1, -1, 0)
⟹ S = {(0, 1, 1), (1, -1, 0)} (S is L.I. and S spans W1)
⟹ S is a basis for W1
⟹ dim(W1) = #(S) = 2
(b) (2b, b, 0) = b(2, 1, 0)
⟹ S = {(2, 1, 0)} spans W2 and S is L.I.
⟹ S is a basis for W2
⟹ dim(W2) = #(S) = 1
49.
• Ex: (Finding the dimension of a subspace)
Let W be the subspace of all symmetric matrices in
M2x2. What is the dimension of W?
Sol:
[a b; b c] = a[1 0; 0 0] + b[0 1; 1 0] + c[0 0; 0 1]
⟹ S = {[1 0; 0 0], [0 1; 1 0], [0 0; 0 1]} spans W and S is L.I.
⟹ S is a basis for W
⟹ dim(W) = #(S) = 3
50.
• Thm 3.11: (Basis tests in an n-dimensional space)
Let V be a vector space of dimension n.
(1) If S = {v1, v2, ..., vn} is a linearly independent set of n
vectors in V, then S is a basis for V.
(2) If S = {v1, v2, ..., vn} is a set of n vectors that spans V,
then S is a basis for V.
51.
3.6 Rank of a Matrix and Systems of Linear Equations
• Row vectors:
For an m×n matrix A = [aij], the row vectors of A are
A(1) = (a11, a12, ..., a1n)
A(2) = (a21, a22, ..., a2n)
...
A(m) = (am1, am2, ..., amn)
• Column vectors:
The column vectors of A are
A^(1) = (a11, a21, ..., am1), A^(2) = (a12, a22, ..., am2), ..., A^(n) = (a1n, a2n, ..., amn),
so that A = [A^(1) A^(2) ... A^(n)].
52.
Let A be an m×n matrix.
• Row space:
The row space of A is the subspace of R^n spanned by
the m row vectors of A.
RS(A) = {α1 A(1) + α2 A(2) + ... + αm A(m) | α1, α2, ..., αm ∈ R}
• Column space:
The column space of A is the subspace of R^m
spanned by the n column vectors of A.
CS(A) = {β1 A^(1) + β2 A^(2) + ... + βn A^(n) | β1, β2, ..., βn ∈ R}
• Null space:
The null space of A is the set of all solutions of Ax = 0,
and it is a subspace of R^n:
NS(A) = {x ∈ R^n | Ax = 0}
53.
• Thm 3.12: (Row-equivalent matrices have the same row space)
If an m×n matrix A is row equivalent to an m×n matrix B,
then the row space of A is equal to the row space of B.
• Notes: (1) The row space of a matrix is not
changed by elementary row operations:
RS(ρ(A)) = RS(A), ρ: an elementary row operation.
(2) However, elementary row operations do change
the column space.
54.
• Thm 3.13: (Basis for the row space of a matrix)
If a matrix A is row equivalent to a matrix B in row-echelon
form, then the nonzero row vectors of B form a basis for the
row space of A.
55.
• Ex: (Finding a basis for a row space)
Find a basis for the row space of
A = [ 1  3  1  3
      0  1  1  0
     -3  0  6 -1
      3  4 -2  1
      2  0 -4 -2 ]
Sol: Gaussian elimination gives the row-echelon form
B = [ 1  3  1  3
      0  1  1  0
      0  0  0  1
      0  0  0  0
      0  0  0  0 ]
A basis for RS(A) = {the nonzero row vectors of B} (Thm 3.13)
= {w1, w2, w3} = {(1, 3, 1, 3), (0, 1, 1, 0), (0, 0, 0, 1)}
• Notes: (writing the columns of A as a1, ..., a4 and those of B as b1, ..., b4)
(1) b3 = -2b1 + b2 ⟹ a3 = -2a1 + a2
(2) {b1, b2, b4} is L.I. ⟹ {a1, a2, a4} is L.I.
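Thm 3.13 can be sanity-checked with ranks: stacking the echelon basis rows onto A should not increase the rank, as in this NumPy sketch:

```python
import numpy as np

A = np.array([[ 1.0, 3.0,  1.0,  3.0],
              [ 0.0, 1.0,  1.0,  0.0],
              [-3.0, 0.0,  6.0, -1.0],
              [ 3.0, 4.0, -2.0,  1.0],
              [ 2.0, 0.0, -4.0, -2.0]])

# the row space has dimension 3, matching the three nonzero echelon rows
assert np.linalg.matrix_rank(A) == 3

W = np.array([[1.0, 3.0, 1.0, 3.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])   # basis rows from elimination

# the basis rows span RS(A): appending them does not grow the rank
assert np.linalg.matrix_rank(np.vstack([A, W])) == 3
```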
57.
Ex: (Finding a basis for the column space of a matrix)
Find a basis for the column space of the matrix A of the previous example.
Sol 1: CS(A) = RS(A^T), so row reduce
A^T = [ 1  0 -3  3  2
        3  1  0  4  0
        1  1  6 -2 -4
        3  0 -1  1 -2 ]
Gaussian elimination gives the row-echelon form
B = [ 1  0 -3  3  2
      0  1  9 -5 -6
      0  0  1 -1 -1
      0  0  0  0  0 ]
A basis for CS(A) = a basis for RS(A^T)
= {the nonzero row vectors of B}
= {(1, 0, -3, 3, 2), (0, 1, 9, -5, -6), (0, 0, 1, -1, -1)}
(written as column vectors, a basis for the column space of A)
• Note: This basis is not a subset of {c1, c2, c3, c4}, the columns of A.
59.
Sol 2: Row reduce A itself:
A = [ 1  3  1  3
      0  1  1  0
     -3  0  6 -1
      3  4 -2  1
      2  0 -4 -2 ]
→ (G.E.) →
B = [ 1  3  1  3
      0  1  1  0
      0  0  0  1
      0  0  0  0
      0  0  0  0 ]
The leading 1's occur in columns 1, 2, and 4.
⟹ {v1, v2, v4} is a basis for CS(B)
⟹ {c1, c2, c4} is a basis for CS(A)
• Notes: (1) This basis is a subset of {c1, c2, c3, c4}.
(2) v3 = -2v1 + v2, thus c3 = -2c1 + c2.
60.
• Thm 3.14: (Solutions of a homogeneous system)
If A is an m×n matrix, then the set of all solutions of Ax = 0 is a
subspace of R^n called the nullspace of A.
Pf: NS(A) ⊆ R^n.
A0 = 0 ⟹ NS(A) ≠ ∅.
Let x1, x2 ∈ NS(A) (i.e., Ax1 = 0, Ax2 = 0).
Then (1) A(x1 + x2) = Ax1 + Ax2 = 0 + 0 = 0  (closed under addition)
(2) A(cx1) = c(Ax1) = c(0) = 0  (closed under scalar multiplication)
Thus NS(A) is a subspace of R^n.
Notes: The nullspace of A is also called the solution space
of the homogeneous system Ax = 0.
61.
• Ex: (Finding the solution space of a homogeneous system Ax = 0)
Sol: The nullspace of A is the solution space of Ax = 0.
Row reduce A to reduced row-echelon form, express the leading variables
in terms of the free variables, and write the general solution as a linear
combination of fixed vectors whose coefficients are the free variables;
those vectors form a basis for NS(A).
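A nullspace basis can also be computed numerically from the SVD; `nullspace_basis` is a hypothetical helper name and the 2×3 matrix is an assumed example:

```python
import numpy as np

def nullspace_basis(A, tol=1e-10):
    """Orthonormal basis for NS(A): the right singular vectors
    whose singular values are (numerically) zero."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T  # columns span the nullspace

A = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0]])  # rank 1, so nullity = 3 - 1 = 2

N = nullspace_basis(A)
assert N.shape[1] == 2          # nullity
assert np.allclose(A @ N, 0.0)  # every basis vector solves Ax = 0
```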
62.
• Thm 3.15: (Row and column spaces have equal dimensions)
If A is an m×n matrix, then the row space and the column
space of A have the same dimension:
dim(RS(A)) = dim(CS(A))
• Rank:
The dimension of the row (or column) space of a matrix A
is called the rank of A:
rank(A) = dim(RS(A)) = dim(CS(A))
63.
• Nullity:
The dimension of the nullspace of A is called the nullity of A:
nullity(A) = dim(NS(A))
• Notes: rank(A^T) = dim(RS(A^T)) = dim(CS(A)) = rank(A).
Therefore rank(A^T) = rank(A).
64.
• Thm 3.16: (Dimension of the solution space)
If A is an m×n matrix of rank r, then the dimension
of the solution space of Ax = 0 is n - r. That is,
nullity(A) = n - rank(A) = n - r
n = rank(A) + nullity(A)
(n = #variables = #leading variables + #nonleading variables)
• Notes:
(1) rank(A): the number of leading variables in the solution of Ax = 0
(i.e., the number of nonzero rows in the row-echelon form of A).
(2) nullity(A): the number of free variables (nonleading variables)
in the solution of Ax = 0.
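The rank–nullity relation n = rank(A) + nullity(A) can be checked numerically; the 3×4 matrix below is an assumed example:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [2.0, 4.0, 1.0, 3.0],
              [3.0, 6.0, 1.0, 4.0]])  # a 3x4 example, rank 2

r = np.linalg.matrix_rank(A)
n = A.shape[1]

assert r == 2
assert n - r == 2  # nullity(A) = n - rank(A)
```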
65.
• Notes:
If A is an m×n matrix and rank(A) = r, then:
Fundamental space   | Dimension
RS(A) = CS(A^T)     | r
CS(A) = RS(A^T)     | r
NS(A)               | n - r
NS(A^T)             | m - r
66.
• Ex: (Rank and nullity of a matrix)
Let the column vectors of the matrix
A = [ 1  0 -2  1   0
      0 -1 -3  1   3
     -2 -1  1 -1   3
      0  3  9  0 -12 ]
be denoted by a1, a2, a3, a4, and a5.
(a) Find the rank and nullity of A.
(b) Find a subset of the column vectors of A that forms a basis
for the column space of A.
Sol: The reduced row-echelon form of A is
B = [ 1  0 -2  0  1
      0  1  3  0 -4
      0  0  0  1 -1
      0  0  0  0  0 ]
(a) rank(A) = 3 (the number of nonzero rows in B)
nullity(A) = n - rank(A) = 5 - 3 = 2
(b) The leading 1's occur in columns 1, 2, and 4:
⟹ {b1, b2, b4} is a basis for CS(B)
⟹ {a1, a2, a4} is a basis for CS(A)
(c) b3 = -2b1 + 3b2 ⟹ a3 = -2a1 + 3a2
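The rank, nullity, and column dependency above can be verified with NumPy:

```python
import numpy as np

A = np.array([[ 1.0,  0.0, -2.0,  1.0,   0.0],
              [ 0.0, -1.0, -3.0,  1.0,   3.0],
              [-2.0, -1.0,  1.0, -1.0,   3.0],
              [ 0.0,  3.0,  9.0,  0.0, -12.0]])

r = np.linalg.matrix_rank(A)
assert r == 3
assert A.shape[1] - r == 2  # nullity = n - rank

a1, a2, a3 = A[:, 0], A[:, 1], A[:, 2]
assert np.allclose(a3, -2 * a1 + 3 * a2)  # the column dependency a3 = -2a1 + 3a2
```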
69.
• Thm 3.17: (Solutions of an inhomogeneous linear system)
If xp is a particular solution of the inhomogeneous system
Ax = b, then every solution of this system can be written in
the form x = xp + xh, where xh is a solution of the
corresponding homogeneous system Ax = 0.
Pf: Let x be any solution of Ax = b. Then
A(x - xp) = Ax - Axp = b - b = 0,
so (x - xp) is a solution of Ax = 0.
Let xh = x - xp; then x = xp + xh.
70.
• Ex: (Finding the solution set of an inhomogeneous system)
To find the set of all solution vectors of a system of linear equations
Ax = b, apply Gauss–Jordan elimination to the augmented matrix [A | b].
Writing the leading variables in terms of the free variables s and t
gives the general solution in the form
x = xp + s u1 + t u2,
where xp is a particular solution vector of Ax = b and
xh = s u1 + t u2 is a solution of Ax = 0.
72.
• Thm 3.18: (Solution of a system of linear equations)
The system of linear equations Ax = b is consistent if and only
if b is in the column space of A (i.e., b ∈ CS(A)).
Pf:
Let A = [aij] (m×n), x = (x1, x2, ..., xn)^T, and b
be the coefficient matrix, the column matrix of unknowns,
and the right-hand side, respectively, of the system Ax = b.
73.
Then
Ax = x1 A^(1) + x2 A^(2) + ... + xn A^(n),
where A^(j) is the j-th column of A.
Hence, Ax = b is consistent if and only if b is a linear
combination of the columns of A. That is, the system is consistent
if and only if b is in the subspace of R^m spanned by the columns
of A.
74.
• Notes:
If rank([A|b]) = rank(A), then the system Ax = b is
consistent (Thm 3.18).
• Ex: (Consistency of a system of linear equations)
Sol: Row reduce the augmented matrix [A|b] with columns c1, c2, c3, b.
In the reduced form, the last column v is not a leading-1 column,
and v = 3w1 - 4w2 in terms of the leading-1 columns w1, w2.
⟹ b = 3c1 - 4c2 + 0c3 (b is in the column space of A)
⟹ the system of linear equations is consistent.
• Check:
rank(A) = rank([A|b]) = 2
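The rank test from Thm 3.18 can be sketched in NumPy; `is_consistent` is a hypothetical helper name and the 2×2 matrix is an assumed example:

```python
import numpy as np

def is_consistent(A, b):
    """Ax = b is consistent iff rank([A|b]) == rank(A) (Thm 3.18)."""
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A)

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])                 # rank 1

assert is_consistent(A, np.array([3.0, 6.0]))      # b in CS(A)
assert not is_consistent(A, np.array([3.0, 5.0]))  # b not in CS(A)
```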
76.
• Summary of equivalent conditions for square matrices:
If A is an n×n matrix, then the following conditions are
equivalent.
(1) A is invertible.
(2) Ax = b has a unique solution for any n×1 matrix b.
(3) Ax = 0 has only the trivial solution.
(4) A is row-equivalent to In.
(5) |A| ≠ 0
(6) rank(A) = n
(7) The n row vectors of A are linearly independent.
77.
3.7 Coordinates and Change of Basis
• Coordinate representation relative to a basis:
Let B = {v1, v2, ..., vn} be an ordered basis for a vector space
V and let x be a vector in V such that
x = c1v1 + c2v2 + ... + cnvn.
The scalars c1, c2, ..., cn are called the coordinates of x
relative to the basis B. The coordinate matrix (or coordinate
vector) of x relative to B is the column matrix [x]_B = (c1, c2, ..., cn)^T
in R^n whose components are the coordinates of x.
78.
• Ex: (Coordinates and components in R^n)
Find the coordinate matrix of x = (-2, 1, 3) in R^3
relative to the standard basis
S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.
Sol:
∵ x = (-2, 1, 3) = -2(1, 0, 0) + 1(0, 1, 0) + 3(0, 0, 1)
⟹ [x]_S = (-2, 1, 3)^T
79.
• Ex: (Finding a coordinate matrix relative to a nonstandard basis)
Find the coordinate matrix of x = (1, 2, -1) in R^3
relative to the (nonstandard) basis
B' = {u1, u2, u3} = {(1, 0, 1), (0, -1, 2), (2, 3, -5)}.
Sol:
x = c1u1 + c2u2 + c3u3
(1, 2, -1) = c1(1, 0, 1) + c2(0, -1, 2) + c3(2, 3, -5)
⟹  c1       + 2c3 =  1
        -c2 + 3c3 =  2
   c1 + 2c2 - 5c3 = -1
⟹ c1 = 5, c2 = -8, c3 = -2, so [x]_{B'} = (5, -8, -2)^T
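Finding coordinates relative to a basis is again just solving a linear system; a NumPy sketch of the example above:

```python
import numpy as np

# columns are the basis vectors u1, u2, u3
U = np.array([[1.0,  0.0,  2.0],
              [0.0, -1.0,  3.0],
              [1.0,  2.0, -5.0]])
x = np.array([1.0, 2.0, -1.0])

coords = np.linalg.solve(U, x)  # [x]_{B'}
assert np.allclose(coords, [5.0, -8.0, -2.0])
```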
80.
• Change of basis:
You are given the coordinates of a vector relative to one
basis B and asked to find the coordinates relative to
another basis B'.
• Ex: (Change of basis)
Consider two bases for a vector space V:
B = {u1, u2}, B' = {u'1, u'2},
with u'1 = a u1 + b u2 and u'2 = c u1 + d u2.
81.
Let v ∈ V with [v]_{B'} = (k1, k2)^T, i.e., v = k1 u'1 + k2 u'2. Then
v = k1(a u1 + b u2) + k2(c u1 + d u2)
  = (k1 a + k2 c) u1 + (k1 b + k2 d) u2
⟹ [v]_B = [ a  c
             b  d ] [ k1
                      k2 ]
82.
Transition matrix from B' to B:
Let B = {u1, u2, ..., un} and B' = {u'1, u'2, ..., u'n} be two bases
for a vector space V.
If [v]_B is the coordinate matrix of v relative to B and
[v]_{B'} is the coordinate matrix of v relative to B',
then [v]_B = P [v]_{B'},
where P = [ [u'1]_B  [u'2]_B  ...  [u'n]_B ]
is called the transition matrix from B' to B.
83.
• Thm 3.19: (The inverse of a transition matrix)
If P is the transition matrix from a basis B' to a basis B in
R^n, then
(1) P is invertible.
(2) The transition matrix from B to B' is P^(-1).
• Notes:
B = {u1, u2, ..., un}, B' = {u'1, u'2, ..., u'n}
[v]_B = [ [u'1]_B  [u'2]_B  ...  [u'n]_B ] [v]_{B'} = P [v]_{B'}
[v]_{B'} = P^(-1) [v]_B
84.
• Thm 3.20: (Transition matrix from B to B')
Let B = {v1, v2, ..., vn} and B' = {u1, u2, ..., un} be two
bases for R^n. Then the transition matrix P^(-1) from B
to B' can be found by using Gauss–Jordan elimination on the
n×2n matrix [B' ⋮ B] as follows:
[B' ⋮ B] → [In ⋮ P^(-1)]
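Numerically, Gauss–Jordan on [B' ⋮ B] amounts to computing B'^(-1) B; a NumPy sketch with two assumed example bases for R^2 (written as columns):

```python
import numpy as np

B  = np.array([[1.0, 1.0],
               [0.0, 1.0]])  # basis B, columns are its vectors
Bp = np.array([[1.0, 0.0],
               [1.0, 2.0]])  # basis B'

# [B' | B] -> [I | P^{-1}] is equivalent to P^{-1} = B'^{-1} B,
# the transition matrix from B to B'
P_inv = np.linalg.solve(Bp, B)

v_B = np.array([3.0, -1.0])       # coordinates of some v relative to B
v = B @ v_B                       # the vector itself (standard coordinates)
v_Bp = P_inv @ v_B                # coordinates relative to B'
assert np.allclose(Bp @ v_Bp, v)  # both coordinate vectors name the same v
```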
85.
• Ex: (Finding a transition matrix)
Let B and B' be two bases for R^2.
(a) Find the transition matrix P from B' to B.
(b) Given [v]_{B'}, find [v]_B.
(c) Find the transition matrix from B to B'.
Sol: Apply Gauss–Jordan elimination to [B ⋮ B'] to obtain [I2 ⋮ P];
then [v]_B = P [v]_{B'}. The transition matrix from B to B' is P^(-1),
found from [B' ⋮ B] → [I2 ⋮ P^(-1)].
• Check: P P^(-1) = I2
(the transition matrix from B to B' is the inverse of the
transition matrix from B' to B).
88.
• Ex: (Coordinate representation in P3(x))
Find the coordinate matrix of p = 3x^3 - 2x^2 + 4 relative to the
basis in P3(x), S = {1, 1 + x, 1 + x^2, 1 + x^3}.
Sol:
p = 3(1) + 0(1 + x) + (-2)(1 + x^2) + 3(1 + x^3)
⟹ [p]_S = (3, 0, -2, 3)^T
89.
• Ex: (Coordinate representation in M2x2)
Find the coordinate matrix of x = [5 6; 7 8] relative to
the standard basis in M2x2,
B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}.
Sol:
x = 5[1 0; 0 0] + 6[0 1; 0 0] + 7[0 0; 1 0] + 8[0 0; 0 1]
⟹ [x]_B = (5, 6, 7, 8)^T