VECTOR SPACE
Dr A K TIWARI

VECTOR ALGEBRA
VECTOR GEOMETRY
VECTOR CALCULUS
VECTOR SPACE

VECTOR SPACE
A vector space V(F) combines a field F, with operations (+, ·) and elements a, b, c, … called scalars, and a set V whose elements are called vectors.
CONTENTS
1) Real Vector Spaces
2) Subspaces
3) Linear Combination
4) Linear Independence
5) Span of a Set of Vectors
6) Basis
7) Dimension
8) Row Space, Column Space, Null Space
9) Rank and Nullity
10) Coordinates and Change of Basis
Vector Space Axioms
Real Vector Spaces
Let V be an arbitrary nonempty set of objects on which two operations are defined: addition and
multiplication by a scalar (number). If the following axioms are satisfied by all objects u, v, w in V
and all scalars k and l, then we call V a vector space and we call the objects in V vectors.
1) If u and v are objects in V, then u + v is in V
2) u + v = v + u
3) u + (v + w) = (u + v) + w
4) There is an object 0 in V, called a zero vector for V, such that 0 + u = u + 0 = u for all u in V
5) For each u in V, there is an object –u in V, called a negative of u, such that u + (-u) = (-u) + u = 0
6) If k is any scalar and u is any object in V then ku is in V
7) k(u+v) = ku + kv
8) (k+l)(u) = ku + lu
9) k(lu) = (kl)u
10) 1u = u
Example: Rn is a vector space
Step 1: Identify the set V of objects that will become vectors. Let V = Rn.
Step 2: Identify the addition and scalar multiplication operations on V:
u + v = (u1, u2, u3, …, un) + (v1, v2, v3, …, vn) = (u1+v1, u2+v2, u3+v3, …, un+vn)
ku = (ku1, ku2, ku3, …, kun)
Step 3: Verify axioms 1 and 6: adding two vectors in V produces a vector in V, and multiplying a vector in V by a scalar produces a vector in V.
Step 4: Confirm that axioms 2, 3, 4, 5, 7, 8, 9 and 10 hold.
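The steps above can be spot-checked numerically. Below is a minimal NumPy sketch (a sanity check on sample vectors, not a proof) that verifies several of the axioms for particular u, v, w in R3 and particular scalars k and l; the chosen test values are illustrative assumptions, not part of the slides.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([-2.0, 0.5, 4.0])
w = np.array([0.0, 1.0, -1.0])
k, l = 2.0, -3.0
zero = np.zeros(3)

assert np.allclose(u + v, v + u)                # axiom 2: commutativity
assert np.allclose(u + (v + w), (u + v) + w)    # axiom 3: associativity
assert np.allclose(u + zero, u)                 # axiom 4: zero vector
assert np.allclose(u + (-u), zero)              # axiom 5: additive inverse
assert np.allclose(k * (u + v), k * u + k * v)  # axiom 7: distributivity over vector addition
assert np.allclose((k + l) * u, k * u + l * u)  # axiom 8: distributivity over scalar addition
assert np.allclose(k * (l * u), (k * l) * u)    # axiom 9
assert np.allclose(1.0 * u, u)                  # axiom 10
print("All checked axioms hold for these sample vectors.")
```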
Example: A Set That Is Not A Vector Space
Examples of vector spaces
(1) n-tuple space: Rn
(u1, u2, …, un) + (v1, v2, …, vn) = (u1+v1, u2+v2, …, un+vn)      vector addition
k(u1, u2, …, un) = (ku1, ku2, …, kun)                             scalar multiplication
(2) Matrix space: V = Mm×n (the set of all m×n matrices with real entries)
Ex (m = n = 2):

[u11 u12]   [v11 v12]   [u11+v11 u12+v12]
[u21 u22] + [v21 v22] = [u21+v21 u22+v22]      vector addition

  [u11 u12]   [ku11 ku12]
k [u21 u22] = [ku21 ku22]                      scalar multiplication
(3) n-th degree polynomial space: Pn (the set of all real polynomials of degree n or less)
p(x) + q(x) = (a0 + b0) + (a1 + b1)x + … + (an + bn)xn      vector addition
kp(x) = ka0 + ka1x + … + kanxn                              scalar multiplication
(4) Function space: V = C(-∞, ∞) (the set of all real-valued continuous functions defined on the entire real line)
(f + g)(x) = f(x) + g(x)     vector addition
(kf)(x) = kf(x)              scalar multiplication
PROPERTIES OF VECTORS
Let v be any element of a vector space V, and let c be any scalar. Then the following properties are true.
(1) 0v = 0
(2) c0 = 0
(3) If cv = 0, then c = 0 or v = 0
(4) (-1)v = -v
Subspaces
Definition:
(V, +, ·): a vector space
W ⊆ V, W ≠ Ø: a nonempty subset of V
(W, +, ·) is a vector space (under the operations of addition and scalar multiplication defined in V)
⟺ W is a subspace of V
Subspace criterion: If W is a set of one or more vectors in a vector space V, then W is a subspace of V if and only if the following conditions hold:
a) If u and v are vectors in W, then u + v is in W.
b) If k is any scalar and u is any vector in W, then ku is in W.
Every vector space V has at least two subspaces:
(1) The zero vector space {0} is a subspace of V.
(2) V is a subspace of V.
• Ex: Subspaces of R2
(1) {0} = {(0, 0)}
(2) Lines through the origin
(3) R2
• Ex: Subspaces of R3
(1) {0} = {(0, 0, 0)}
(2) Lines through the origin
(3) Planes through the origin
(4) R3
If W1, W2, …, Wr are subspaces of a vector space V, then the intersection of these subspaces is also a subspace of V.
Ex: (A subspace of M2×2)
Let W be the set of all 2×2 symmetric matrices. Show that W is a subspace of the vector space M2×2, with the standard operations of matrix addition and scalar multiplication.
Sol:
W ⊆ M2×2, and M2×2 is a vector space.
Let A1, A2 ∈ W (so A1ᵀ = A1 and A2ᵀ = A2).
A1 ∈ W, A2 ∈ W ⇒ (A1 + A2)ᵀ = A1ᵀ + A2ᵀ = A1 + A2, so A1 + A2 ∈ W
k ∈ R, A ∈ W ⇒ (kA)ᵀ = kAᵀ = kA, so kA ∈ W
⇒ W is a subspace of M2×2

Ex: (The set of singular matrices is not a subspace of M2×2)
Let W be the set of singular matrices of order 2. Show that W is not a subspace of M2×2 with the standard operations.
Sol:
A = [1 0]         B = [0 0]
    [0 0] ∈ W,        [0 1] ∈ W

A + B = [1 0]
        [0 1]  is nonsingular, so A + B ∉ W
⇒ W is not a subspace of M2×2
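The two examples above can be illustrated numerically. The following NumPy sketch (an illustration, not a proof) checks that sample symmetric matrices stay symmetric under addition and scalar multiplication, and that the two singular matrices A and B from the counterexample have a nonsingular sum; the sample symmetric matrices and the scalar k are assumptions chosen only for the illustration.

```python
import numpy as np

def is_symmetric(M):
    return np.allclose(M, M.T)

# closure of symmetric matrices under + and scalar multiplication (sample values)
S1 = np.array([[1.0, 2.0], [2.0, 3.0]])    # symmetric
S2 = np.array([[0.0, -1.0], [-1.0, 4.0]])  # symmetric
k = 2.5
assert is_symmetric(S1 + S2)   # closed under addition
assert is_symmetric(k * S1)    # closed under scalar multiplication

# the singular-matrix counterexample from the slide
A = np.array([[1.0, 0.0], [0.0, 0.0]])     # singular (det = 0)
B = np.array([[0.0, 0.0], [0.0, 1.0]])     # singular (det = 0)
print(np.linalg.det(A), np.linalg.det(B))  # 0.0 0.0
print(np.linalg.det(A + B))                # 1.0, so A + B is not singular
```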
Linear combination
Definition
A vector v in a vector space V is called a linear combination of the vectors u1, u2, …, uk in V if v can be written in the form
v = c1u1 + c2u2 + … + ckuk
where c1, c2, …, ck are scalars.
Theorem: If S = {w1, w2, w3, …, wr} is a nonempty set of vectors in a vector space V, then:
(a) The set W of all possible linear combinations of the vectors in S is a subspace of V.
(b) The set W in part (a) is the “smallest” subspace of V that contains all of the vectors in S, in the sense that any other subspace that contains those vectors contains W.
1 0 1 1  1 0 1 1 
2 1 0 2 
G

u
a
s

s

J
o

r
d
a
n

E
l
i
m

i
n
a

t
i
o
n
0 1 2  4

3 2 1 2  0 0 0 7 
 
 this system has no solution ( 0  7)
 w  c1v1  c2v2  c3v3
V1=(1,2,3) v2=(0,0,2) v3= (-1,0,1)
Prove W=(1,-2,2) is not a linear combination of v1,v2,v3
w = c1v1 + c2v2 + c3v3
Finding a linear combination
Sol :
Dr A K TIWARI
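The same conclusion can be reached with a rank test: w is a linear combination of v1, v2, v3 exactly when the coefficient matrix and the augmented matrix have the same rank. A small NumPy sketch of this check (an illustration of the example above, not part of the slides):

```python
import numpy as np

v1, v2, v3 = np.array([1, 2, 3]), np.array([0, 1, 2]), np.array([-1, 0, 1])
w = np.array([1, -2, 2])

A = np.column_stack([v1, v2, v3])     # coefficient matrix [v1 v2 v3]
aug = np.column_stack([A, w])         # augmented matrix [A | w]

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug))  # 2 and 3
if np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug):
    print("w IS a linear combination of v1, v2, v3")
else:
    print("w is NOT a linear combination of v1, v2, v3")
```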
Linear Independence (L.I.) and Linear Dependence (L.D.):
Definition:
S = {v1, v2, …, vk}: a set of vectors in a vector space V
Consider the equation c1v1 + c2v2 + … + ckvk = 0.
(1) If the equation has only the trivial solution (c1 = c2 = … = ck = 0), then S is called linearly independent.
(2) If the equation has a nontrivial solution (i.e., not all the ci are zero), then S is called linearly dependent.
Theorem
A set S with two or more vectors is
(a) linearly dependent if and only if at least one of the vectors in S is expressible as a linear combination of the other vectors in S;
(b) linearly independent if and only if no vector in S is expressible as a linear combination of the other vectors in S.
1  is linearly independent
(2) 0 S  S is linearly dependent.
(3) v  0  vis linearly independent
(4) S1  S2
S1 is linearly dependent S2 is linearly dependent
S2 is linearly independent  S1 is linearly independent
Notes
Dr A K TIWARI
Ex: Testing for linear independence
Determine whether the following set of vectors in R3 is L.I. or L.D.
S = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)}
        v1        v2        v3
Sol:
c1v1 + c2v2 + c3v3 = 0 ⇒
  c1        - 2c3 = 0
  2c1 + c2        = 0
  3c1 + 2c2 + c3  = 0

Augmented matrix:      After Gauss-Jordan elimination:
[1 0 -2 | 0]           [1 0 0 | 0]
[2 1  0 | 0]           [0 1 0 | 0]
[3 2  1 | 0]           [0 0 1 | 0]

⇒ c1 = c2 = c3 = 0, only the trivial solution
⇒ S is linearly independent
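A quick numerical cross-check of this example: the vectors are linearly independent exactly when the matrix with v1, v2, v3 as columns has rank 3, so that c1v1 + c2v2 + c3v3 = 0 has only the trivial solution. A NumPy sketch (illustrative only):

```python
import numpy as np

v1, v2, v3 = np.array([1, 2, 3]), np.array([0, 1, 2]), np.array([-2, 0, 1])
A = np.column_stack([v1, v2, v3])   # columns are v1, v2, v3

rank = np.linalg.matrix_rank(A)
print("rank =", rank)                          # 3
print("linearly independent:", rank == 3)      # True
```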
Span of a set of vectors
Definition:
span(S) = {c1v1 + c2v2 + … + ckvk | ci ∈ R}
(the set of all linear combinations of the vectors in S)
If S = {v1, v2, …, vk} is a set of vectors in a vector space V, then the span of S is the set of all linear combinations of the vectors in S.
If every vector in a given vector space can be written as a linear combination of vectors in a given set S, then S is called a spanning set of the vector space.
Theorem: If S = {v1, v2, …, vk} is a set of vectors in a vector space V, then
(a) span(S) is a subspace of V;
(b) span(S) is the smallest subspace of V that contains S (every other subspace of V that contains S must contain span(S)).
• Notes:
(1) span(Ø) = {0}
(2) S ⊆ span(S)
(3) If S1, S2 ⊆ V, then S1 ⊆ S2 ⇒ span(S1) ⊆ span(S2)
span(S) = V ⟺ S spans (generates) V ⟺ V is spanned (generated) by S ⟺ S is a spanning set of V
Ex: A spanning set for R3
Show that the set S = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)} spans R3.
Sol:
We must determine whether an arbitrary vector u = (u1, u2, u3) in R3 can be written as a linear combination of v1, v2, and v3.
u ∈ R3 ⇒ u = c1v1 + c2v2 + c3v3, i.e.
  c1        - 2c3 = u1
  2c1 + c2        = u2
  3c1 + 2c2 + c3  = u3
The problem thus reduces to determining whether this system is consistent for all values of u1, u2, and u3.

det(A) = det [1 0 -2]
             [2 1  0]  ≠ 0
             [3 2  1]

⇒ Ax = b has exactly one solution for every u
⇒ span(S) = R3
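A NumPy sketch of the same argument: since det(A) ≠ 0, the system Ac = u can be solved for every u in R3, so S spans R3. The sample vector u below is an arbitrary illustrative choice:

```python
import numpy as np

A = np.array([[1, 0, -2],
              [2, 1,  0],
              [3, 2,  1]], dtype=float)    # columns correspond to v1, v2, v3

print("det(A) =", np.linalg.det(A))        # nonzero (about -1)

u = np.array([4.0, -1.0, 7.0])             # any vector in R3
c = np.linalg.solve(A, u)                  # coefficients with c1 v1 + c2 v2 + c3 v3 = u
print("coefficients:", c)
print("reconstructs u:", np.allclose(A @ c, u))
```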
Basis
• Definition:
V: a vector space, S = {v1, v2, …, vn} ⊆ V
S is called a basis for V if
• S spans V (i.e., span(S) = V), and
• S is linearly independent.
(Bases are exactly the sets that are both generating sets and linearly independent sets.)
• Notes:
(1) Ø is a basis for {0}
(2) the standard basis for R3: {i, j, k}, where i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
(3) the standard basis for Rn:
{e1, e2, …, en}, e1 = (1, 0, …, 0), e2 = (0, 1, …, 0), …, en = (0, 0, …, 1)
Ex: R4: {(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)}
(4) the standard basis for the m×n matrix space:
{ Eij | 1 ≤ i ≤ m, 1 ≤ j ≤ n }
Ex: 2×2 matrix space:
[1 0]  [0 1]  [0 0]  [0 0]
[0 0], [0 0], [1 0], [0 1]
(5) the standard basis for Pn(x): {1, x, x2, …, xn}
Ex: P3(x): {1, x, x2, x3}
THEOREMS
Uniqueness of basis representation
If S= {v1,v2,…,vn} is a basis for a vector space V, then every vector in V can be
written in one and only one way as a linear combination of vectors in S.
Bases and linear dependence
If S= {v1,v2,…,vn} is a basis for a vector space V, then every set containing more
than n vectors in V is linearly dependent.
Number of vectors in a basis
If a vector space V has one basis with n vectors, then every basis for V has n vectors. (All bases for a finite-dimensional vector space have the same number of vectors.)
An INDEPENDENT set of vectors that SPANS a vector space V is called a BASIS for V.
Dimension
• Definition:
The dimension of a finite-dimensional vector space V is defined to be the number of vectors in a basis for V.
• The dimension of a vector space V is denoted by dim(V).
Finite dimensional
A vector space V is called finite dimensional if it has a basis consisting of a finite number of elements.
Infinite dimensional
If a vector space V is not finite dimensional, then it is called infinite dimensional.
Theorems for dimension
THEOREM 1
All bases for a finite-dimensional vector space have the same number of vectors.
THEOREM 2
Let V be a finite-dimensional vector space, and let S = {v1, v2, …, vn} be any basis.
(a) If a set has more than n vectors, then it is linearly dependent.
(b) If a set has fewer than n vectors, then it does not span V.
Dimensions of Some Familiar Vector Spaces
(1) Vector space Rn: basis {e1, e2, …, en} ⇒ dim(Rn) = n
(2) Vector space Mm×n: basis {Eij | 1 ≤ i ≤ m, 1 ≤ j ≤ n} ⇒ dim(Mm×n) = mn
(3) Vector space Pn(x): basis {1, x, x2, …, xn} ⇒ dim(Pn(x)) = n + 1
(4) Vector space P(x): basis {1, x, x2, …} ⇒ dim(P(x)) = ∞
Dimension of a Solution Space
EXAMPLE
Row space, column space and null space
Let A be an m×n matrix.
• Row space:
The row space of A is the subspace of Rn spanned by the row vectors of A:
RS(A) = {c1A(1) + c2A(2) + … + cmA(m) | c1, c2, …, cm ∈ R}
• Column space:
The column space of A is the subspace of Rm spanned by the column vectors of A:
CS(A) = {c1A^(1) + c2A^(2) + … + cnA^(n) | c1, c2, …, cn ∈ R}
• Null space:
The null space of A is the set of all solutions of Ax = 0, and it is a subspace of Rn:
NS(A) = {x ∈ Rn | Ax = 0}
The null space of A is also called the solution space of the homogeneous system Ax = 0.

For A = [aij]:
• Row vectors of A:
A(1) = [a11, a12, …, a1n]
A(2) = [a21, a22, …, a2n]
⋮
A(m) = [am1, am2, …, amn]
• Column vectors of A:
A^(1) = (a11, a21, …, am1), A^(2) = (a12, a22, …, am2), …, A^(n) = (a1n, a2n, …, amn)
THEOREM 1
Elementary row operations do not change the null space of a matrix.
THEOREM 2
The row space of a matrix is not changed by elementary row operations.
RS(r(A)) = RS(A), where r is an elementary row operation.
THEOREM 3
If a matrix R is in row-echelon form, then the row vectors with the leading 1′s (the nonzero row vectors) form a basis for the row space of R, and the column vectors containing the leading 1′s form a basis for the column space of R.
EXAMPLE: Finding a basis for a row space
Find a basis for the row space of

A = [ 1  3  1  3]
    [ 0  1  1  0]
    [-3  0  6 -1]
    [ 3  4 -2  1]
    [ 2  0 -4 -2]

Sol: Gaussian elimination gives

B = [1 3 1 3]   w1
    [0 1 1 0]   w2
    [0 0 0 1]   w3
    [0 0 0 0]
    [0 0 0 0]

A basis for RS(A) = {the nonzero row vectors of B}  (Theorem 3)
= {w1, w2, w3} = {(1, 3, 1, 3), (0, 1, 1, 0), (0, 0, 0, 1)}
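For comparison, the same row space can be handled with SymPy's exact row reduction (using SymPy here is an assumption of the illustration; any row-reduction routine would do). Note that rref() returns the reduced row-echelon form, so its nonzero rows differ from w1, w2, w3 above but span the same row space:

```python
from sympy import Matrix

A = Matrix([[ 1, 3,  1,  3],
            [ 0, 1,  1,  0],
            [-3, 0,  6, -1],
            [ 3, 4, -2,  1],
            [ 2, 0, -4, -2]])

R, pivots = A.rref()   # reduced row-echelon form and pivot columns
basis = [tuple(R.row(i)) for i in range(A.rows) if any(R.row(i))]
print(basis)           # three nonzero rows; they span the same row space as {w1, w2, w3}
```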
THEOREM 4
If A and B are row equivalent matrices, then:
(a) A given set of column vectors of A is linearly independent if and only if the corresponding column vectors of B are linearly independent.
(b) A given set of column vectors of A forms a basis for the column space of A if and only if the corresponding column vectors of B form a basis for the column space of B.
THEOREM 5
If a matrix A is row equivalent to a matrix B in row-echelon form, then the nonzero row vectors of B form a basis for the row space of A.
EXAMPLE: Finding a basis for the column space of a matrix
Find a basis for the column space of

A = [ 1  3  1  3]
    [ 0  1  1  0]
    [-3  0  6 -1]
    [ 3  4 -2  1]
    [ 2  0 -4 -2]

Sol: Since CS(A) = RS(Aᵀ), row-reduce Aᵀ:

Aᵀ = [1  0 -3  3  2]
     [3  1  0  4  0]
     [1  1  6 -2 -4]
     [3  0 -1  1 -2]

Gaussian elimination gives

B = [1 0 -3  3  2]   w1
    [0 1  9 -5 -6]   w2
    [0 0  1 -1 -1]   w3
    [0 0  0  0  0]

A basis for CS(A) = a basis for RS(Aᵀ)
= {the nonzero row vectors of B}
= {w1, w2, w3} = {(1, 0, -3, 3, 2), (0, 1, 9, -5, -6), (0, 0, 1, -1, -1)}
(written as columns, these vectors form a basis for the column space of A)
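An alternative sketch of the same computation: by Theorem 4, the pivot columns of A (read off from the RREF of A itself) give a basis for CS(A) consisting of original columns of A, which avoids forming Aᵀ. Using SymPy here is an illustrative choice, and the resulting basis differs from {w1, w2, w3} above while spanning the same column space:

```python
from sympy import Matrix

A = Matrix([[ 1, 3,  1,  3],
            [ 0, 1,  1,  0],
            [-3, 0,  6, -1],
            [ 3, 4, -2,  1],
            [ 2, 0, -4, -2]])

_, pivots = A.rref()
basis = [A.col(j) for j in pivots]   # the corresponding original columns of A
print("pivot columns:", pivots)      # three pivot columns, so dim CS(A) = 3
print(basis)
```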
EXAMPLE: Finding the solution space of a homogeneous system
Find the null space of the matrix A.

A = [1  2 -2  1]
    [3  6 -5  4]
    [1  2  0  3]

Sol: The null space of A is the solution space of Ax = 0.

Gauss-Jordan elimination gives

[1 2 -2 1]        [1 2 0 3]
[3 6 -5 4]   →    [0 0 1 1]
[1 2  0 3]        [0 0 0 0]

x1 = -2s - 3t, x2 = s, x3 = -t, x4 = t

    [x1]   [-2s - 3t]     [-2]     [-3]
x = [x2] = [    s   ] = s [ 1] + t [ 0] = s v1 + t v2
    [x3]   [   -t   ]     [ 0]     [-1]
    [x4]   [    t   ]     [ 0]     [ 1]

NS(A) = {s v1 + t v2 | s, t ∈ R}
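A SymPy sketch of the null space computation above (illustrative only; nullspace() returns a basis of NS(A) that matches v1 and v2 up to the choice of free parameters):

```python
from sympy import Matrix

A = Matrix([[1, 2, -2, 1],
            [3, 6, -5, 4],
            [1, 2,  0, 3]])

for vec in A.nullspace():
    print(vec.T)     # [-2, 1, 0, 0] and [-3, 0, -1, 1]
```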
Rank and Nullity
THEOREM
If A is an m×n matrix, then the row space and the column space of A have the same dimension:
dim(RS(A)) = dim(CS(A))
Rank: The dimension of the row (or column) space of a matrix A is called the rank of A and is denoted by rank(A).
rank(A) = dim(RS(A)) = dim(CS(A))
Nullity: The dimension of the null space of A is called the nullity of A.
nullity(A) = dim(NS(A))
THEOREM
If A is an m×n matrix of rank r, then the dimension of the solution space of Ax = 0 is n - r.
That is,
n = rank(A) + nullity(A)
• Notes:
(1) rank(A): The number of leading variables in the solution of Ax=0.
(The number of nonzero rows in the row-echelon form of A)
(2) nullity (A): The number of free variables in the solution of Ax = 0.
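A short NumPy illustration of the relation n = rank(A) + nullity(A), reusing the 3×4 matrix from the null space example above:

```python
import numpy as np

A = np.array([[1, 2, -2, 1],
              [3, 6, -5, 4],
              [1, 2,  0, 3]], dtype=float)

n = A.shape[1]                      # number of columns (unknowns)
rank = np.linalg.matrix_rank(A)
nullity = n - rank
print(f"n = {n}, rank(A) = {rank}, nullity(A) = {nullity}")   # 4 = 2 + 2
```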
EXAMPLE: Rank and nullity of a matrix
Let the column vectors of the matrix A be denoted by a1, a2, a3, a4, and a5.

A = [ 1  0 -2  1   0]
    [ 0 -1 -3  1   3]
    [-2 -1  1 -1   3]
    [ 0  3  9  0 -12]
     a1  a2 a3 a4  a5

Find the rank and nullity of the matrix.
Sol: Let B be the reduced row-echelon form of A. Gaussian elimination gives

B = [1 0 -2 0  1]
    [0 1  3 0 -4]
    [0 0  0 1 -1]
    [0 0  0 0  0]
     b1 b2 b3 b4 b5

rank(A) = 3 (the number of nonzero rows in B)
nullity(A) = n - rank(A) = 5 - 3 = 2
Coordinates and change of basis
• Coordinate representation relative to a basis: Let B = {v1, v2, …, vn} be an ordered basis for a vector space V and let x be a vector in V such that
x = c1v1 + c2v2 + … + cnvn.
The scalars c1, c2, …, cn are called the coordinates of x relative to the basis B. The coordinate matrix (or coordinate vector) of x relative to B is the column matrix in Rn whose components are the coordinates of x:

        [c1]
[x]B =  [c2]
        [⋮ ]
        [cn]
EXAMPLE: Finding a coordinate matrix relative to a nonstandard basis
Find the coordinate matrix of x = (1, 2, -1) in R3 relative to the (nonstandard) basis
B' = {u1, u2, u3} = {(1, 0, 1), (0, -1, 2), (2, 3, -5)}
Sol:
x = c1u1 + c2u2 + c3u3 ⇒ (1, 2, -1) = c1(1, 0, 1) + c2(0, -1, 2) + c3(2, 3, -5)

i.e.   c1        + 2c3 =  1
           - c2  + 3c3 =  2
       c1 + 2c2  - 5c3 = -1

Augmented matrix:      After Gauss-Jordan elimination:
[1  0  2 |  1]         [1 0 0 |  5]
[0 -1  3 |  2]         [0 1 0 | -8]
[1  2 -5 | -1]         [0 0 1 | -2]

          [ 5]
⇒ [x]B' = [-8]
          [-2]
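A NumPy sketch of the same computation: putting the basis vectors u1, u2, u3 as the columns of a matrix U and solving Uc = x gives the coordinate matrix [x]B' directly (illustrative only; the name U is ad hoc):

```python
import numpy as np

U = np.array([[1,  0,  2],
              [0, -1,  3],
              [1,  2, -5]], dtype=float)   # columns: u1, u2, u3
x = np.array([1.0, 2.0, -1.0])

c = np.linalg.solve(U, x)
print(c)    # [ 5. -8. -2.]  ->  [x]_B' = (5, -8, -2)
```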
Change of basis problem
You were given the coordinates of a vector relative to one basis B and were asked to find the coordinates relative to another basis B'.
Transition matrix from B' to B:
Let B = {u1, u2, …, un} and B' = {u'1, u'2, …, u'n} be two bases for a vector space V.
If [v]B is the coordinate matrix of v relative to B and [v]B' is the coordinate matrix of v relative to B', then
[v]B = P [v]B'
where P = [ [u'1]B  [u'2]B  …  [u'n]B ]
is called the transition matrix from B' to B.
THEOREM 1: The inverse of a transition matrix
If P is the transition matrix from a basis B' to a basis B in Rn, then
(1) P is invertible
(2) The transition matrix from B to B' is P⁻¹
THEOREM 2: Transition matrix from B to B'
Let B = {v1, v2, …, vn} and B' = {u1, u2, …, un} be two bases for Rn. Then the transition matrix P⁻¹ from B to B' can be found by using Gauss-Jordan elimination on the n×2n matrix [B' ⋮ B] as follows:
[B' ⋮ B]  →  [In ⋮ P⁻¹]
Ex: (Finding a transition matrix)
B = {(-3, 2), (4, -2)} and B' = {(-1, 2), (2, -2)} are two bases for R2.
(a) Find the transition matrix from B' to B.
(b) Given [v]B' = [1]
                  [2] , find [v]B.
(c) Find the transition matrix from B to B'.
Sol:
(a) Gauss-Jordan elimination on [B ⋮ B'] gives [I2 ⋮ P]:

[-3  4 ⋮ -1  2]        [1 0 ⋮ 3 -2]
[ 2 -2 ⋮  2 -2]   →    [0 1 ⋮ 2 -1]

       [3 -2]
⇒ P =  [2 -1]    (the transition matrix from B' to B)

(b) [v]B = P [v]B' = [3 -2] [1]   [-1]
                     [2 -1] [2] = [ 0]

Check: [v]B' = (1, 2) gives v = (1)(-1, 2) + (2)(2, -2) = (3, -2)
       [v]B = (-1, 0) gives v = (-1)(-3, 2) + (0)(4, -2) = (3, -2)  ✓

(c) Gauss-Jordan elimination on [B' ⋮ B] gives [I2 ⋮ P⁻¹]:

[-1  2 ⋮ -3  4]        [1 0 ⋮ -1  2]
[ 2 -2 ⋮  2 -2]   →    [0 1 ⋮ -2  3]

         [-1  2]
⇒ P⁻¹ =  [-2  3]    (the transition matrix from B to B')

Check: P P⁻¹ = [3 -2] [-1  2]   [1 0]
               [2 -1] [-2  3] = [0 1] = I
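A NumPy sketch of the whole example (illustrative only; the matrix names MB and MBp are ad hoc): with the basis vectors as columns, the transition matrix from B' to B is P = MB⁻¹ MB', its inverse is the transition matrix from B to B', and P[v]B' reproduces [v]B.

```python
import numpy as np

MB  = np.array([[-3,  4], [ 2, -2]], dtype=float)   # columns: basis B
MBp = np.array([[-1,  2], [ 2, -2]], dtype=float)   # columns: basis B'

P = np.linalg.solve(MB, MBp)        # same result as Gauss-Jordan on [B | B']
P_inv = np.linalg.inv(P)            # transition matrix from B to B'
print(P)        # [[ 3. -2.] [ 2. -1.]]
print(P_inv)    # [[-1.  2.] [-2.  3.]]

v_Bp = np.array([1.0, 2.0])         # [v]_B' from part (b)
v_B = P @ v_Bp
print(v_B)                          # [-1.  0.] = [v]_B
print(MB @ v_B, MBp @ v_Bp)         # both give v = (3, -2)
```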
