© Art Traynor 2011
Mathematics
Definition
Mathematics
Wiki: “ Mathematics ”
1564 – 1642
Galileo Galilei
Grand Duchy of Tuscany
( Duchy of Florence )
City of Pisa
Mathematics – A Language
“ The universe cannot be read until we have learned the language and
become familiar with the characters in which it is written. It is written
in mathematical language…without which means it is humanly
impossible to comprehend a single word.
Without these, one is wandering about in a dark labyrinth. ”
© Art Traynor 2011
Mathematics
Definition
Algebra – A Mathematical Grammar
Mathematics
A formalized system ( a language ) for the transmission of
information encoded by number
Algebra
A system of construction by which
mathematical expressions are well-formed
Expression
Symbol Operation Relation
Designate expression
elements or Operands
( Terms / Monomials )
Transformations or LOC’s
capable of rendering an
expression into a relation
A mathematical Structure
between operands
represented by a well-formed
Expression
A well-formed symbolic representation of Operands ( Terms or Monomials ) ,
of discrete arity, upon which one or more Operations ( Laws of Composition - LOC’s )
may structure a Relation
1. Identifies the explanans
by non-tautological
correspondences
Definition
2. Isolates the explanans
as a proper subset from
its constituent
correspondences
3. Terminology
a. Maximal parsimony
b. Maximal syntactic
generality
4. Examples
a. Trivial
b. Superficial
Mathematics
Wiki: “ Polynomial ”
Wiki: “ Degree of a Polynomial ”
© Art Traynor 2011
Mathematics
Disciplines
Algebra
One of the disciplines within the field of Mathematics
Mathematics
Others are Arithmetic, Geometry,
Number Theory, & Analysis

The study of expressions of symbols ( sets ) and the
well-formed rules by which they might be manipulated
to preserve validity .

Algebra
Elementary Algebra
Abstract Algebra
A class of Structure defined by the object Set and
its Operations ( or Laws of Composition – LOC’s )

Linear Algebra
Mathematics
© Art Traynor 2011
Mathematics
Definitions
Expression
Symbol Operation Relation
Designate expression
elements or Operands
( Terms / Monomials )
Transformations or LOC’s
capable of rendering an
expression into a relation
A mathematical structure
between operands represented
by a well-formed expression
A well-formed symbolic representation of Operands ( Terms or Monomials ) ,
of discrete arity, upon which one or more Operations ( LOC’s ) may structure a Relation
Expression – A Mathematical Sentence
Proposition
A declarative expression
asserting a fact, the truth
value of which can be
ascertained
Formula
A concise symbolic
expression positing a relation
VariablesConstants
An alphabetic character
representing a number the
value of which is arbitrary,
unspecified, or unknown
Operands ( Terms / Monomials )
A transformation
invariant scalar quantity
Mathematics
Predicate
A Proposition admitting the
substitution of variables
O’Leary, Section 2.1,
Pg. 41
Expression constituents consisting of Constants and
Variables exhibiting exclusive parity
Polynomial
An Expression composed of Constants ( Coefficients ) and Variables ( Unknowns) with
an LOC’s of Addition, Subtraction, Multiplication and Non-Negative Exponentiation
Wiki: “ Polynomial ”
Wiki: “ Degree of a Polynomial ”
© Art Traynor 2011
Mathematics
Definitions
Expression
Symbol Operation Relation
Designate expression
elements or Operands
( Terms / Monomials )
Transformations capable of
rendering an expression
into a relation
A mathematical structure between operands represented
by a well-formed expression
Expression – A Mathematical Sentence
Proposition
A declarative expression
the truth value of which can
be ascertained
Formula
A concise symbolic
expression positing a relation
VariablesConstants
An alphabetic character
representing a number the
value of which is arbitrary,
unspecified, or unknown
Operands ( Terms / Monomials )
A transformation
invariant scalar quantity
Equation
A formula stating an
equivalency class relation
Inequality
A formula stating a relation
among operand cardinalities
Function
A Relation between a Set of inputs and a Set of permissible
outputs whereby each input is assigned to exactly one output
Univariate: an equation containing
only one variable
( e.g. Unary )
Multivariate: an equation containing
more than one variable
( e.g. n-ary )
Mathematics
Expression constituents consisting of Constants and
Variables exhibiting exclusive parity
Polynomial
© Art Traynor 2011
Mathematics
Definitions
Expression
Symbol Operation Relation
Expression – A Mathematical Sentence
Proposition Formula
VariablesConstants
Operands ( Terms )
Equation
A formula stating an
equivalency class relation
Linear Equation
An equation in which each term is either
a constant or the product of a constant
and (a) variable[s] of the first degree
Mathematics
Polynomial
© Art Traynor 2011
Mathematics
Expression
Mathematical Expression
A representational precursive discrete composition to a
Mathematical Statement or Proposition ( e.g. Equation )
consisting of :

Operands / Terms
Expression
A well-formed symbolic
representation of Operands
( Terms or Monomials ) ,
of discrete arity, upon which one
or more Operations ( LOC’s ) may
structure a Relation
Mathematics
n Scalar Constants ( i.e. Coefficients )
n Variables or Unknowns
The Cardinality of which is referred to as the Arity of the Expression
Constituent representational Symbols composed of :
Algebra
Laws of Composition ( LOC’s )
Governs the partition of the Expression
into well-formed Operands or Terms
( the Cardinality of which is a multiple of Monomials )
© Art Traynor 2011
Mathematics
Arity
Arity
Expression
The enumeration of discrete symbolic elements ( Variables )
comprising a Mathematical Expression
is defined as its Arity

The Arity of an Expression can be represented by
a non-negative integer index variable ( ℤ + or ℕ ),
conventionally “ n ”

A Constant ( Arity n = 0 , index ℕ ) or Nullary term
represents a term that accepts no Argument

A Unary expression has Arity n = 1
A relation cannot be defined for
Expressions of Arity less than
two: n < 2
A Binary expression has Arity n = 2
All expressions possessing Arity n > 1 are n-ary, Multary, Multiary, or Polyadic
VariablesConstants
Operands
Expression
Polynomial
© Art Traynor 2011
Mathematics
Expression
Arity
Operand
 Arithmetic : a + b = c
The distinct elements of an Expression
by which the structuring Laws of Composition ( LOC’s )
partition the Expression into discrete Monomial Terms
 “ a ” and “ b ” are Operands
 The number of Variables of an Expression is known as its Arity
n Nullary = no Variables ( a Scalar Constant )
n Unary = one Variable
n Binary = two Variables
n Ternary = three Variables…etc.
VariablesConstants
Operands
Expression
Polynomial
n “ c ” represents a Solution ( i.e. the Sum of the Expression )
Arity is canonically
delineated by a Latin
Distributive Number,
ending in the suffix “ –ary ”
© Art Traynor 2011
Mathematics
Arity
Arity ( Cardinality of Expression Variables )
Expression
A relation can not be defined for
Expressions of Arity less than
two: n < 2
Nullary      n = 0    0-ary
Unary        n = 1    1-ary
Binary       n = 2    2-ary
Ternary      n = 3    3-ary
Quaternary   n = 4    4-ary
Quinary      n = 5    5-ary
Senary       n = 6    6-ary
Septenary    n = 7    7-ary
Octary       n = 8    8-ary
Nonary       n = 9    9-ary
                      n-ary
© Art Traynor 2011
Mathematics
Operand
Parity – Property of Operands
Parity
n is even if ∃ k | n = 2k
n is odd if ∃ k | n = 2k + 1
Integer Parity
Even , Even  →  Same Parity
Even , Odd   →  Opposite Parity
© Art Traynor 2011
Mathematics
Polynomial
Expression
A well-formed symbolic
representation of operands, of
discrete arity, upon which one
or more operations can
structure a Relation
Expression
Polynomial Expression
A Mathematical Expression ,
the Terms ( Operands ) of which are a compound composition of :
Polynomial
Constants – referred to as Coefficients
Variables – also referred to as Unknowns
And structured by the Polynomial Structure Criteria ( PSC )
arithmetic Laws of Composition ( LOC’s ) including :
Addition / Subtraction
Multiplication / Non-Negative Exponentiation
LOC ( Pn ) = { + , – , × , b^n ∀ n ≥ 0 }
Wiki: “ Polynomial ”
An equation excluded by the
Polynomial Structure Criteria ( PSC )
P( x ) = Σ_{i = 0}^{n} ai x^i = an x^n + an–1 x^(n–1) +…+ ak+1 x^(k+1) + ak x^k +…+ a1 x^1 + a0 x^0
ai : Coefficient
x : Variable
ai x^i : Polynomial Term
From the Greek Poly meaning many,
and the Latin Nomen for name
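The following is an added sketch (not part of the original slides) showing how a univariate Polynomial P( x ) = Σ ai x^i can be evaluated and its Degree read from a Coefficient list; the lowest-degree-first list ordering is an assumption of the example.

```python
def poly_eval(coeffs, x):
    # Evaluate P(x) = a0 + a1*x + ... + an*x**n by Horner's rule.
    # coeffs[i] holds the coefficient a_i (lowest degree first).
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

def poly_degree(coeffs):
    # Degree = index of the highest non-zero coefficient; None for the zero polynomial.
    for i in range(len(coeffs) - 1, -1, -1):
        if coeffs[i] != 0:
            return i
    return None

# P(x) = 3x^2 + 2x + 1  ->  P(2) = 17 , degree 2
print(poly_eval([1, 2, 3], 2), poly_degree([1, 2, 3]))
```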




© Art Traynor 2011
Mathematics
Degree
Expression
Polynomial
Degree of a Polynomial
Polynomial
Wiki: “ Degree of a Polynomial ”
The Degree of a Polynomial Expression ( PE ) is supplied by that
of its Terms ( Operands ) featuring the greatest Exponentiation
For a multivariate term PE , the Degree of the PE is supplied by that
Term featuring the greatest summation of Variable exponents

P = Variable Cardinality & Variable Product
Exponent Summation
& Term Cardinality
Arity  – Latin “ Distributive ” Number , suffix “ –ary ”
Degree – Latin “ Ordinal ” Number , suffix “ –ic ”
Terms  – Latin “ Distributive ” Number , suffix “ –nomial ”
0 =  Nullary   /  Constant
1 =  Unary     /  Linear
2 =  Binary    /  Quadratic
3 =  Ternary   /  Cubic
Terms : Monomial , Binomial , Trinomial
An Expression composed of
Constants ( Coefficients ) and
Variables ( Unknowns) with an
LOC of Addition, Subtraction,
Multiplication and Non-
Negative Exponentiation
© Art Traynor 2011
Mathematics
Degree
Polynomial
Degree of a Polynomial
Arity                         Degree                    Terms
Nullary      p = 0   0-ary    Constant                  Monomial
Unary        p = 1   1-ary    Linear                    Binomial
Binary       p = 2   2-ary    Quadratic                 Trinomial
Ternary      p = 3   3-ary    Cubic                     Quadranomial
Quaternary   p = 4   4-ary    Quartic
Quinary      p = 5   5-ary    Quintic
Senary       p = 6   6-ary    Sextic ( aka Hexic )
Septenary    p = 7   7-ary    Septic ( aka Heptic )
Octary       p = 8   8-ary    Octic
Nonary       p = 9   9-ary    Nonic
             “ n ”-ary        Decic ( degree 10 )
Wiki: “ Degree of a Polynomial ”
© Art Traynor 2011
Mathematics
Degree
Expression
Polynomial
Degree of a Polynomial
Polynomial
Wiki: “ Degree of a Polynomial ”
An Expression composed of
Constants ( Coefficients ) and
Variables ( Unknowns) with an
LOC of Addition, Subtraction,
Multiplication and Non-
Negative Exponentiation
The Degree of a Polynomial Expression ( PE ) is supplied by that
of its Terms ( Operands ) featuring the greatest Exponentiation
For a PE with multivariate term(s) ,
the Degree of the PE is supplied by
that Term featuring the greatest summation
of individual Variable exponents

P( x ) = ai xi^0               Nullary Constant Monomial     ( Univariate )
P( x ) = ai xi^1               Unary Linear Monomial         ( Univariate )
P( x ) = ai xi^2               Unary Quadratic Monomial      ( Univariate )
P( x , y ) = ai xi^1 yi^1      Binary Quadratic Monomial     ( Bivariate )
© Art Traynor 2011
Mathematics
Degree
Expression
Polynomial
Degree of a Polynomial
Polynomial
Wiki: “ Degree of a Polynomial ”
The Degree of a Polynomial Expression ( PE ) is supplied by that
of its Terms ( Operands ) featuring the greatest Exponentiation
For a multivariate term PE , the Degree of the PE is supplied by that
Term featuring the greatest summation of Variable exponents

P( x ) = ai xi^0                        Nullary Constant Monomial     ( Univariate )
P( x ) = ai xi^1                        Unary Linear Monomial         ( Univariate )
P( x ) = ai xi^2                        Unary Quadratic Monomial      ( Univariate )
P( x , y ) = ai xi^1 yi^1               Binary Quadratic Monomial     ( Bivariate )
P( x , y , z ) = ai xi^1 yi^1 zi^1      Ternary Cubic Monomial        ( Trivariate / Multivariate )
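A second added sketch: the Degree rule just illustrated (a multivariate Term's degree is the summation of its Variable exponents; the Polynomial's Degree is the greatest Term degree) written directly in code. The dict-of-exponents representation of a Term is an assumption of the example.

```python
def term_degree(exponents):
    # Degree of one term, e.g. x^1 * y^1 * z^1 -> 1 + 1 + 1 = 3
    return sum(exponents.values())

def polynomial_degree(terms):
    # Degree of the polynomial = greatest summation of variable exponents over its terms
    return max(term_degree(t) for t in terms)

# P(x, y, z) = 3*x*y*z + 5*x^2  ->  degree 3 (supplied by the x*y*z term)
P = [{"x": 1, "y": 1, "z": 1}, {"x": 2}]
print(polynomial_degree(P))  # 3
```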
© Art Traynor 2011
Mathematics
Quadratic
Expression
Polynomial
Quadratic Polynomial
Polynomial
Wiki: “ Degree of a Polynomial ”
A Unary or greater Polynomial
composed of at least one Term and :
Degree precisely equal to two
Quadratic ai xi
n ∀ n = 2
 ai xi
n yj
m ∀ n , m n + m = 2|:
Etymology
From the Latin “ quadrātum ” or “ square ” referring
specifically to the four sides of the geometric figure
Wiki: “ Quadratic Function ”
Arity ≥ 1
 ai xi
n ± ai + 1 xi + 1
n ∀ n = 2
Unary Quadratic Monomial
Binary Quadratic Monomial
Unary Quadratic Binomial
 ai xi
n yj
m ± ai + 1 xi + 1
n ∀ n + m = 2 Binary Quadratic Binomial
© Art Traynor 2011
Mathematics
Equation
Equation
Expression
An Equation is a statement or Proposition
( aka Formula ) purporting to express
an equivalency relation between two Expressions :

Expression
Proposition
A declarative expression
asserting a fact whose truth
value can be ascertained
Equation
A symbolic formula, in the form of a
proposition, expressing an equality relationship
Formula
A concise symbolic
expression positing a
relationship between
quantities
VariablesConstants
Operands
Symbols
Operations
The Equation is composed of
Operand terms and one or more
discrete Transformations ( Operations )
which can render the statement true
( i.e. a Solution )
Polynomial
© Art Traynor 2011
Mathematics
Equation
Solution
Solution and Solution Sets
 Free Variable: A symbol within an expression specifying where
a substitution may be made
Contrasted with a Bound Variable
which can only assume a specific
value or range of values
 Solution: A value when substituted for a free variable which
renders an equation true
Analogous to independent &
dependent variables
Unique Solution: only one solution
can render the equation true
( quantified by ∃! )
General Solution: constants are
undetermined
Particular Solution: constants are
value-specified ( bound )
Unique Solution
Particular Solution
General Solution
Solution Set
n A family (set) of all solutions –
can be represented by a parameter (i.e. parametric representation)
 Equivalent Equations: Two (or more) systems of equations sharing
the same solution set
Section 1.1, (Pg. 3)
Section 1.1, (Pg. 3)
Section 1.1, (Pg. 6)
Any of which could include a Trivial Solution
Section 1.2, (Pg. 21)
© Art Traynor 2011
Mathematics
Equation
Solution
Solution and Solution Sets
 Solution: A value when substituted for a free variable which
renders an equation true
Unique Solution: only one solution
can render the equation true
( quantified by ∃! )
General Solution: constants are
undetermined
Particular Solution: constants are
value-specified ( bound )
Solution Set
n For some function f with parameter c such that
f( xi , xi+1 ,…, xn–1 , xn ) = c
the family ( set ) of all solutions is defined to include
all members of the inverse image set such that
f( x ) = c  →  x ∈ f⁻¹( c )
f⁻¹( c ) = { ( ai , ai+1 ,…, an–1 , an ) ∈ Ti × Ti+1 × … × Tn–1 × Tn | f( ai , ai+1 ,…, an–1 , an ) = c }
where Ti × Ti+1 × … × Tn–1 × Tn is the domain of the function f
o f⁻¹( c ) = { } , the empty set ( no solution exists )
o |f⁻¹( c )| = 1 , exactly one solution exists ( Unique Solution, Singleton )
o f⁻¹( c ) is a finite set , a finite set of solutions exists
o f⁻¹( c ) is infinite , an infinite set of solutions exists
Inconsistent
Consistent
Section 1.1,
(Pg. 5)
© Art Traynor 2011
Mathematics
Linear Equation
Linear Equation
Equation
An Equation consisting of:
Operands that are either
Any Variables are restricted to the First Order n = 1
Linear Equation
An equation in which each term
is either a constant or the
product of a constant and (a)
variable[s] of the first order
Expression
Proposition
Equation
Formula
n Constant(s) or
n A product of Constant(s) and
one or more Variable(s)
The Linear character of the Equation derives from the
geometry of its graph which is a line in the R2 plane

As a Relation the Arity of a Linear Equation must be
at least two, or n ≥ 2 , or a Binomial or greater Polynomial

Polynomial
© Art Traynor 2011
Mathematics
Equation
Linear Equation
Linear Equation
 An equation in which each term is either a constant or the product
of a constant and (a) variable[s] of the first order
Term ai represents a Coefficient
b = Σi= 1
n
ai xi = ai xi + ai+1 xi+1…+ an – 1 xn – 1 + an xn
Equation of a Line in n-variables
 A linear equation in “ n ” variables, xi + xi+1 …+ xn-1 + xn
has the form:
n Coefficients are distributed over a defined field
( e.g. N , Z , Q , R , C )
Term xi represents a Variable ( e.g. x, y, z )
n Term a1 is defined as the Leading Coefficient
n Term x1 is defined as the Leading Variable
Section 1.1, (Pg. 2)
Section 1.1, (Pg. 2)
Section 1.1, (Pg. 2)
Section 1.1, (Pg. 2)
Coefficient = a multiplicative factor
(scalar) of fixed value (constant)
Section 1.1, (Pg. 2)
© Art Traynor 2011
Mathematics
Linear Equation
Equation
Standard Form ( Polynomial )
 Ax + By = C
 Ax1 + By1 = C
For the equation to describe a line ( no curvature )
the variable indices must equal one

 ai xi + ai+1 xi+1 …+ an – 1 xn –1 + an xn = b
 ai xi
1 + ai+1 x 1 …+ an – 1 x 1 + a1 x 1 = bi+1 n – 1 n n
ℝ
2
: a1 x + a2 y = b
ℝ
3
: a1 x + a2 y + a3 z = b
Blitzer, Section 3.2, (Pg. 226)
Section 1.1, (Pg. 2)
Test for Linearity
 A Linear Equation can be expressed in Standard Form
As a species of Polynomial , a Linear Equation
can be expressed in Standard Form
 Every Variable term must be of precise order n = 1
Linear Equation
An equation in which each term
is either a constant or the
product of a constant and (a)
variable[s] of the first order
Expression
Proposition
Equation
Formula
Polynomial
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation – Solution Consistency
 Solution: A value when substituted for a free variable which
renders an equation true
Unique Solution
Particular Solution
General Solution
Solution Set
n A family (set) of all solutions – can be represented by a parameter
No Solution - Inconsistent
    [ 1   2   – 1    4 ]
    [ 0   1    0     3 ]
    [ 0   0    0   – 2 ]
Represents “ 0 = – 2 ” ,
a contradiction,
and thus no solution { }
to the LE system for which
the augmented matrix stands
System :   1x1 + 0x2 – 3x3 = – 1
           0x1 + 1x2 – 1x3 = 0
System :   x1 – 3x3 = – 1
           x2 = x3
Section 1.1, (Pg. 8)
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation – Solution Consistency
 Solution: A value when substituted for a free variable which
renders an equation true
Solution Set - Consistent
n A family (set) of all solutions – can be represented by a parameter
1x1 + 0x2 – 3x3 = – 1
System
0x1 + 1x2 – 1x3 = 0
x1 = 3x3 – 1
System
x2 = x3
Note that x3 can be parameterized
( as a composite function f ○ g → ( f ○ g )( x ) = f ( g ( x )) with y = f ( u ) and u = g ( x3 ) )
x2 = x3 = u     ( Tautology / Identity )
x1 = 3u – 1     The solution set for “ f( u ) ” can thus be indexed by/over ℤ+ ,
representing a countably infinite solution set
Section 1.1, (Pg. 3)
© Art Traynor 2011
Mathematics
( Diagram: f maps x to f( x ) ; f⁻¹ maps f( x ) back to x )
Linear Algebra
Solution
Linear Equation – Solution Set
 Solution: A value when substituted for a free variable which
renders an equation true
Solution Set
n For some function f with parameter c such that
f ( xi , xi+1 ,…xn – 1 , xn ) = c
the family ( set ) of all solutions is defined to include
all members of the inverse image set such that
f ( x ) = c  f -1( c ) = x
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
ai,j xi + ai,j+1 xi+1 + . . . + ai,n–1 xn–1 + ai,n xn = bi
ai+1,j xi + ai+1,j+1 xi+1 + . . . + ai+1,n–1 xn–1 + ai+1,n xn = bi+1
        ⋮
am–1,j xi + am–1,j+1 xi+1 + . . . + am–1,n–1 xn–1 + am–1,n xn = bm–1
am,j xi + am,j+1 xi+1 + . . . + am,n–1 xn–1 + am,n xn = bm
Linear Equation – System
 A system of m linear equations in n variables
is a set of m equations ,
each of which is linear in the same n variables
Linear Equation System
Solution Set
The set S = { si , si+1 ,…sn-1 , sn } which renders
each of the equations in the system true
Section 1.1, (Pg. 4)
Section 1.1, (Pg. 4)
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation – Back Substitution
x – 2y = 5
y = – 2
System
x – 2 y = 5
x – 2 (– 2 ) = 5
x + 4 = 5
x = 5 – 4
x = 1
Solution Set – Singleton, Unique Solution, ( exactly one solution )
S = { 1, – 2 }
Section 1.1, (Pg. 6)
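An added sketch of Back Substitution on an upper-triangular system, reproducing the example above; numpy is assumed to be available.

```python
import numpy as np

def back_substitute(U, b):
    # Solve U x = b for an upper-triangular coefficient matrix U,
    # working from the last equation upward.
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# x - 2y = 5 , y = -2  ->  x = 1 , y = -2
U = np.array([[1.0, -2.0], [0.0, 1.0]])
b = np.array([5.0, -2.0])
print(back_substitute(U, b))  # [ 1. -2.]
```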
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation Equivalence
 Equivalent Linear Equations: Two (or more) systems of
linear equations sharing the same solution set
 Gaussian Elimination
Operations Producing Linear Equation Equivalent Systems
Permutation/Interchange – of two equations
Multiply – an equation by a non-zero constant
Add – a multiple of an equation to another equation
Section 1.1, (Pg. 6)
Section 1.1, (Pg. 7)
Otherwise known as Elementary Row Operations Section 1.2, (Pg. 14)
n ERO’s should always proceed with an Augend/Multiplicand of lesser
rank and Summand/Multiplier of greater rank ( Aij < Amn ) yielding
a Sum/Product substituted for the second Operand
n Multiplication by a scalar ( non-zero constant ) need not affect any change
in rank for the resultant row
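The three Elementary Row Operations can be sketched as functions on a numpy array (an added illustration; 0-based row indices are an assumption and differ from the slides' 1-based row labels).

```python
import numpy as np

def interchange(A, i, j):
    # Permutation / Interchange of rows i and j
    A[[i, j]] = A[[j, i]]
    return A

def scale(A, i, c):
    # Multiply row i by a non-zero constant c
    A[i] = c * A[i]
    return A

def add_multiple(A, i, j, c):
    # Add c times row j to row i
    A[i] = A[i] + c * A[j]
    return A

A = np.array([[1.0, -2.0, 3.0, 9.0],
              [-1.0, 3.0, 0.0, -4.0],
              [2.0, -5.0, 5.0, 17.0]])
add_multiple(A, 1, 0, 1.0)   # R2 + R1 -> R2'
add_multiple(A, 2, 0, -2.0)  # R3 - 2R1 -> R3'
print(A)
```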
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation – Row Echelon Form (REF)
 A matrix in Row-Echelon Form ( REF ) has three distinguishing characteristics
 Any rows consisting entirely of zeros are positioned at the bottom of the matrix
 For each row that does not consist entirely of zeros, the first non-zero entry is
a “ 1 ” ( called the Leading One, aka Pivot )

    [ 1   2   – 1    4 ]
    [ 0   1    0     3 ]
    [ 0   0    1   – 2 ]
    [ 0   0    0     0 ]

Section 1.1, (Pg. 6)
Section 1.2, (Pg. 15)
Section 1.2, (Pg. 15)
Section 1.2, (Pg. 16) Every matrix is row equivalent to a matrix in Row-Echelon Form ( REF )
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation – Row Echelon Form (REF)
 A matrix in Row-Echelon Form ( REF ) has three distinguishing characteristics
 For two successive ( non-zero ) rows, the leading one in the higher row is farther
to the left than the leading one ( Pivot ) in the lower row

    [ 1   2   – 1    4 ]        Section 1.1, (Pg. 6)
    [ 0   1    0     3 ]        Section 1.2, (Pg. 15)
    [ 0   0    1   – 2 ]
    [ 0   0    0     0 ]

 A matrix in Reduced Row-Echelon Form ( RREF ) has one additional characteristic
 Every column that has a leading one ( Pivot ) has zeros in every position
above and below its leading one ( Pivot )

    [ 1   0    0     4 ]        Section 1.2, (Pg. 15)
    [ 0   1    0     3 ]
    [ 0   0    1   – 2 ]
    [ 0   0    0     0 ]

 Every matrix is row equivalent to a matrix in Row-Echelon Form ( REF )
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation – Reduced Row Echelon Form (RREF)
System                               Augmented Matrix
   x – 2y + 3z = 9                     [ 1    – 2    3     9 ]
 – x + 3y      = – 4                   [ – 1    3    0   – 4 ]
  2x – 5y + 5z = 17                    [ 2    – 5    5    17 ]

R2 + R1  →  R2´
    R2  :   – 1     3     0    – 4
  + R1  :     1   – 2     3      9
  = R2´ :     0     1     3      5

                                       [ 1    – 2    3     9 ]
                                       [ 0      1    3     5 ]
                                       [ 2    – 5    5    17 ]
R3 – 2R1  →  R3´
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation – Reduced Row Echelon Form (RREF)
1
2
– 2
– 5
3
5
9
17
Augmented Matrix
R3 :
– 2R1 : – 2 4 – 6 – 18
= R3´ : 0 – 1 – 1 – 1
1
0
0
– 2
1
– 1
3
3
– 1
9
5
– 1
R3 + R2  R3´´
0 1 3 5 R3 – 2R1  R3´
2 – 5 5 17
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation – Reduced Row Echelon Form (RREF)
Augmented Matrix
R3 :
+ R2 :
= R3´´ : 0 0 2 4
1
0
– 2
1
3
3
9
5
R3 + R2  R3´´
1 – 2 3 9
0 1 3 5
0 – 1 – 1 – 1
0 – 1 – 1 – 1
0 1 3 5
0 0 2 4
R3  R3´´ ´1
2
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation – Reduced Row Echelon Form (RREF)
Augmented Matrix
R3 : 1 – 2 3 9
0 1 3 5
R3  R3´´ ´1
2
1 – 2 3 9
0 1 3 5
0 0 2 4
R3 :
1
2
R3´´ ´ :
0 0 2 4
0 0 1 2
0 0 1 2 0 0 1 2
Matrix is in
Row-Echelon Form ( REF )

Proceeding to
Reduced
Row-Echelon Form
( RREF )
R1 + 2R2  R1´
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation – Reduced Row Echelon Form (RREF)
Augmented Matrix
R1 :
+ 2R2 :
R1´ :
0 2 6 10
1 0 9 19 1 0 9 19
0 1 3 5
0 0 1 2
1 – 2 3 9
0 1 3 5
0 0 1 2
R1 + 2R2  R1´
1 – 2 3 9
R2 – 3R3  R2´´
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation – Reduced Row Echelon Form (RREF)
Augmented Matrix
R2 :
– 3R3 :
R2´´ :
0 0 – 3 – 6
0 1 0 – 1
1 0 9 19
0 1 0 – 1
0 0 1 2
R1 + 9R3  R1´´
1 0 9 19
0 1 3 5
0 0 1 2
R2 – 3R3  R2´´
0 1 3 5
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Linear Equation – Reduced Row Echelon Form (RREF)
Augmented Matrix
R1 :
– 9R3 :
= R1´´ :
0 0 – 9 – 18
1 0 0 1
0 1 0 – 1
0 0 1 2
R1 – 9R3  R1´´
1 0 9 19
0 1 0 – 1
0 0 1 2
1 0 0 1
1 0 9 19
Matrix is in
Reduced
Row-Echelon Form
( RREF )
© Art Traynor 2011
Mathematics
Matrices
Matrix
For positive integers m and n, an m x n (“m by n”) matrix is a rectangular array
populated by entries aij , located at the i-th row and the j-th column:
Linear Algebra
 m = n : the matrix is square, of order n
 The a11 , a22 , a33 , … , ann sequence of entries is the
main diagonal ( ↘ ) of the matrix
M = # of Rows
i = Row Number Index
N = # of Columns
j = Column Number Index
            C1     C2     C3    . . .    Cn
Row 1    [  a11    a12    a13   . . .    a1n  ]
Row 2    [  a21    a22    a23   . . .    a2n  ]
Row 3    [  a31    a32    a33   . . .    a3n  ]
  .         .      .      .              .
  .         .      .      .              .
Row m    [  am1    am2    am3   . . .    amn  ]
Section 1.2, (Pg. 13)
© Art Traynor 2011
Mathematics
Matrices
Matrix
For positive integers m and n, an m x n (“m by n”) matrix is a rectangular array
populated by entries aij , located at the i-th row and the j-th column:
Linear Algebra
            C1     C2     C3    . . .    Cn
Row 1    [  a11    a12    a13   . . .    a1n  ]
Row 2    [  a21    a22    a23   . . .    a2n  ]
Row 3    [  a31    a32    a33   . . .    a3n  ]
  .         .      .      .              .
Row m    [  am1    am2    am3   . . .    amn  ]
M = # of Rows
i = Row Number Index
N = # of Columns
j = Column Number Index
Section 1.2, (Pg. 13)
 “ i ” is the row subscript
 “ j ” is the column subscript
© Art Traynor 2011
Mathematics
Linear Algebra
Definitions
Diagonal Matrix
A matrix Anx n is said to be diagonal when all of the entries outside the
main diagonal ( ↘ ) are zero
Section 2.1, (Pg. 50)
The matrix Dnx n is diagonal if :
dij = 0 if i ≠ j , ∀ dij ∉ { dii , di+1,i+1 ,…, dn–1,n–1 , dnn }


Any square diagonal matrix is also a Symmetric Matrix
A diagonal matrix is also both Upper-Triangular and Lower-Triangular
The Identity Matrix In is a diagonal matrix
Any square Zero Matrix is a diagonal matrix

            [  d11    d12    d13   . . .    d1n  ]
Dnx n  =    [  d21    d22    d23   . . .    d2n  ]
            [  d31    d32    d33   . . .    d3n  ]
            [  .      .      .              .    ]
            [  dm1    dm2    dm3   . . .    dmn  ]
© Art Traynor 2011
Mathematics
Linear Algebra
Definitions
Matrix Trace
The trace of a matrix Anx n is the sum of the main diagonal entries Section 2.1, (Pg. 50)
Tr ( A ) = Σ_{i = 1}^{n} aii = a11 + a22 +…+ an–1,n–1 + ann
a11
a21
a31
am1
.
.
.
a12
a22
a32
am2
.
.
.
a13
a23
a33
am3
.
.
.
. . .
. . .
. . .
. . .
.
.
.
a1n
a2n
a3n
amn
.
.
.
A
© Art Traynor 2011
Mathematics
Matrices
Linear Algebra
1
– 1
2
– 4
3
0
3
– 1
– 4
5
– 3
6
x – 4y + 3z = 5
– x + 3y – z = – 3
2x – 4z = – 6
System Augmented Matrix
1
– 1
2
– 4
3
0
3
– 1
– 4
Coefficient Matrix
M = # of Rows
i = Row Number Index
N = # of Columns
j = Column Number Index
Augmented Matrix
 A matrix representing a system of linear equations including both
the coefficient and constant terms
Coefficient = a multiplicative factor
(scalar) of fixed value (constant)
Section 1.2, (Pg. 13)
Coefficient Matrix
 An augmented matrix excluding any constant terms and populated
only by the variable coefficients
Section 1.2, (Pg. 13)
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Gaussian Elimination With Back-Substitution
 Express the system of linear equations as an Augmented Matrix
Interchange – of two equations
Multiply – an equation by a non-zero constant
Add – a multiple of an equation to another equation
Section 1.2, (Pg. 16)
Every matrix is row equivalent to a matrix in Row-Echelon Form ( REF ) Section 1.2, (Pg. 16)
 Apply ERO’s to restate the matrix in Row Echelon Form (REF)
Section 1.2, (Pg. 13)
Section 1.2, (Pg. 14)
Section 1.1, (Pg. 6)
Section 1.2, (Pg. 15)
 Use Back Substitution to solve for unknown variables Section 1.1, (Pg. 6)
Order Matters! Operate from left-to-right
Multiply – an equation by a non-zero constant
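An added sketch of the whole procedure applied to the worked example of the earlier REF/RREF slides ( x – 2y + 3z = 9 , – x + 3y = – 4 , 2x – 5y + 5z = 17 ); numpy.linalg.solve is used only as a cross-check, not as the slides' method.

```python
import numpy as np

A = np.array([[1.0, -2.0, 3.0],
              [-1.0, 3.0, 0.0],
              [2.0, -5.0, 5.0]])
b = np.array([9.0, -4.0, 17.0])

# Forward elimination to Row-Echelon Form (no row interchange needed here),
# then back substitution -- the two steps named above.
M = np.hstack([A, b.reshape(-1, 1)])
n = len(b)
for k in range(n):
    M[k] = M[k] / M[k, k]                 # make the leading entry a 1 (Pivot)
    for i in range(k + 1, n):
        M[i] = M[i] - M[i, k] * M[k]      # eliminate below the pivot
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = M[i, n] - M[i, i + 1:n] @ x[i + 1:]
print(x)                                  # [ 1. -1.  2.]
print(np.linalg.solve(A, b))              # cross-check: same solution
```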
© Art Traynor 2011
Mathematics
Linear Algebra
Solution
Gauss-Jordan Elimination
 Follow steps 1 & 2 of Gaussian Elimination
Section 1.2, (Pg. 19)
Every matrix is row equivalent to a matrix in Row-Echelon Form ( REF ) Section 1.2, (Pg. 16)
Apply ERO’s to restate the matrix in Row Echelon Form (REF)
Section 1.2, (Pg. 13)
Section 1.2, (Pg. 14)
Section 1.1, (Pg. 6)
Section 1.2, (Pg. 15)
 Use Back Substitution to solve for unknown variables Section 1.1, (Pg. 6)
Order Matters! Operate from left-to-right
Multiply – an equation by a non-zero constant

Express the system of linear equations as an Augmented Matrix
n Interchange – of two equations
n Multiply – an equation by a non-zero constant
n Add – a multiple of an equation to another equation
 Keep Going!
Continue to apply ERO’s until matrix assumes
Reduced Row Echelon Form ( RREF )
 Section 1.2, (Pg. 15)
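For the “ Keep Going ” RREF step, an added sketch using sympy's Matrix.rref() (an outside library, assumed available) reproduces the reduced form of the same worked example.

```python
from sympy import Matrix

# Augmented matrix of the worked example from the earlier slides
M = Matrix([[1, -2, 3, 9],
            [-1, 3, 0, -4],
            [2, -5, 5, 17]])
rref_form, pivot_columns = M.rref()
print(rref_form)       # Matrix([[1, 0, 0, 1], [0, 1, 0, -1], [0, 0, 1, 2]])
print(pivot_columns)   # (0, 1, 2)
```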
© Art Traynor 2011
Mathematics
Linear Algebra
Homogeneity
Homogeneous Systems of Linear Equations
A linear equation system in which each of the constant terms is zero
Section 1.2, (Pg. 21)
ai,j xi + ai,j+1 xi+1 + . . . + ai,n–1 xn–1 + ai,n xn = 0
ai+1,j xi + ai+1,j+1 xi+1 + . . . + ai+1,n–1 xn–1 + ai+1,n xn = 0
        ⋮
am,j xi + am,j+1 xi+1 + . . . + am,n–1 xn–1 + am,n xn = 0
 A homogeneous LE system Must have At Least One Solution Section 1.2, (Pg. 21)
Every homogeneous LE system is Consistent
# Equations < # Variables  →  Infinitely Many Solutions
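An added numerical sketch of the last point: a homogeneous system with fewer equations than variables has non-trivial solutions; reading the null space from the SVD is an assumption of this sketch, not the slides' method.

```python
import numpy as np

# Two homogeneous equations in three variables: infinitely many solutions expected
A = np.array([[1.0, -2.0, 3.0],
              [2.0, -5.0, 5.0]])
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
null_basis = Vt[rank:]                      # rows spanning the solution (null) space
print(A @ np.zeros(3))                      # the trivial solution x = 0 always satisfies Ax = 0
print(np.allclose(A @ null_basis.T, 0))     # every null-space vector also solves Ax = 0 -> True
```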
© Art Traynor 2011
Mathematics
Matrix Representation
Linear Algebra
Matrix Representation Methods
 Uppercase Letter Designation
Section 1.2, (Pg. 40)
A , B , C
 Bracket-Enclosed Representative Element
[ aij ] , [ bij ] , [ cij ]
a11
a21
a31
am1
.
.
.
a12
a22
a32
am2
.
.
.
a13
a23
a33
am3
.
.
.
. . .
. . .
. . .
. . .
.
.
.
a1n
a2n
a3n
amn
.
.
.
 Rectangular Array
Brackets denote a Matrix ( i.e. not a specific element/real number)
© Art Traynor 2011
Mathematics
Matrix Equality
Linear Algebra
Matrix Equality Section 1.2, (Pg. 40)
A = [ aij ]
B = [ bij ]
are equal
when
Amxn = Bmxn
aij = bij
1 ≤ i ≤m
1 ≤ j ≤n
Row Matrix / Row Vector
A 1 x n ( “ 1 by n ” ) matrix is a single row:
        C1   C2   C3   . . .   Cn
a  =  [ a1   a2   a3   . . .   an ]
Column Matrix / Column Vector
An m x 1 ( “ m by 1 ” ) matrix is a single column:
        C1
b  =  [ b1 ]
      [ b2 ]
      [ b3 ]
      [ .  ]
      [ bm ]
© Art Traynor 2011
Mathematics
Matrix Operations
Linear Algebra
Matrix Summation Section 1.2, (Pg. 41)
A = [ aij ]
B = [ bij ]
is given
by
+ A + B = [ aij + bij ]
– 1
0
2
1
1
– 1
3
2
+ = ( – 1 + 1 ) =
( 0 + [ – 1] )
( 2 + 3 )
( 1 + 2 )
0
– 1
5
3
Scalar Multiplication
1
– 3
2
2
0
1
4
– 1
2
A = 3A =
3 ( 1 ) 3 ( 2 ) 3 ( 4 )
3 ( – 3 ) 3 ( 0 ) 3 ( – 1 )
3 ( 2 ) 3 ( 1 ) 3 ( 2 )
3
– 9
6
6
0
3
12
– 3
6
3A =
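An added numpy sketch of the two operations above; the entries echo the 2 x 2 sum and 3 x 3 scalar multiple reconstructed from this slide.

```python
import numpy as np

# Matrix summation: A + B = [ aij + bij ]
A = np.array([[-1, 2],
              [0, 1]])
B = np.array([[1, 3],
              [-1, 2]])
print(A + B)          # [[ 0  5]
                      #  [-1  3]]

# Scalar multiplication: 3A multiplies every entry by 3
A2 = np.array([[1, 2, 4],
               [-3, 0, -1],
               [2, 1, 2]])
print(3 * A2)         # [[ 3  6 12]
                      #  [-9  0 -3]
                      #  [ 6  3  6]]
```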
© Art Traynor 2011
Mathematics
Matrix Operations
Linear Algebra
Section 1.2, (Pg. 42)Matrix Multiplication
– 1
4
5
3
– 2
0
A =
A = [ aij ] Amx n
B = [ bij ] Bnx p
then AB = [ cij ] , where  cij = Σ_{k = 1}^{n} aik bkj = ai1 b1j + ai2 b2j +…+ ai,n–1 bn–1,j + ain bnj
The entries of Row “ Aik” ( the i-th row ) are multiplied by the entries of “ Bkj” ( the j-th column )
and sequentially summed through Row “ Ain” and Column “ Bnj” to form the entry at [ cij ]
– 3
– 4
2
1
B =
c11 c12
C = c21 c22
c31 c32
a11b11 + a12b21 a11b12 + a12b22
= a21b11 + a22b21 a21b12 + a22b22
a31b11 + a32b21 a31b12 + a32b22
Product Summation Operand Count
For Each Element of AB (single entry)
Product Summation (Column-Row) Index
 For the product of two matrices to be defined, the column count of the multiplicand matrix
must equal the row count of the multiplier matrix ( i.e. Ac = Br )

ABmx p
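An added sketch of the entry formula cij = Σk aik bkj as an explicit loop, cross-checked against numpy's @ operator; the 3 x 2 by 2 x 2 shapes mirror the A and B of this slide, though the particular entries here are illustrative.

```python
import numpy as np

def matmul(A, B):
    # c_ij = sum over k of a_ik * b_kj ; requires columns of A == rows of B
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "column count of A must equal row count of B"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[-1.0, 3.0], [4.0, -2.0], [5.0, 0.0]])   # 3 x 2
B = np.array([[-3.0, 2.0], [-4.0, 1.0]])               # 2 x 2
print(matmul(A, B))
print(np.allclose(matmul(A, B), A @ B))                # True
```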
© Art Traynor 2011
Mathematics
Systems Of Linear Equations
Linear Algebra
Linear Equation System
a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3
Matrix-Vector Notation
a11 a13
Ax = b  a21 a23
a31 a33
=
a12
a22
a32
A
x1
x2
x3
x
b1
b2
b3
b
© Art Traynor 2011
Mathematics
Systems Of Linear Equations
Linear Algebra
Partitioned Matrix Form (PMF)
Ax = b  =
A x
b
a11
a21
am1
.
.
.
a12
a22
am2
.
.
.
. . .
. . .
. . .
.
.
.
a1n
a2n
amn
.
.
.
x1
x2
xn
.
.
.
Ax = b  =
ai1
b
a11
a21
am1
.
.
.
x1
ai2
a12
a22
am2
.
.
.
+ x2 + . . . + xn
ain
a1n
a2n
amn
.
.
.
Ax = b  =
Ax
b
a11 x1
a21 x1
am1 x1
.
.
.
+ a12 x2 +
+ a22 x2 +
+ am2 x2 +
.
.
.
. . .
. . .
. . .
.
.
.
+ a1n xn
+ a2n xn
+ amn xn
.
.
.
Ax = x1 a1 + x2 a2 + . . . + xn an = b
© Art Traynor 2011
Mathematics
Section 2.1 Review
Linear Algebra
Section 2.1 Review
 Introduce Three Basic Matrix Operations
Matrix Addition
Scalar Multiplication
Matrix Multiplication
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Properties Of Matrices – Addition & Scalar Multiplication
Commutative
(Addition)
Associative
(Addition)A +( B + C ) = ( A +B ) + C
Changes Order of Operations
as per “PEM-DAS”, Parentheses
are the principal or first operation
A +B = B +A
Re-Orders Terms
Does Not Change
Order of Operations – PEM-DAS
Associative
(Multiplication)( cd ) A = c ( dA )
Distributive
( Scalar Over Matrix Addition )c ( A + B ) = c A + cB
Distributive
( Scalar Addition Over
Matrix Addition )
( c + d ) A = c A + dA
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Proofs
Let A = [ aij ], B = [ bij ]
 Introduce/Define/Declare The Constituents to be Proven
This statement declares A & B to be Matrices
Specifies the row & column count index variables
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Properties Of Matrices – Identities & Zero Matrices
Multiplicative Identity
Multiplicative Zero Identity
1A = A
Additive IdentityA + 0mx n = A
Additive InverseA + ( – A ) = 0mx n
c A = 0mx n
if c = 0
or A = 0mx n
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Properties Of Matrices – Matrix Multiplication
Distributive
( LHS )
A( BC ) = ( AB ) C
Order of Terms is preserved
Affects Order of Operations
Sequence – PEM-DAS
Distributive
( RHS )
Associative
( Scalar Over Matrix
Multiplication )
A( B + C ) = AB + AC
( A + B ) C = AC + BC
c ( AB ) = ( c A )B
= A ( c B )
Associative
(Multiplication)
Order of Terms is preserved
Order of Terms is preserved
Order of Terms is preserved
Order of Terms is preserved
AC = BC
CA = CB
( C is invertible )
Right Cancellation Property
A = B
if then
Left Cancellation Property
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Properties Of Matrices – Proofs
A( BC ) = ( AB ) C
Order of Terms is preserved
Affects Order of Operations
Sequence – PEM-DAS
Associative
(Multiplication)
Σk = 1
n
T = [ tij ] = ( aik bkj )ckjΣk = 1
n
Σi = 1
n
( yj )Σj = 1
n
( xi )
Σi = 1
n
( xi ) ( y1 + y2 +…+ yn –1 + yn )
( x1 + x2 +…+ xn –1 + xn ) y1 + ( x1 + x2 +…+ xn –1 + xn ) y2 +…
( x1 + x2 +…+ xn –1 + xn ) yn –1 + ( x1 + x2 +…+ xn –1 + xn ) yn
x1 y1 + x2 y1 +…+ xn –1 y1 + xn y1 + x1 y2 + x2 y2 +…+ xn –1 y2 + xn y2 +…
x1 yn –1 + x2 yn –1 +…+ xn –1 yn –1 + xn yn –1 + x1 yn + x2 yn +…+ xn –1 yn + xn yn
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Properties Of Matrices – Proofs
A( BC ) = ( AB ) C
Order of Terms is preserved
Affects Order of Operations
Sequence – PEM-DAS
Associative
(Multiplication)
Σk = 1
n
T = [ tij ] = ( aik bkj )ckjΣk = 1
n
Σi = 1
n
( yj )Σj = 1
n
( xi )
Σi = 1
n
( xi ) ( y1 + y2 +…+ yn –1 + yn )
( x1 + x2 +…+ xn –1 + xn ) y1 + ( x1 + x2 +…+ xn –1 + xn ) y2 +…
( x1 + x2 +…+ xn –1 + xn ) yn –1 + ( x1 + x2 +…+ xn –1 + xn ) yn
x1 y1 + x2 y1 +…+ xn –1 y1 + xn y1 + x1 y2 + x2 y2 +…+ xn –1 y2 + xn y2 +…
x1 yn –1 + x2 yn –1 +…+ xn –1 yn –1 + xn yn –1 + x1 yn + x2 yn +…+ xn –1 yn + xn yn
Σi = 1
n
Σj = 1
n
xi yj
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Properties Of Matrices – Proofs
A = [ aij ] Amx n
B = [ bij ] Bnx p
then AB = [ cij ] = Σk = 1
n
aik bkj = ai 1 b1 j + ai2 b2j +…+ ain –1 bn-1j + ain bnj
Product Summation Operand Count
Product Summation (Column-Row) Index
ABmx p
The entries of Row “ Aik” ( the i-th row ) are multiplied by the entries of “ Bkj” ( the j-th column )
and sequentially summed through Row “ Ain” and Column “ Bnj” to form the entry at [ cij ]

ai,1 b1, j + ai,1 b1, j +1 ai,1 b1,n – 1 + ai,1 b1,n
ai+1,2 b2, j + ai+1,2 b2, j +1 ai+1,2 b2, n – 1 + ai+1,2 b2, n
.
.
.
.
.
.
.
.
.
+ . . .+
+ . . .+
.
.
.
.
.
.
an – 1,n – 1bn – 1, j + an – 1,n – 1bn – 1, j +1 an – 1,n – 1bn – 1, n – 1 + an – 1,n – 1bn
an,nbn, j + an,n bn, j +1 an,n bn,n – 1 + an,n bn,n
+ . . .+
+ . . .+
Section 1.2, (Pg. 42)
For Each Element of AB (single entry)
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Identity Matrix
For Amx n
 A In = A
 Im A = A
Matrix Exponentiation
For Ak = AA…A
K factors
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Transpose Matrix
a11
a21
am1
.
.
.
a12
a22
am2
.
.
.
. . .
. . .
. . .
.
.
.
a1n
a2n
amn
.
.
.
A =
a11
a12
a1n
.
.
.
a21
a22
a2n
.
.
.
. . .
. . .
. . .
.
.
.
am1
am2
amn
.
.
.
AT =
1
2
0
2
1
0
0
0
1
C =
1
2
0
2
1
0
0
0
1
CT =
Symmetric Matrix: C = CT
If C = [ cij ] is a symmetric matrix, cij = cji for i ≠ j
C = [ cij ] is a symmetric matrix, Cmx n = CT
nx p for m = n = p
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
1
2
0
2
1
0
0
0
1
C =
1
2
0
2
1
0
0
0
1
CT =
If C = [ cij ] is a symmetric matrix, cij = cji , ∀ i,j | i ≠ j
C = [ cij ] is a symmetric matrix, Cmx n = CT nx p , ∀ m,n,p | m = n = p
Symmetric Matrix
A Symmetric Matrix is a
Square Matrix that is
equal to its Transpose ( e.g. Cmx n = CT mx n , ∀ m,n | m = n )

© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Properties Of Matrices – Transposes
( AT ) T = A
Transpose of a
Scalar Multiple
( A + B ) T = A T + B T
( c A ) T = c ( A T )
Transpose of a Transpose
Transpose of Sum
( AB ) T = B T A T Transpose of a Product
Reverse Order of Terms
( interchange multiplicand & multiplier
terms in the product expression )
Symmetry of A
Matrix & The Product
of Its Transpose
AAT = ( AAT ) T
ATA is also symmetric
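An added numerical spot-check of the transpose properties above; the random matrices are an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 2)).astype(float)
B = rng.integers(-3, 4, size=(2, 4)).astype(float)

print(np.allclose((A.T).T, A))              # transpose of a transpose
print(np.allclose((A @ B).T, B.T @ A.T))    # transpose of a product reverses the order
print(np.allclose(A @ A.T, (A @ A.T).T))    # A A^T is symmetric
```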
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Matrix Inverse
A matrix Anx n is Invertible or Non-Singular when
∃ matrix Bnx n | AB = BA = In
In : Identity Matrix of Order n
Bnx n : The Multiplicative Inverse of A
A matrix that does not have an inverse is Non-Invertible or Singular
Non-square matrices do not have inverses
n For matrix products Amx n Bnx p where m ≠ n ≠ p,
AB ≠ BA as [ aij ≠ bij ] ??


© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Matrix Inverse
There are two methods for determining an inverse matrix A-1
of A ( if an inverse exists):
Solve Ax = In for X

Adjoin the Identity Matrix In (on RHS ) to A forming the doubly-
augmented matrix [ A In ] and perform EROs concluding in RREF to
produce an [ In A-1 ] solution

A test for determining whether an inverse matrix A-1 of A exists:
Demonstrate that either/or AB = In = BA
Section 2.3 (Pg. 64)
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Matrix Inverse
Uniqueness Property
If A is invertible, then its inverse is Unique
Notation: The inverse of A is denoted as A-1
If A is invertible, then the LE system represented by Ax = b
has a Unique Solution given by x = A– 1 b

© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Matrix Inverse – by Matrix Equation
x11 x12
x21 x22
1
– 1
4
– 3
+ =
A x
1
0
0
1
In
For a coefficient matrix Anx n the A-1
nx n matrix is that whose product
yields a solution matrix to the corresponding In identity matrix

1x11 +
– 1x21 +
4x21
( – 3x21 )
=
Ax
1x11 +
– 1x21 +
4x21
( – 3x21 )
1
0
0
1
In
1x11 + 4x21 = 1
– 1x21 + ( – 3x21 ) = 0
1x11 + 4x21 = 0
– 1x21 + ( – 3x21 ) = 1  – 3
1
– 4
1
A-1Ax = In Ax = In
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Matrix Inverse – by Gauss-Jordan Elimination
x11 x12
x21 x22
1
– 1
4
– 3
+ =
A x
1
0
0
1
In
An invertible coefficient Anx n matrix can be combined with its corresponding
xnx n unknown/variable matrix to form an Axnx n = In equation matrix

This equation matrix is composed itself of identical coefficient
column vectors

1 x11 +
– 1 x21 +
4 x21
( – 3 x21 )
=
Ax
1 x11 +
– 1 x21 +
4 x21
( – 3 x21 )
1
0
0
1
In
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Matrix Inverse – by Gauss-Jordan Elimination
An invertible coefficient Anx n matrix can be combined with its corresponding
xnx n unknown/variable matrix to form an Axnx n = In equation matrix

This equation matrix is composed itself of identical coefficient
column vectors

1 x11 +
– 1 x21 +
4 x21
( – 3 x21 )
=
Ax
1 x11 +
– 1 x21 +
4 x21
( – 3 x21 )
1
0
0
1
In
1x11 + 4x21 = 1
– 1x21 + ( – 3x21 ) = 0
1x11 + 4x21 = 0
– 1x21 + ( – 3x21 ) = 1
Ax = In Ax = In
Rather than solve the two column equation vectors separately,
they can be solved simultaneously by adjoining the identity
matrix to the shared coefficient matrix

© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Matrix Inverse – by Gauss-Jordan Elimination ( GJE )
An invertible coefficient Anx n matrix can be combined with its corresponding
xnx n unknown/variable matrix to form an Axnx n = In equation matrix

This equation matrix is composed itself of identical coefficient
column vectors

1 x11 + 4 x21 = 1
– 1 x21 + ( – 3 x21 ) = 0
1 x11 + 4 x21 = 0
– 1 x21 + ( – 3 x21 ) = 1
Ax = In Ax = In
Rather than solve the two column equation vectors separately,
they can be solved simultaneously by adjoining the identity
matrix to the shared coefficient matrix…

1
– 1
4
– 3
A
1
0
0
1
In
…then execute ERO’s to effect a GJ-Elimination of the
“ doubly augmented ” [ A I ] matrix the conclusion of
which will yield an [ I A-1 ] inverse matrix
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Matrix Inverse – by Gauss-Jordan Elimination
An invertible coefficient Anx n matrix can be combined with its corresponding
xnx n unknown/variable matrix to form an Axnx n = In equation matrix

This equation matrix is composed itself of identical coefficient
column vectors

1 x11 + 4 x21 = 1
– 1 x21 + ( – 3 x21 ) = 0
1 x11 + 4 x21 = 0
– 1 x21 + ( – 3 x21 ) = 1
Ax = In Ax = In
The adjoined, “ doubly-augmented ” coefficient matrix , by means of ERO’s , is reduced by
GJ-Elimination to produce the [ I A-1 ] inverse matrix

1
– 1
4
– 3
A
1
0
0
1
In
 – 3
1
– 4
1
A-1
1
0
0
1
In
Which is confirmed by verifying either of the following
n AA-1 = I
n AA-1 = A-1 A
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Matrix Inverse – 2x2 Matrix ( Special Case )
For a square matrix A2x 2 given by:
        [ a   b ]
A  =    [ c   d ]        with  |A | = ad – cb
The inverse A-1 of the root matrix A2x 2 is given by the following product:
A-1 = ( 1 / ( ad – cb ) ) [ d   – b ]
                          [ – c   a ]
The difference of the diagonal products forms the
multiplicand denominator of the matrix whose
product yields the inverse of the root matrix
“ Negate ” & “ Switcheroo ”
Abstract Algebra,
Lecture 2 @ 18:30
The scalar multiple is the
inverse of the root matrix
Determinant!
The “multiplier” matrix is a
half-negated permutation of
the root matrix!
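An added sketch of the 2 x 2 special case: the reciprocal of ad – cb times the “ negate and switch ” matrix; the explicit zero-determinant guard is an assumption of the example.

```python
import numpy as np

def inverse_2x2(A):
    # A = [[a, b], [c, d]] ;  A^-1 = 1/(ad - cb) * [[d, -b], [-c, a]]
    a, b = A[0]
    c, d = A[1]
    det = a * d - c * b
    if det == 0:
        raise ValueError("matrix is singular (non-invertible)")
    return (1.0 / det) * np.array([[d, -b], [-c, a]])

A = np.array([[1.0, 4.0], [-1.0, -3.0]])   # the A used on the preceding inverse slides
print(inverse_2x2(A))                      # [[-3. -4.]
                                           #  [ 1.  1.]]
print(np.allclose(A @ inverse_2x2(A), np.eye(2)))   # True
```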
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Properties Of Inverse Matrices
A : An Invertible Matrix
k : A positive integer , Z+
c : A non-zero scalar, c ≠ 0
A– 1
Ak
c A
AT
are Invertible
and the following are true:
( A– 1 ) – 1 = A
( Ak ) – 1 = A– 1 A– 1 … A– 1 = ( A– 1 ) k
K factors
( cA ) – 1 = ( 1 / c ) A– 1
( AT ) – 1 = ( A– 1 ) T
Aj Ak = Aj+k
( Aj ) k = Ajk
( AB ) – 1 = B– 1 A– 1 ( B is also invertible )
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Properties of Matrix Exponentiation
Aj Ak = Aj+k
( Aj ) k = Ajk
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Elementary Matrices
An Elementary Matrix, Anx n is:
A square matrix ( n x n )
Obtained from a corresponding Identity Matrix In

Results from a single Elementary Row Operation ( ERO )
If E is an Elementary Matrix, then:
E is obtained from an ERO on a corresponding Identity Matrix Im

EA is the product of the same ERO performed on an Am x n matrix
Matrices Amx n & Bmx n are Row Equivalent when:
∃ a finite set of Elementary Matrices E1 , E2 ,… , Ek such that
B = Ek Ek – 1 … E2 E1 A
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Elementary Matrices - Properties
If E is an Elementary Matrix then:
E– 1 exists
E– 1 is an Elementary Matrix
A square matrix A is invertible if-and-only-if:
A can be expressed as the product of elementary matrices
Every Elementary Matrix has an inverse
Matrix Equivalency conditions, for Anx n matrix:
“ A ” is invertible
Ax = b has a unique solution for every n x 1 column matrix b
Ax = 0 has only the trivial solution
“ A ” is row-equivalent to In

“ A ” can be written as the product of elementary matrices
© Art Traynor 2011
Mathematics
Linear Algebra
Definitions
Upper & Lower Triangular Matrices
a11
a21
a31
am1
.
.
.
0
a22
a32
am2
.
.
.
0
0
a33
am3
.
.
.
. . .
. . .
. . .
. . .
.
.
.
0
0
0
amn
.
.
.
L
For an Anx n square matrix:
“ L ” is a lower triangular matrix where all entries above the Main Diagonal
are zero, and only the lower half is populated with non-zero entries.

a11
0
0
0
.
.
.
a12
a22
0
0
.
.
.
a13
a23
a33
0
.
.
.
. . .
. . .
. . .
. . .
.
.
.
a1n
a2n
a3n
amn
.
.
.
U
“ U ” is an upper triangular matrix where all
entries below the Main Diagonal are zero, and
only the upper half is populated with non-zero
entries.

© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
LU Factorization
A square matrix Anx n can be written as a product A = LU if:
“ L ” is a lower triangular matrix where all entries above the Main Diagonal are
zero, and only the lower half is populated with non-zero entries, and …

“ U ” is an upper triangular matrix where all entries below the Main Diagonal are
zero, and only the upper half is populated with non-zero entries
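An added sketch using scipy (assumed available); scipy.linalg.lu also returns a permutation factor P, so the identity checked is A = P L U rather than the idealized A = LU stated above.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])
P, L, U = lu(A)
print(np.allclose(A, P @ L @ U))   # True
print(np.allclose(L, np.tril(L)))  # L is lower triangular
print(np.allclose(U, np.triu(U)))  # U is upper triangular
```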
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Determinants
Every square matrix Anx n can be associated with a real number
defined as its Determinant

Notation: det ( A ) = |A |
a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2
Example:
2-LE System with (2) unknowns  yields solutions with common denominators
x1 = ( b1 a22 – b2 a12 ) / ( a11 a22 – a21 a12 ) ,    x2 = ( b2 a11 – b1 a21 ) / ( a11 a22 – a21 a12 )
Determinant of a 2 x 2 Matrix
        [ a11   a12 ]
A  =    [ a21   a22 ]
det ( A ) = |A | = a11 a22 – a21 a12
Determinant is the difference
of the product of the diagonals
The Determinant is a
polynomial of Order “ n ”
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Determinants
Every square matrix Anx n can be associated with a real number
defined as its Determinant

Notation: det ( A ) = |A |
a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2
Example:
2-LE System with (2) unknowns  yields solutions with common denominators
x1 = ( b1 a22 – b2 a12 ) / ( a11 a22 – a21 a12 ) ,    x2 = ( b2 a11 – b1 a21 ) / ( a11 a22 – a21 a12 )
Determinant of a 2 x 2 Matrix
Determinant is the difference
of the product of the diagonals
The Determinant is a
polynomial of Order “ n ”
a
c
b
d = ad – cb
The Determinant is the Area
(an n-Manifold ) of the
parallelogram suggested by
the addition of the vectors
represented by the matrix
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Minors & Cofactors
For a square matrix Anx n

The Minor Mij
of the entry aij
is the determinant of the matrix
obtained by deleting the ith row and jth column of A

The Cofactor Cij
of the entry aij
is Cij = ( – 1 )i+j Mij

Example:
a11 a13
a21 a23
a31 a33
a12
a22
a32
Minor of a21
a11 a13
a21 a23
a31 a33
a12
a22
a32
Minor of a22
a12
a32
a13
a33
, M21 =
a11
a31
a13
a33
, M22 =
A Minor IS A DETERMINANT!!
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Minors & Cofactors
For a square matrix Anx n

The Cofactor Cij
of the entry aij
is Cij = ( – 1 )i+j Mij

Example:
a11 a13
a21 a23
a31 a33
a12
a22
a32
Minor of a21
a11 a13
a21 a23
a31 a33
a12
a22
a32
Minor of a22
a12
a32
a13
a33
, M21 =
a11
a31
a13
a33
, M22 =
Cofactor of a21 Cofactor of a22
C21 = ( – 1 )2+1 M21 = – M21 C22 = ( – 1 )2+2 M22 = M22
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Determinant of a Square Matrix
For a square matrix Anx n of order n ≥ 2 , then:
The Determinant of A is
the sum of the entries in the first row of A
multiplied by their respective Cofactors

det ( A ) = |A | = Σ_{j = 1}^{n} a1j C1j = a11 C11 + a12 C12 +…+ a1,n–1 C1,n–1 + a1n C1n
The process of determining this sum is Expanding The Cofactors ( in the first row )
Section 3.1, (Pg. 106)
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Expansion By Cofactors
For a square matrix Anx n of order n , the determinant of A is given by:
An ith row expansion
det ( A ) = |A | = Σ_{j = 1}^{n} aij Cij = ai1 Ci1 + ai2 Ci2 +…+ ai,n–1 Ci,n–1 + ain Cin
A jth column expansion
det ( A ) = |A | = Σ_{i = 1}^{n} aij Cij = a1j C1j + a2j C2j +…+ an–1,j Cn–1,j + anj Cnj
Section 3.1, (Pg. 107)
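An added recursive sketch of Expansion by Cofactors along the first row, with C1j = ( – 1 )^( 1 + j ) M1j ; numpy.linalg.det is used only as a cross-check.

```python
import numpy as np

def det_cofactor(A):
    # Expand along the first row: det(A) = sum_j a_1j * C_1j ,  C_1j = (-1)^(1+j) * M_1j
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # delete row 1 and column j
        total += ((-1) ** j) * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[1.0, -2.0, 3.0],
              [-1.0, 3.0, 0.0],
              [2.0, -5.0, 5.0]])
print(det_cofactor(A), np.linalg.det(A))   # both 2.0 (up to rounding)
```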
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Determinants – 3x3 Matrix ( Special Case )
For a square matrix A3x 3 , the determinant of A is given by:
The first two columns are adjoined to the RHS of the matrix
a11 a13
a21 a23
a31 a33
a12
a22
a32
a11
a21
a31
a12
a22
a32
Product sums are formed by first multiplying along the main diagonal proceeding to the right
a11 a13
a21 a23
a31 a33
a12
a22
a32
a11
a21
a31
a12
a22
a32
➀ ➁ ➂
= a11a22 a33 + a12a23 a31 + a13a21 a32 = UD
Upper
Diagonal
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Determinants – 3x3 Matrix ( Special Case )
For a square matrix A3x 3 , the determinant of A is given by:
Remaining product differences are then formed by multiplying along the LHS bottom diagonal
proceeding to the right

a11 a13
a21 a23
a31 a33
a12
a22
a32
a11
a21
a31
a12
a22
a32
➃ ➄ ➅
= UD – a31a22 a13 – a32a23 a11 – a33a21 a12
= UD – LD Upper
Diagonal
minus
Lower
Diagonal
© Art Traynor 2011
Mathematics
Linear Algebra
Definitions
Diagonal Matrix
A matrix Anx n is said to be diagonal when all of the entries outside the
main diagonal ( ↘ ) are zero
Section 2.1, (Pg. 50)
The matrix Dnx n is diagonal if :
dij = 0 if i ≠ j "ij  { dii , di+1,i+1 ,…, dn–1,n–1 , dnn }


d11
d21
d31
dm1
.
.
.
d12
d22
d32
dm2
.
.
.
d13
d23
d33
dm3
.
.
.
. . .
. . .
. . .
. . .
.
.
.
d1n
d2n
d3n
dmn
.
.
.
Dnx n =
A matrix Anx n that is both upper AND lower triangular is said to be
diagonal

The determinant of a triangular matrix Dnx n is the product of its main diagonal elements
det ( D ) = |D | = Π_{i = 1}^{n} aii = a11 a22 … an–1,n–1 ann
© Art Traynor 2011
Mathematics
Linear Algebra
Definitions
EROs & Determinants (Properties)
Permutation:
[ A ]  Pij  [ B ]
det ( B ) = – det ( A )
|B | = – | A |

Multiplication by a Scalar:
[ A ]  cRi  [ B ]
det ( B ) = c det ( A )
|B | = c | A |

Addition to a Row Multiplied by a Scalar:
[ A ]  Ri + cRj  [ B ]
det ( B ) = det ( A )
|B | = | A |

There are three “effects” to a resultant
matrix which are unique to each of the
three EROs
Permutation
Scalar Multiplication
Row Addition
© Art Traynor 2011
Mathematics
Linear Algebra
Definitions
Zero Determinants
A matrix Anx n will feature a determinant of zero
det ( A ) = 0
|A | = 0
if any of the following pertain

One row/column of “ A ” consists of all zeros
Two rows/columns of “ A ” are equal
One row/column of “ A ” is a multiple of another
© Art Traynor 2011
Mathematics
Linear Algebra
Definitions
Determinant of a Matrix Product
For matrices Anx n & Bnx n , of order “ n ”
det ( AB ) = det ( A ) det ( B )
|AB | = | A | | B |

Determinant of a Scalar Multiple of a Matrix
For matrix Anx n of order “ n ” , and Scalar “ c ”
det ( cA ) = c^n det ( A )
|cA | = c^n | A |

Determinant of an Invertible Matrix
For matrix Anx n
A is invertible if-and-only-if
det ( A ) ≠ 0
|A | ≠ 0

Factors are not row-column specific
(for whatever reason??)
An invertible matrix must have a non-
zero determinant, elsewise one would be
dividing by zero to obtain the inverse of
the matrix (undefined)
© Art Traynor 2011
Mathematics
Linear Algebra
Definitions
Determinant of an Inverse Matrix
For matrix Anx n
A is invertible if-and-only-if
det ( A– 1 ) = 1 / det ( A )
|A– 1 | = 1 / |A |

Determinant of a Transpose
For matrix Anx n
det ( A ) = det ( AT )
|A | = |AT |

An invertible matrix must have a non-
zero determinant, elsewise one would be
dividing by zero to obtain the inverse of
the matrix (undefined)
© Art Traynor 2011
Mathematics
Linear Algebra
Definitions
Equivalent Conditions For A Non-Singular Matrix
For matrix Anx n , the following statements are equivalent
“ A ” is invertible
Ax = b has a unique solution for every n x 1 column matrix b
Ax = 0 has only the trivial solution
“ A ” is row-equivalent to In

“ A ” can be written as the product of elementary matrices
det ( A ) ≠ 0 ; |A | ≠ 0
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Adjoint of a Matrix
For a square matrix Anx n

The Cofactor Cij
of the entry aij
is Cij = ( – 1 )i+j Mij
 C11
C21
Cn1
.
.
.
C12
C22
Cn2
.
.
.
. . .
. . .
. . .
.
.
.
C1n
C2n
Cnn
.
.
.
Cofactor Matrix of A
C11
C12
C1n
.
.
.
C21
C22
C2n
.
.
.
. . .
. . .
. . .
.
.
.
Cn1
Cn2
Cnn
.
.
.
adj ( A ) =
Adjoint Matrix of A
The transpose of the Cofactor Matrix Cij
of “ A ” is Cij
T

© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Adjoint Equivalence with Matrix Inverse
For invertible matrix Anx n , A– 1 is defined by
A– 1 = ( 1 / det ( A ) ) adj ( A )
A– 1 = ( 1 / |A | ) adj ( A )

© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Cramer’s Rule
Given a square matrix Anx n in “ n ” equations ( i.e. LE count = Arity )
a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2
2-LE System with (2) unknowns  yields solutions with common denominators
x1 = ( b1 a22 – b2 a12 ) / ( a11 a22 – a21 a12 ) ,    x2 = ( b2 a11 – b1 a21 ) / ( a11 a22 – a21 a12 )
which denominator forms the Determinant of the matrix “ A ”
        | b1   a12 |                        | a11   b1 |
        | b2   a22 |                        | a21   b2 |
x1 =  ----------------- ,        x2 =  ----------------- ,        a11 a22 – a21 a12 ≠ 0
        | a11   a12 |                       | a11   a12 |
        | a21   a22 |                       | a21   a22 |

          | b1   a12 |                      | a11   b1 |
|A1 | =   | b2   a22 |   ,        |A2 | =   | a21   b2 |

x1 = |A1 | / |A |  ,    x2 = |A2 | / |A |
© Art Traynor 2011
Mathematics
Algebra Of Matrices
Linear Algebra
Cramer’s Rule
Given a system of “ n ” linear equations
in “ n ” variables ( i.e. LE count = Arity )
with coefficient matrix “ A ”
and non-zero determinant |A |
the solution of the system is given as:

x1 = |A1 | / |A |  ,   x2 = |A2 | / |A |  ,  …  ,   xn = |An | / |A |
where the ith column of Ai is the
“ constant ” vector in the LE system
Linear Equation System
a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3
Matrix-Vector Notation
Example:
                          | a11   a12   b1 |
                          | a21   a22   b2 |
                          | a31   a32   b3 |
x3 = |A3 | / |A |  =  ------------------------
                          | a11   a12   a13 |
                          | a21   a22   a23 |
                          | a31   a32   a33 |
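An added sketch of Cramer's Rule: each xi = |Ai | / |A | , where Ai is A with its ith column replaced by the constant vector b ; numpy.linalg.solve provides the cross-check.

```python
import numpy as np

def cramer(A, b):
    detA = np.linalg.det(A)
    if np.isclose(detA, 0):
        raise ValueError("|A| = 0 : Cramer's Rule does not apply")
    x = np.zeros(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b            # replace the i-th column with the constant vector
        x[i] = np.linalg.det(Ai) / detA
    return x

A = np.array([[1.0, -2.0, 3.0],
              [-1.0, 3.0, 0.0],
              [2.0, -5.0, 5.0]])
b = np.array([9.0, -4.0, 17.0])
print(cramer(A, b))             # [ 1. -1.  2.]
print(np.linalg.solve(A, b))    # same solution
```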
© Art Traynor 2011
Mathematics
Topological
Spaces
Space
Mathematical Space
Mathematical Space
A Mathematical Space is a
Mathematical Object that is
regarded as a species of Set
characterized by:

Structure
Hierarchy
and Inner Product
Spaces
Normed
Vector Spaces
Vector Spaces
Metric Spaces
Subordinate Spaces ( Subspaces ) inherit
the properties of Parent Spaces
such that subordinate Subspaces are said to Induce their
properties onto the parent spaces in a recursive fashion
e.g. an Algebra or Algebraic Structure
© Art Traynor 2011
Mathematics
ProjectiveEuclidean
Mathematical Space
Distance between two
points is defined
Distance is Undefined
Space
Mathematical Space
Hierarchy
Upper Level Classification
Second Level Classification Non-EuclideanEuclidean
Finite Dimensional Infinite Dimensional
Compact Non-Compact
Second Level Classification N
n
, Z
n
, Q
n
, R
n
, C
n
, E
n
This slide is very slippery
It really needs a deeper dive
to achieve necessary cogency
© Art Traynor 2011
Mathematics
MapFunction Morphism
A Relation between a Set of
inputs and a Set of permissible
outputs whereby each input is
assigned to exactly one output
A Relation as a Function but
endowed with a specific
property of salience to a
particular Mathematical Space
A Relation as a Map with the
additional property of Structure
preservation as between the
sets of its operation
Structure
A Set attribute by which several species of Mathematical
Object are permitted to attach or relate to the Set
which expand the enrichment of the Set

Space
Mathematical Space
Measure
The manner by which a
Number or Set Element is
assigned to a Subset
Algebraic Structure
A Carrier Set defined by one or
more Finitary Operations
Field
A non-zero Commutative Ring
with Multiplicative Inverses for all
non-zero elements
(an Abelian Group under Multiplication)
 
© Art Traynor 2011
Mathematics
FMM
A unique Relation between Sets
Structure
A Set attribute by which several species of Mathematical
Object are permitted to attach or relate to the Set
which expand the enrichment of the Set

Space
Mathematical Space
Measure
The manner by which a
Number or Set Element is
assigned to a Subset
Algebraic Structure
A Carrier Set defined by one or
more Finitary Operations
Field
A non-zero Commutative Ring
with Multiplicative Inverses for all
non-zero elements
(an Abelian Group under Multiplication)
Satisfies Group Axioms plus Commutativity
Arithmetic Operations are defined ( +, – , x ,÷ )
Salient to a Mathematical Space
Preserving of Structure
FMM = Function~Map~Morphism
Akin to the Holy Trinity
Topology
Those properties of a
Mathematical Object
which are invariant under
Transformation or Equivalence
Metric Space
A Set for which distance between
all Elements of the Set are defined
The Triangle Inequality
constitutes the principle Axiom
from which three subsidiary
axioms are derived
F ≡ C R Q Z N
© Art Traynor 2011
Mathematics
Topology
Structure
A Set attribute by which several species of Mathematical
Object are permitted to attach or relate to the Set
which expand the enrichment of the Set

Space
Mathematical Space
Manifold
A Topologic Space resembling a
Euclidean Space whose features
may be charted to Euclidean
Space by Map Projection
Metric Space
A Set for which the distance between
all Elements of the Set is defined
Riemann Manifold
Order
A Binary Set Relation exhibiting
the Reflexive, Antisymmetric,
and Transitive properties
Equivalence Class
Those properties of a
Mathematical Object
which are invariant under
Transformation or Equivalence
Surface of a Sphere is not a
Euclidean Space!
A Real Manifold enriched with an
inner product on the Tangent Space
varying smoothly at each point
Geometry
A Complete, Locally Homogeneous,
Riemann Manifold
Scale Invariant - Exhibits Multiplicative Scaling
Convergent
A Binary Set Relation exhibiting
the Reflexive, Symmetric, and
Transitive properties
© Art Traynor 2011
Mathematics
Topology
Structure
A Set attribute by which several species of Mathematical
Object are permitted to attach or relate to the Set
thereby enriching the Set

Space
Mathematical Space
Manifold
A Topologic Space resembling a
Euclidean Space whose features
may be charted to Euclidean
Space by Map Projection
Order
A Binary Set Relation exhibiting
the Reflexive, Antisymmetric,
and Transitive properties
Equivalence Class
Those properties of a
Mathematical Object
which are invariant under
Transformation or Equivalence
Surface of a Sphere is not a
Euclidean Space!
A Binary Set Relation exhibiting
the Reflexive, Symmetric, and
Transitive properties
Differential Structures
A Structure on a Set rendering the
Set into a Differential Manifold with
n-dimensional Continuity defined
by a C^k Atlas of Bijections ( Charts )
Categories
Comprised of
Object and Morphism Classes
and Morphisms relating the Objects
admitting Composition
and satisfying the
Associativity and Identity Axioms
© Art Traynor 2011
Mathematics
FMM
A unique Relation between Sets
Structure
A Set attribute by which several species of Mathematical
Object are permitted to attach or relate to the Set
thereby enriching the Set

Space
Mathematical Space
Measure
The manner by which a
Number or Set Element is
assigned to a Subset
Salient to a Mathematical Space
Preserving of Structure
FMM = Function~Map~Morphism
Akin to the Holy Trinity
Functions   f : X → Y
Injective – One-to-One
Surjective – Onto
Bijective ( Inversive ) – One-to-One & Onto
A Function which returns
a CoDomain equivalent to
the Domain of another
Function returning that
same CoDomain
aka: Automorphism
© Art Traynor 2011
Mathematics
Subspace
Subspace ( General )
Mathematical Space
Somewhat trivially, a mathematical Subspace
is a Subset
of a parent Mathematical Space
which inherits and enriches the Structure
of the superordinating Mathematical Space
 Closed under the operation of
Addition and Scalar Multiplication
And which satisfy the ten axioms
governing vector space elements
Mathematical
Space
M
Mathematical
Space
Given Mathematical Space “ M “①
Example:
© Art Traynor 2011
Mathematics
Subspace
Subspace ( General )
Mathematical Space
Somewhat trivially, a mathematical Subspace
is a Subset
of a parent Mathematical Space
which inherits and enriches the Structure
of the superordinating Mathematical Space
 Closed under the operation of
Addition and Scalar Multiplication
And which satisfy the ten axioms
governing vector space elements
Mathematical
Space
M
Mathematical
Space
Given Mathematical Space “ M ”①
Example:
Set Theory entails that at least two improper subspaces
are constituent of the Space: the Empty Set and “ M ” itself
Proof:
•Let M be a Space over some field F.
•Every Space must contain at least two elements:
the empty set { } , and itself Ms
P ( M )
Ms
© Art Traynor 2011
Mathematics
Subspace
Subspace ( General )
Mathematical Space
Somewhat trivially, a mathematical Subspace
is a Subset
of a parent Mathematical Space
which inherits and enriches the Structure
of the superordinating Mathematical Space
 Closed under the operation of
Addition and Scalar Multiplication
And which satisfy the ten axioms
governing vector space elements
Mathematical
Space
M
Mathematical
Space
Given Mathematical Space “ M ”①
Example:
Set Theory entails that at least two improper subspaces
are constituent of the Space: the Empty Set and “ M ” itself
P ( M )
Ss
② We next introduce into this Power Set / Spanning space
a well defined non-zero element “ S ” and at least one
additional Structuring operation (∗) such that
Si ∗ Si Fi  Ss
Si
∗
© Art Traynor 2011
Mathematics
Vector Space
Vector Space ( General )
Mathematical Space
Vector Spaces
Vector
Spaces
Metric
Spaces
Topological
Spaces
A Vector Space V
is a species of Set over a Field F of scalars (e.g. R or C )
whose constituent point elements can be uniquely characterized by
an ordered tuple of n-dimension ( Vectors )
Structured by the Superposition Principle ( and its derivative
Linear Operations ):

Addition ( aka Additivity Property )
A function that assigns to the combination of any two or more elements
of the space a resultant unique n-tuple ( Vector ) composed of the
sum of the respective operand vector components.
f ( 〈 a , b 〉 ) = 〈 an + bn 〉 = 〈 rn 〉 = r ( e.g. rn ∈ Rn )
f : V + V → V ,  a ∈ V  ⟹  f ( a ) ∈ V
Closed under the operation of
Addition and Scalar Multiplication
And which satisfy the ten axioms
governing vector space elements
The simplest Vector Space is the
space populated by only the Field
itself, known as a Coordinate Space
Vector Addition corresponds to the Motion of Translation
© Art Traynor 2011
Mathematics
Addition
Vectors
Vector (General )
x
y
O initial point
terminal point
Free Vector
r
A
 Sum of Vectors – Vector Addition (Tail –to–Tip)
B
a ( ax, ay )
b ( bx, by )
║a ║
║b ║
 Any two (or more) vectors can be summed by positioning the operand
vector (or its corresponding-equivalent vector) tail at the tip of the
augend vector.
 The summation (resultant) vector is then extended from (tail) the
origin (tail) of the augend vector to the terminal point (tip) of the
operand vector (tip-to-tip/head-to-head).
ry
rx
r ( rx , ry )
θ
“ Tail-to-Tip ”
“ Tip-to-Tip ”
Same procedure, sequence of
operations whether for vector
addition (summation) or vector
subtraction (difference)
Resultant is always tip-to-tip
Operands are oriented “ tip-to-tail ”
resultant vector is oriented “ tip-to-
tip ”
The resultant vector in a
summation always originates at
the displacement origin and
terminates coincident at the
terminus of the final displacement
vector (e.g. tip-to-tip)
Chump Alert: A vector summation
is a species of Linear Combination
© Art Traynor 2011
Mathematics
Vector Space
Vector Space ( General )
Mathematical Space
Vector Spaces
Vector
Spaces
Metric
Spaces
Topological
Spaces
A Vector Space V
is a species of Set
over a Field F of scalars (e.g. R or C )
whose constituent point elements can be uniquely characterized by
an ordered tuple of n-dimension ( Vectors )
Structured by the following Linear Operations:

Addition
f ( c 〈 an 〉 ) = 〈 c an 〉 = 〈 rn 〉 = r ( e.g. rn ∈ Rn )
Scalar Multiplication
A function that assigns to the combination of any element of the
multiplicand field and any multiplier vector space element a resultant
unique n-tuple ( Vector ) composed of the product of the respective
multiplicand scalar and the constituent multiplier vector n-components
supra.
f : F x V → V ,  a ∈ V  ⟹  f ( a ) ∈ V
Closed under the operation of
Addition and Scalar Multiplication
And which satisfy the ten axioms
governing vector space elements
© Art Traynor 2011
Mathematics
Vectors
Vector (General )
PQ
x
y Position Vector
O initial point
Free Vector
C
 Vector Scalar Multiple – a species of Transformation in which a
scalar of magnitude greater than one effects an Expansion, a scalar of
magnitude less than one effects a Contraction ( Dilation ), and a negative scalar reverses direction.

O
C ( cax , cay )
A ( ax , ay )
c OA = OC
terminal point
Example: F = ma
Vector Scalar Multiple
Operands are oriented “ tip-to-tail ”
with the multiplicand ( vector to be
scaled ) “ scaled ” by the
multiplier-scalar.
The result constitutes a vector
addition of the product of the
scalar and the multiplicand
normalized unit vector (NUV) thus
preserving multiplicand orientation
in the result
c 〈 ax , ay 〉 = 〈 cax , cay 〉
Chump Alert: A vector scalar is a
species of Linear Combination
© Art Traynor 2011
Mathematics
Vector Space
Vector Space ( General )
Mathematical Space
Vector Spaces
Vector
Spaces
Metric
Spaces
Topological
Spaces
A Vector Space V is a species of Set over a Field F
of scalars (e.g. R or C ) whose constituent point elements can be
uniquely characterized by an ordered tuple of n-dimension ( Vectors )
Structured by the following Linear Operations:

Addition
Scalar Multiplication
Closed under the operation of
Addition and Scalar Multiplication
And which satisfy the ten axioms
governing vector space elements
Vector/Linear (VL ) Spaces are said to be “Algebraic”
VL Space operations define figures (subspaces?) such as lines and planes
The Dimension of a VL Space is determined by the maximal
number of Linearly Independent vectors ( identical to the minimal
number of vectors that Span the space)
Additional Structure apart from that characterizing general Vector
Space is needed to define Nearness, Angles, or Distance
A Vector Space V is a species of Set over a Field F
© Art Traynor 2011
Mathematics
Vector Space
Vector Space ( General )
Mathematical Space
Vector Spaces
Vector
Spaces
Metric
Spaces
Topological
Spaces
A Vector Space V is a species of Set over a Field F
of scalars (e.g. R or C ) whose constituent point elements can be
uniquely characterized by an ordered tuple of n-dimension ( Vectors )
Structured by the following Linear Operations:

Addition
Scalar Multiplication
Closed under the operation of
Addition and Scalar Multiplication
And which satisfy the ten axioms
governing vector space elements
Vector Spaces are said to be “Linear” spaces
(as distinct from Topological Spaces)
Vector/Linear (VL ) Spaces are said to be “Algebraic”
VL Space operations define figures (subspaces?) such as lines and planes
The Dimension of a VL Space is determined by the maximal
number of Linearly Independent vectors ( identical to the minimal
number of vectors that Span the space)
Additional Structure apart from that characterizing general Vector
Space is needed to define Nearness, Angles, or Distance
© Art Traynor 2011
Mathematics
Vector Space
Vector Space ( General )
Mathematical Space
Vector Spaces
Vector
Spaces
Metric
Spaces
Topological
Spaces
A Vector Space V is a species of Set over a Field F
of scalars (e.g. R or C ) whose constituent point elements can be
uniquely characterized by an ordered tuple of n-dimension ( Vectors )
Structured by the following Linear Operations:

Addition
Scalar Multiplication
Closed under the operation of
Addition and Scalar Multiplication
And which satisfy the ten axioms
governing vector space elements
The essential Structure of a Vector Space enables transformations of
its elements that correspond to classes of Motion
Vector Addition (as well as Scalar Multiplication, which is by extension
repeated vector addition) corresponds to Translation
Translation is classed as one of three species of Rigid Motion; the
other two, Rotation and Reflection, require additional Structure
A Vector is understood to represent a difference (Displacement )
between the respective values of its constituent ordered tuples
© Art Traynor 2011
Mathematics
Vectors
Vector Properties
Properties of Scalar Multiplication
If v represents any element of a vector space V ( v ∈ V )
and c represents any scalar, then the following properties pertain:

0( v ) = 0                Scalar Zero Element
c( 0 ) = 0                Scalar Multiple of the Zero Vector
– 1( v ) = – v            Scalar Multiplicative Inverse ( Negation )
If c( v ) = 0 then c = 0 or v = 0      Zero Vector Product Equivalence
© Art Traynor 2011
Mathematics
Vector Space Axioms – Addition Abstraction
Mathematical Space
Vector Space
A vector space is comprised of four elements: a set of vectors,
a set of scalars, and two operations:

u + v ( is in V ) Closure Under Addition
( u + v ) + w = u + ( v + w )
Changes Order of Operations
as per “PEM-DAS”, Parentheses
are the principal or first operation
u + v = v + u
Commutative Property
of Addition
Re-Orders Terms
Does Not Change
Order of Operations – PEM-DAS
Associative Property
of Addition
u + 0 = u Additive Identity
u + ( – u ) = 0 Additive Inverse
If V is a vector space
then ∃ 0 such that ∀ u ∈ V , u + 0 = u
and ∀ u ∈ V , ∃ – u such that u + ( – u ) = 0
Operations: Addition & Scalar Mult.
Section 4.2, (Pg. 155)
Represents “ 0 = – 2 ” ,
a contradiction,
and thus no solution {  }
to the LE system for which
the augmented matrix stands
Note that there is nothing
in these axioms that entails
Length/Distance or Magnitude of Vectors,
nor corresponding attributes
such as Angle or Nearness
© Art Traynor 2011
Mathematics
Multiplicative Identity
cu ( is in V )
Closure Under
Scalar Multiplication
c ( u + v ) = cu + cv Distributive
( c + d )u = cu + du Distributive
c( du ) = ( cd )u
Associative Property
of Multiplication
1( u ) = u
Vector Space Axioms – Scalar Multiplication Abstraction
A vector space is comprised of four elements: a set of vectors,
a set of scalars, and two operations:

Operations: Addition & Scalar Mult.
Section 4.2, (Pg. 155)
Let c = 0 and you therefore don’t need
to state a separate scalar multiplicative
zero element
Mathematical Space
Vector Space
Note that there is nothing
in these axioms that entails
Length/Distance or Magnitude of Vectors,
nor corresponding attributes
such as Angle or Nearness
© Art Traynor 2011
Mathematics
Normed Vector Space
Normed Vector Space
Mathematical Space
“Length” (aka Distance/Magnitude) is
one type of norm
The vector norm ║X║ is more formally
defined as the ℓ2-norm
Somewhat trivially, a Normed Vector Space
is a Vector Space
Structured by a Norm
 A norm is defined as a Mathematical
Structure of the species of a Measure.
Inner Product
Spaces
Normed
Vector Spaces
Vector Spaces
There are several species of Norm
A Norm on a vector space V is a function
that maps a vector a in V to an element r of a Field F
f : V R = a  V  f ( a )  F = rn ( e.g rn  R n
)
Magnitude ( aka Euclidean Norm, L2 Norm, or ℓ2 Norm, or “ Length ” ):
║a ║ = ║〈 ax , ay 〉║ = √( ax² + ay² )
p-Norm:   ║a ║p = ( Σ i = 1..n | ai |^p )^( 1 / p )
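A brief sketch ( illustrative only; the vector values are hypothetical ) of the ℓ2 norm and the general p-norm defined above:

# Norm sketch: the Euclidean (l2) norm and the general p-norm
# ||a||_p = ( sum_i |a_i|^p )^(1/p).
import numpy as np

a = np.array([3.0, 4.0])              # hypothetical vector

l2 = np.sqrt(np.sum(a**2))            # sqrt(ax^2 + ay^2) = 5.0
p = 3
p_norm = np.sum(np.abs(a)**p)**(1.0 / p)

print(l2, np.linalg.norm(a))          # both give the l2 norm
print(p_norm, np.linalg.norm(a, p))   # both give the p-norm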
© Art Traynor 2011
Mathematics
Magnitude
Vectors
Vector ( General ) Aka: Geometric or Spatial Vector
From the Latin Vehere (to carry)
x
y
O
θ
A ( ax , ay )
 Magnitude
a
Position Vector
PVF: Position Vector Form
( Unit Circle Form ( UCF ) diagram: right triangle with adjacent side a , opposite side b ,
hypotenuse r = c ; point P ( cos θ , sin θ ) on the Unit Circle ; components ax , ay )
In PVF the magnitude of a vector a = 〈 ax , ay 〉 is equivalent to the
hypotenuse ( c = ║a ║ ) of a right triangle whose adjacent side
( a ) is given by the coordinate ax , and whose opposite side ( b )
is given by the coordinate ay :
║a ║ = ║ 〈 ax , ay 〉 ║ = √( ax² + ay² )
Pythagorean Theorem derived
© Art Traynor 2011
Mathematics
Inner Product Space
Inner Product Space ( IPS )
An Inner Product Space
is a Vector Space
over a Field of Scalars (e.g. R or C )
Structured by an Inner Product

Inner Product
Spaces
Normed
Vector Spaces
For a Euclidean Space the Inner
Product is defined as the Dot
Product
Positive-Definite Symmetric
Bilinear Form
Length/Distance or Magnitude of Vectors
This Structure moreover defines IPS salients such as:
Vector Subtended Angle
Orthogonality of Vectors
These spaces have a well-ordered semantic construction of the form: A Vector Space with an Inner
Product “on” it…
“ The IPS of conventional multiplication over the field of R ”
“ The IPS of the dot product over the field of R ”
Mathematical Space
© Art Traynor 2011
Mathematics
Inner Product Space ( IPS ) Axioms
Inner Product
Spaces
Normed
Vector Spaces
A summation
of the Scalar Product
of the vector components, a = 〈 ax , ay 〉 , b = 〈 bx , by 〉
For a Euclidean Space the Inner
Product is defined as the Dot
Product
Note here the possibility of
describing the dot product as an
equivalence class for its alternate
expressions

to every n-tuple of vectors a and b in V, a scalar in F
Let V be a vector space over F , a Field of Scalars (e.g. R or C ) .
An Inner Product on V is a function that assigns,
f ( 〈 a , b 〉 ) = a1 b1 + a2 b2 + … + an bn = r ( e.g. r ∈ R )
Inner Product Space
Mathematical Space
© Art Traynor 2011
Mathematics
Inner Product Axioms
Given vectors u , v , and w in Rn , and scalars c , the following axioms pertain:
〈 u , v 〉 = 〈 v , u 〉 Symmetry
〈 u , v + w 〉 = 〈 u , v 〉 + 〈 u , w 〉 Additive Linearity
Positive Definiteness
Inner Product Space
Mathematical Space
c 〈 u , v 〉 = 〈 cu , v 〉 Multiplicative Linearity
〈 v , v 〉 ≥ 0 , and 〈 v , v 〉 = 0
if and only if v = 0
Section 5.2, (Pg. 237)
© Art Traynor 2011
Mathematics
Vectors
x
y Position Vector
O initial point
terminal point
Free Vector
A
B
O
A ( ax , ay )
B ( bx , by )
║a ║
║b ║
Vector ( Euclidean )
 Dot Product
The dot product of two vectors
is the scalar summation
of the product
Aka: Geometric or Spatial Vector
From the Latin Vehere (to carry)
PVF: Position Vector Form
UCF: Unit Circle Form
a · b = ax bx + ay by
of their components, a = < ax , ay > , b = < bx , by >
Also referred to as the Scalar Product or Inner Product Pythagorean Theorem derived
Inner (Dot) Product
© Art Traynor 2011
Mathematics
Vectors
x
y Position Vector
O initial point
terminal point
Free Vector
A
B
O
A ( ax , ay )
B ( bx , by )
║a ║
║b ║
Vector ( Euclidean )
 Dot Product & Angle Between Vectors
For any two non-zero vectors sharing a common initial point
the dot product of the two vectors is equivalent to
the product of their magnitudes and the cosine of the angle between
Aka: Geometric or Spatial Vector
From the Latin Vehere (to carry)
Inner (Dot) Product
θ θ
a · b = ax bx + ay by
a · b = ║b ║║a ║ cosθ
cos θ = ( a · b ) / ( ║a ║║b ║ )
You will be asked to find the angle
between two vectors sharing a
common initial point (origin)…a lot
θ = cos⁻¹ ( ( a · b ) / ( ║a ║║b ║ ) )
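A short sketch ( illustrative only; the two position vectors are hypothetical ) of finding the angle between two vectors from their dot product, as above:

# Angle-between-vectors sketch: cos(theta) = (a . b) / (||a|| ||b||).
import numpy as np

a = np.array([1.0, 2.0])   # hypothetical position vectors
b = np.array([3.0, 1.0])

cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))   # clip guards round-off

print(np.degrees(theta))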
© Art Traynor 2011
Mathematics
Vectors
Aka: Geometric or Spatial Vector
From the Latin Vehere (to carry)
x
y Position Vector
O
Free Vector
Physical Quantities represented
by vectors include: Displacement,
Velocity, Acceleration, Momentum,
Gravity, etc.
O
A ( ax , ay )
B ( bx , by )
a
b
c
A
B
θ θ
║a ║ cos θ
 Dot Product & Angle Between Vectors
For any two non-zero vectors sharing a common initial point
the dot product of the two vectors is equivalent to
the product of their magnitudes and the cosine of the angle between
Vector ( Euclidean )
a · b = ax bx + ay by
a · b = ║b ║║a ║ cosθ
= a · b
OB – OA = AB
Inner (Dot) Product
© Art Traynor 2011
Mathematics
Vectors
Aka: Geometric or Spatial Vector
From the Latin Vehere (to carry)
x
y
Position Vector
O
Free Vector
Physical Quantities represented
by vectors include: Displacement,
Velocity, Acceleration, Momentum,
Gravity, etc.
O
A ( a1, a2 )
B ( bx , by )
a
b
c
A
B
θ θ
║a ║ cos θ
║b ║
Area = ║b ║║a ║ cosθ
= a · b
 Dot Product & Angle Between Vectors
For any two non-zero vectors sharing a common initial point
the dot product of the two vectors is equivalent to
the product of their magnitudes and the cosine of the angle between
Vector ( Euclidean )
a · b = ax bx + ay by
OB – OA = AB
Inner (Dot) Product
© Art Traynor 2011
Mathematics
Vectors
x
y
Position Vector
O
A ( ax , ay )
B ( bx , by )
a
b
θ
 Vector Component as Projection
Vector ( Euclidean )
Inner (Dot) Product
②
③
④
The intersection of any two vectors with
common origin will feature a shared angle
( the “Angle Between” ).
①
② In Position Vector Form (PVF), the vector
system can be aligned so that the vector
common origin coincides with a coordinate
system origin and one of the vectors (the
Multiplier vector “ b ”) can then be aligned
along the x-coordinate axis
③ In this orientation the Multiplicand vector
“ a ” (if the angle between is acute) will
terminate in the first quadrant of the
coordinate system.
O
A
B
θ
Free Vector
①
④ Note that the X-component of a (i.e. ax) is
geometrically equivalent to a vertical
projection from a onto the X-axis and b
ax
© Art Traynor 2011
Mathematics
Vectors
x
y
Position Vector
O
A ( ax , ay )
B ( bx , by )
a
b
θ
 Vector Component as Projection
Vector ( Euclidean )
Inner (Dot) Product
⑤ Recalling the trigonometric relationships
of the Unit Circle, it can be further noted
that the X-component of a (i.e. ax) –
previously noted to be geometrically
equivalent to a vertical projection from a
onto the X-axis and b – is also
geometrically equivalent to the product of
the length of a (its Magnitude) and the
cosine of the angle formed with the x-axis
ax
║a ║ = ║〈 ax , ay 〉║ = √( ax² + ay² )
ax = ║a ║ cos θ
compb a = ( a · b ) / ║b ║
( Unit Circle diagram: adjacent side a , opposite side b , hypotenuse r = c = 1 ;
cos θ , sin θ , tan θ marked )
© Art Traynor 2011
Mathematics
Vectors
x
y
Position Vector
O
A ( ax , ay )
B ( bx , by )
a
b
θ
 Vector Component as Projection
Vector ( Euclidean )
Inner (Dot) Product
⑤ Recalling the trigonometric relationships
of the Unit Circle, it can be further noted
that the X-component of a (i.e. ax) –
previously noted to be geometrically
equivalent to a vertical projection from a
onto the X-axis and b – is also
geometrically equivalent to the product of
the length of a (its Magnitude) and the
cosine of the angle formed with the x-axis
ax
║a ║ = ║〈 ax , ay 〉║ = √( ax² + ay² )
Another way to express this geometrical
equivalence is to note the inherent
relationship between the lengths of two
vectors in composition sharing a common
origin and the angle between the two
supplied by the inner (dot) product
relationship
( Unit Circle diagram: P ( cos θ , sin θ ) , adjacent side a , opposite side b ,
hypotenuse r = c = 1 ; OA = OP = AP = t = θ = 1 )
© Art Traynor 2011
Mathematics
Vectors
Aka: Geometric or Spatial Vector
From the Latin Vehere (to carry)
x
y
Position Vector
O
Physical Quantities represented
by vectors include: Displacement,
Velocity, Acceleration, Momentum,
Gravity, etc.
A ( a1, a2 )
B ( bx , by )
a
b
c
θ
║a ║ cos θ
║b ║
Area = ║b ║║a ║ cosθ
 Vector Component as Projection
For any two non-zero vectors sharing a common initial point
the dot product of the two vectors is equivalent to
the product of their magnitudes and the cosine of the angle between
Vector ( Euclidean )
Inner (Dot) Product
a · b = ║b ║║a ║ cosθ
© Art Traynor 2011
Mathematics
Vectors
Aka: Geometric or Spatial Vector
From the Latin Vehere (to carry)
xO
The notion of “component along” is
a direct consequence of the
definition of an inner (dot) product –
relating the two “sides” of a vector
“triangle” via a ratio given by the
cosine of the “angle between”
O
A ( ax , ay )
B ( b1, b2 )
a
b
c
A
Bθ θ
║a ║ cos θ
 Vector Component Along an Adjoining Vector
Vector ( Euclidean )
y
Position Vector
The component of OA along OB
( that has the same direction as OB )
is the dot product of OA with the unit vector b / ║b ║ :
compb a = ( a · b ) / ║b ║ = a · ( b / ║b ║ )
û = u / ║u ║
Dot Product
© Art Traynor 2011
Mathematics
Vectors
Vector ( Euclidean )
 Dot Product ( Determinant Form )
The dot product of two vectors is the matrix product
of the 1 x n transpose of the multiplicand vector
and the n x 1 multiplier vector:
a = [ a1 , a2 , a3 , … , an ] T ,   b = [ b1 , b2 , b3 , … , bn ] T   ( n x 1 column vectors )
a · b = aTb = [ a1  a2  a3  …  an ] [ b1 , b2 , b3 , … , bn ] T
a · b = aTb = a1b1 + a2b2 + a3b3 + … + anbn
Inner (Dot) Product
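A quick sketch ( illustrative only; the vector values are hypothetical ) of the matrix-product form aTb agreeing with the componentwise dot product:

# Dot product as a matrix product: a . b = a^T b, the product of a
# 1 x n row (the transposed multiplicand) and an n x 1 column.
import numpy as np

a = np.array([[1.0], [2.0], [3.0]])   # n x 1 column vector
b = np.array([[4.0], [5.0], [6.0]])   # n x 1 column vector

aTb = a.T @ b                         # 1 x 1 matrix [[32.]]
print(aTb[0, 0], np.dot(a.ravel(), b.ravel()))   # 32.0 32.0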
© Art Traynor 2011
Mathematics
Vector Spaces – Rn
Vectors
Tuple Properties
An “ n-tuple” is characterized by the following:
A sequence
An ordered list
Comprising “ n ” elements ( n is a non-negative integer)
Canonical “ n-tuples ”
0-tuple: null tuple
1-tuple: singleton
2-tuple: ordered pair
3-tuple: triplet
© Art Traynor 2011
Mathematics
Vector Spaces – Rn
Vectors
Vector “ n-tuple ” Representation
An ordered n-tuple represents a vector in n-space
Section 4.1, (Pg. 149)
Of the form ( ai , ai+1 ,…an – 1 , an )
The Set of all n-tuples is n-space, denoted by Rn
An n-tuple can be rendered as a point in Rn whose coordinates
describe a unique vector a
n-tuples delineate Rn such that all points in Rn can be
represented by a unique n-tuple
Tuple Properties
© Art Traynor 2011
Mathematics
Vector Spaces – Rn
Vectors
“ n-tuple ” distinguished from a set
n-tuples delineate Rn space such that tuples
of disparate n-order are not equal: ( 1, 2, 3, 2 ) ≠ ( 1, 2, 3 ) ,
whereas the same sequences expressed
as elements of a set are equal: { 1, 2, 3, 2 } = { 1, 2, 3 }

Tuple elements are ordered: ( 1, 2, 3 ) ≠ ( 3, 2, 1 )
whereas for a set { 1, 2, 3 } = { 3, 2, 1 }

A tuple is composed of a finite population of elements
whereas a set may contain infinitely many elements

Tuple: Sequence Matters
Set: Sequence Does Not Matter
Tuple: Order Matters
Set: Order Does Not Matter
Tuple Properties
© Art Traynor 2011
Mathematics
Vector Spaces – Rn
Vectors
Tuples as Functions
An n-tuple can be rendered as a function “ F ”
the domain of which is represented by the tuple’s element index/indices or “ X ”
the codomain of which is represented by the tuple’s elements or “ Y ”
X = { i , i + 1 ,…, n – 1 , n }
( ai , ai+1 ,…an – 1 , an ) = ( X , Y , F )
( a1 , a2 ,…, an – 1 , an ) = ( X , Y , F )
X = { 1 , 2 ,…, n – 1 , n }
or
or
Y = { a1 , a2 ,…, an –1 , an }
F = { ( 1, a1 ) , ( 2, a2 ) ,…, ( n – 1, an – 1 ) , ( n, an ) }
Tuple Properties
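A small sketch ( illustrative only; the tuple values are hypothetical ) of reading an n-tuple as the function F = { ( 1, a1 ) , … , ( n, an ) } described above:

# Tuple-as-function sketch: an n-tuple viewed as a function F whose
# domain X is the index set and whose codomain Y is the element set.
a = ("a1", "a2", "a3", "a4")                       # hypothetical 4-tuple

X = set(range(1, len(a) + 1))                      # domain: {1, 2, ..., n}
Y = set(a)                                         # codomain: the elements
F = {i: a[i - 1] for i in range(1, len(a) + 1)}    # { (1, a1), ..., (n, an) }

print(X, Y, F)
print(F[2])    # the element assigned to index 2, i.e. "a2"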
© Art Traynor 2011
Mathematics
Definition
Vectors
Vector ( Euclidean )
A geometric object (directed line segment)
describing a physical quantity and characterized by
Direction: depending on the coordinate system used to describe it; and
Magnitude: a scalar quantity (i.e. the “length” of the vector)
Aka: Geometric or Spatial Vector
originating at an initial point [ an ordered pair : ( 0, 0 ) ]
and concluding at a terminal point [ an ordered pair : ( a1 , a2 ) ]
Other mathematical objects
describing physical quantities and
coordinate system transforms
include: Pseudovectors and
Tensors
 Not to be confused with elements of Vector Space (as in Linear Algebra)
 Fixed-size, ordered collections
 Aka: Inner Product Space
 Also distinguished from statistical concept of a Random Vector
From the Latin Vehere (to carry)
or from Vectus…to carry some-
thing from the origin to the point
constituting the components of the vector 〈 a1 , a2 〉
© Art Traynor 2011
Mathematics
Vectors
Vector ( Euclidean ) Aka: Geometric or Spatial Vector
From the Latin Vehere (to carry)
 Vector – Properties ( PVF Form)
Each (position) vector determines a unique Ordered Pair ( a1 , a2 )
The coordinates a1 and a2 form
the Components of vector 〈 a1 , a2 〉
x
y Position Vector
║a ║
O
θ
A ( a1, a2 )
initial point
terminal point
a
a1
a2


Position Vector
 A vector represented in PVF is Unique
There is precisely one free-vector equivalent in PVF: a = OA
The unique ordered pair describing the vector
is a unique n-tuple in Rn
© Art Traynor 2011
Mathematics
Vector Standard Operations in Rn
Vectors
Sum of Vectors: u + v
u + v = ( u1+ v1 , u2+ v2 , … , un –1 + vn –1 , un + vn )
Given u = ( ui , ui+1 ,…un – 1 , un ) and
v = ( vi , vi+1 ,…vn – 1 , vn )
Scalar Multiple of Vectors: cu
cu = ( cu1 , cu2 , … , cun –1 , cun )
Vector Operations
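A componentwise sketch ( illustrative only; u , v , and c are hypothetical values ) of the two standard operations just defined:

# Standard operations in R^n, componentwise, as defined above.
u = (1.0, 2.0, 3.0)
v = (4.0, 5.0, 6.0)
c = 2.0

u_plus_v = tuple(ui + vi for ui, vi in zip(u, v))   # (5.0, 7.0, 9.0)
cu       = tuple(c * ui for ui in u)                # (2.0, 4.0, 6.0)

print(u_plus_v, cu)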
© Art Traynor 2011
Mathematics
Addition
Vectors
Vector ( Euclidean )
x
y
O initial point
terminal point
Free Vector
r
A
 Sum of Vectors – Vector Addition (Tail –to–Tip)
B
a ( ax, ay )
b ( bx, by )
║a ║
║b ║
 Any two (or more) vectors can be summed by positioning the operand
vector (or its corresponding-equivalent vector) tail at the tip of the
augend vector.
 The summation (resultant) vector is then extended from (tail) the
origin (tail) of the augend vector to the terminal point (tip) of the
operand vector (tip-to-tip/head-to-head).
ry
rx
r ( rx , ry )
θ
“ Tail-to-Tip ”
“ Tip-to-Tip ”
Same procedure, sequence of
operations whether for vector
addition (summation) or vector
subtraction (difference)
Resultant is always tip-to-tip
Operands are oriented “ tip-to-tail ”
resultant vector is oriented “ tip-to-
tip ”
The resultant vector in a
summation always originates at
the displacement origin and
terminates coincident at the
terminus of the final displacement
vector (e.g. tip-to-tip)
Chump Alert: A vector summation
is a species of Linear Combination
© Art Traynor 2011
Mathematics
Vector Standard Operations in Rn
Vectors
Vector Operations
Difference of Vectors: u – v
u – v = ( u1 – v1 , u2 – v2 , … , un –1 – vn –1 , un – vn )
Given u = ( ui , ui+1 ,…un – 1 , un ) and
v = ( vi , vi+1 ,…vn – 1 , vn )
Scalar Multiplicative Inverse: – u  ( i.e. cu with c = – 1 )
– u = ( – u1 , – u2 , … , – un –1 , – un )
© Art Traynor 2011
Mathematics
Subtraction
Vectors
Vector ( Euclidean )
x
y
O
ry
rx
θ
initial
point
terminal point
Free Vector
r = a + bcorr
a
b
O
“ Tail-to-Tip ”
“ Tip-to-Tip ”
( Addition )
bcorr
– bcorr “ Tip-to-Tip ”
( Difference )
Position Vector
r = a – bcorr
Difference of Vectors – Vector Subtraction ( Tail –to–Tip )
 Any two (or more) vectors can be subtracted by positioning the tail of
a corresponding-equivalent subtrahend vector (initial point) at the
tip (terminal point) of the minuend vector.
 The difference (resultant) vector is then extended from the tail (initial
point ) of the minuend vector (tail-to-tail) to the terminal point
(tip) of the subtrahend vector (tip-to-tip).
minuend
subtrahend
Same procedure, sequence of
operations whether for vector
addition (summation) or vector
subtraction (difference)
Resultant is always tip-to-tip
r
r = a + bcorr
bcorr
– bcorr
a
b
Operands are oriented “ tip-to-tail ”
resultant vector is oriented “ tip-to-
tip ”
Chump Alert: A vector difference
is a species of Linear Combination
© Art Traynor 2011
Mathematics
Vector Properties – Additive Identity & Additive Inverse
Vectors
Vector Properties
Given vectors u , v , and w in Rn , and scalars c and d, the following
properties pertain

0v = 0 Scalar Zero Element
If u + v = v then u = 0
Additive Identity
is Unique
If v + u = 0 then u = – v Additive Inverse
is Unique
c0 = 0
Scalar Multiplicative Identity
of Zero Vector
If cv = 0 then c = 0 or v = 0 Zero Vector
Product Equivalence
– ( – v ) = v Negation Identity
Section 4.1, (Pg. 151)
© Art Traynor 2011
Mathematics
Vector Spaces – Classification
Real Number Vector Spaces
R = set of all real numbers
R2 = set of all ordered pairs
R3 = set of all ordered triplets
Rn = set of all n-tuple
Matrix Vector Spaces
Mm,n = set of all m x n matrices
Mn,n = set of all n x n square matrices
Section 4.2, (Pg. 157)
Vector Spaces
Vector Space
© Art Traynor 2011
Mathematics
Vector Spaces
Vector Spaces – Classification
Polynomial Vector Spaces
P = set of all polynomials
Pn = set of all polynomials of degree ≤ n
Continuous Functions ( Calculus ) Vector Spaces
C ( – ∞ , ∞) = set of all continuous functions
defined on the real number line

C [ a, b ] = set of all continuous functions
defined on a closed interval [ a, b ]

Section 4.2, (Pg. 157)
Vector Space
© Art Traynor 2011
Mathematics
Vector Subspaces
Subspace Definition
A non-empty subset W ( W ≠  ) of a vector space V
is a subspace of V when the following conditions pertain:

W is a vector space under addition in V
W is a vector space under scalar multiplication in V
Subspace Test
For a non-empty subset W ( W ≠  ) of a vector space V,
W is a subspace of V if-and-only if the following pertain:

If u and v are in W, then u + v is in W
If u is in W and c is any scalar, then cu is in W
Zero Subspace
W = { 0 }
Section 4.3, (Pg. 162)
Section 4.3, (Pg. 162)
Section 4.3, (Pg. 163)
Vector Space
© Art Traynor 2011
Mathematics
Vector Subspaces
W 1
Polynomial
functions
W 5
Functions
W 2
Differentiable
functions
W 3
Continuous
functions
W 4
Integrable
functions
W5 = Vector Space of all f Defined on [ 0, 1 ]
W4 = Set of all f Integrable on [ 0, 1 ]
W3 = Set of all f Continuous on [ 0, 1 ]
W2 = Set of all f Differentiable on [ 0, 1 ]
W1 = Set of all Polynomials Defined on [ 0, 1 ]
W1 ⊆ W2 ⊆ W3 ⊆ W4 ⊆ W5
W 1 – Every Polynomial function is Differentiable  ⟹  W1 ⊆ W2
W 2 – Every Differentiable function is Continuous  ⟹  W2 ⊆ W3
W 3 – Every Continuous function is Integrable  ⟹  W3 ⊆ W4
W 4 – Every Integrable function
is a Function  ⟹  W4 ⊆ W5
Function Space Section 4.3, (Pg. 164)
Vector Space
© Art Traynor 2011
Mathematics
Vector Subspaces
U
V W
V  W
Intersection of Subspaces
If V & W are both subspaces of a vector space U,
then the intersection of V & W ( V ∩ W )
is also a subspace of U

Vector Space
© Art Traynor 2011
Mathematics
Linear Combination of Vectors ( Definition )
A vector v in a vector space V
with scalars c = ( ci , ci+1 ,…cn – 1 , cn )
is a Linear Combination
of the vectors ( ui , ui+1 ,…un – 1 , un )
expressed as:

v = ci ui + ci+1 ui+1 +…+ cn –1 un – 1 + cn un
Section 4.1, (Pg. 152)
Section 4.4, (Pg. 169)
Linear Combination
Example:
S = { ( 1 , 3 , 1 ) , ( 0 , 1 , 2 ) , ( 1 , 0 , – 5 ) }
v1 v2 v3
v1 = 3v2 + v3
v1 = 3( 0 , 1 , 2 ) + (1 , 0 , – 5 )
v1 = ( 0 , 3 , 6 ) + (1 , 0 , – 5 )
v1 = ( 0 + 1 ) , ( 3 + 0 ) , ( 6 – 5 ) = ( 1 , 3 , 1 )
V1 can be expressed as a
combination of components of the
other two vectors in the set S
$ c  F | ( cv2 v3 )  v1
$ c  F | ( cv2  v3 )  v1
Chump Alert: Each of the Vector
Space operations (e.g. summation,
difference, scalar multiplications) is
a species of Linear Combination
Vector Space
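A one-line check ( illustrative only ) of the linear combination worked in the example above, v1 = 3v2 + v3 for S = { ( 1 , 3 , 1 ) , ( 0 , 1 , 2 ) , ( 1 , 0 , – 5 ) }:

# Linear combination check for the example above.
import numpy as np

v1 = np.array([1, 3, 1])
v2 = np.array([0, 1, 2])
v3 = np.array([1, 0, -5])

print(np.array_equal(v1, 3 * v2 + v3))   # True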
© Art Traynor 2011
Mathematics
Spanning Set of a Vector Space
Given S = { vi , vi+1 ,…vk – 1 , vk },
a subset of vector space V
the set S is a Spanning Set of V
when every vector in V can be written as a Linear Combination
of vectors in S, “ S spans V ”

x = ci vi + ci+1 vi+1 +…+ cn –1 vn – 1 + cn vn
Section 4.1, (Pg. 152)
Section 4.4, (Pg. 169)
Section 4.4, (Pg. 171)
Span
Sounds analogous to the power
set of a vector equation?
The set of unit vectors S = { i, j, k } is the minimal spanning set
( Basis ) for the R3 Vector Space
P ( S ) = { ∅ , S , cn û }
Vector Space
© Art Traynor 2011
Mathematics
Vectors
Vector ( Euclidean )
Aka: Geometric or Spatial Vector
From the Latin Vehere (to carry)
x
y Position Vector
O
θ
A ( ax , ay )
 Unit Vector ( Components )
Any vector in PVF can be expressed as a scalar product of the vector
sum of its unit ( multiplicative scalar identity ) components
î = 〈 1, 0 〉 , ĵ = 〈 0, 1 〉
a
ĵ
î x
y Position Vector
O
θ
A ( ax , ay )
ay ĵ
ax î
a
a = ax î + ay ĵ
PVF: Position Vector Form
c ( î ) = 〈 c1, c0 〉 , c ( ĵ ) = 〈 c 0, c 1 〉
ax ( î ) = 〈 ax 1, ax 0 〉 , ay ( ĵ ) = 〈 ay 0, ay 1 〉
Unit Vector
The set of unit vectors S = { i, j, k } is the minimal spanning set
( Basis ) for the R3 Vector Space
© Art Traynor 2011
Mathematics
Vectors
Vector ( Euclidean ) Aka: Geometric or Spatial Vector
Aka: Versor (Cartesian)
x
y Position Vector
O
θ
 Normalized Unit Vector
A normalized unit vector ( NUV ) is the vector of unitary magnitude
corresponding to the set of all vectors which share its direction
ĵ
î x
y Position Vector
O
θ
A ( ax, ay )
a2 ĵ
a1 î
A ( ax , ay )
û = ( 1 / ║a ║ ) ( ax , ay )

Any vector can be specified by the scalar product of its corresponding
normalized unit vector and its magnitude ( identity )

a = ax î + ay ĵ
û = a / ║a ║ = ( 1 / ║a ║ ) a
The NUV of a vector is the scalar product of the reciprocal of its magnitude and the vector itself
ĵ
î
û
ûa
Normalized Unit Vector
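A brief sketch ( illustrative only; the vector is hypothetical ) of the NUV relationship û = a / ║a ║ just described:

# Normalized unit vector sketch: u_hat = a / ||a||, so ||u_hat|| = 1
# and a = ||a|| * u_hat.
import numpy as np

a = np.array([3.0, 4.0])                 # hypothetical vector
u_hat = a / np.linalg.norm(a)

print(u_hat)                             # [0.6 0.8]
print(np.linalg.norm(u_hat))             # 1.0
print(np.allclose(a, np.linalg.norm(a) * u_hat))   # True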
© Art Traynor 2011
Mathematics
Span of a Set
Given S = { vi , vi+1 ,…vk – 1 , vk },
is a set of vectors in a vector space V
with scalars c = ( ci , ci+1 ,…cn – 1 , cn )
then the span of S
is the set of all Linear Combinations
of the vectors in S,

span ( S ) = { ci vi + ci+1 vi+1 +…+ ck –1 vk – 1 + ck vk }
Section 4.4, (Pg. 172)
The span of S is denoted:
span ( S )
or span { vi , vi+1 ,…vk – 1 , vk }
When span ( S ) = V, it is said that:
V is spanned by { vi , vi+1 ,…vk – 1 , vk }
or S spans V
Span
P ( S ) = { , S, cn û }
The set of unit vectors S = { i, j,
k } are the minimal spanning set
(Basis) for Rn Vector Space
Vector Space
© Art Traynor 2011
Mathematics
Span ( S ) as Subspace of V
Span
Given S = { vi , vi+1 ,…vk – 1 , vk },
is a set of vectors in a vector space V
then the span of S
span ( S )
or span { vi , vi+1 ,…vk – 1 , vk }
is a Subspace of V
 Section 4.4, (Pg. 172)
The span of S denoted span ( S )
or span { vi , vi+1 ,…vk – 1 , vk }
is the smallest Subspace of V containing S such that
every other Subspace of V containing S
Must also contain span ( S )
It is not sufficiently
obvious from this that the
minimal cardinality of
Spanning Set corresponds
precisely to the dimension
of the space
The set of unit vectors S = { i, j, k } is the minimal spanning set
( Basis ) for the R3 Vector Space
P ( S ) = { ∅ , S , cn û }
W = span ( S ) = { Σ i = 1..k ci vi | k ∈ N , vi ∈ S , ci ∈ F }
F = { ci , ci+1 , … , ck – 1 , ck }
Vector Space
© Art Traynor 2011
Mathematics
Linear Independence
Linear Independence
Section 4.4, (Pg. 173)
Given S = { vi , vi+1 ,…vk – 1 , vk },
within vector space V over a field of scalars c = ( ci , ci+1 ,…cn – 1 , cn )
a vector equation of the form ci vi + ci+1 vi+1 +…+ ck –1 vk – 1 + ck vk = 0
expresses Linear Dependence if the solution set
includes at least one non-zero solution; and
if the vector equation admits only the trivial solution
0 = ( ci , ci+1 ,…cn – 1 , cn ) it is said to express Linear Independence

By setting it equal to the
zero vector, are we
looking for solutions to a
homogenous system?
Example:
S = { ( 1 , 3 , 1 ) , ( 0 , 1 , 2 ) , ( 1 , 0 , – 5 ) }
v1 v2 v3
0 = c1v1 + c2v2 + c3v3
0 = c1 ( 1 , 3 , 1 ) + c2 ( 0 , 1 , 2 ) + c3 ( 1 , 0 , – 5 )
ci = { 1 , – 3 , – 1 }
0 = ( 1 , 3 , 1 ) + ( 0 , – 3 , – 6 ) + ( – 1 , 0 , 5 )
0 = xi ( 1 + 0 – 1 ) , yi ( 3 – 3 + 0 ) , zi ( 1 – 6 + 5 )
0 = xi ( 0 ) , yi ( 0 ) , zi ( 0 )
It is not sufficiently
obvious from this that the
cardinality of the
maximum set of linearly
independent vectors
corresponds precisely to
the dimension of the
space
xi yi zi = x1 , y1, z1 x2 , y2, z2 x3 , y3, z3
Vector Space
© Art Traynor 2011
Mathematics
Linear Independence
Linear Independence
Section 4.4, (Pg. 173)
Given S = { vi , vi+1 ,…vk – 1 , vk },
within vector space V over a field of scalars c = ( ci , ci+1 ,…cn – 1 , cn )
a vector equation of the form ci vi + ci+1 vi+1 +…+ ck –1 vk – 1 + ck vk = 0
expresses Linear Dependence if the solution set includes at least one non-
zero solution, and Linear Independence if the vector equation admits only the trivial solution

By setting it equal to the
zero vector, are we
looking for solutions to a
homogenous system?
Example:
S = { ( 1 , 3 , 1 ) , ( 0 , 1 , 2 ) , ( 1 , 0 , – 5 ) }
v1 v2
v3
0 = c1v1 + c2v2 + c3v3
0 = c1 ( 1 , 3 , 1 ) + c2 ( 0 , 1 , 2 ) + c3 ( 1 , 0 , – 5 )
ci = { 1 , – 3 , – 1 }
0 = ( 1 , 3 , 1 ) + ( 0 , – 3 , – 6 ) + ( – 1 , 0 , 5 )
0 = xi ( 1 + 0 – 1 ) , yi ( 3 – 3 + 0 ) , zi ( 1 – 6 + 5 )
0 = xi ( 0 ) , yi ( 0 ) , zi ( 0 )
It is not sufficiently
obvious from this that the
cardinality of the
maximum set of linearly
independent vectors
corresponds precisely to
the dimension of the
space
0 = 1 v1 + – 3 v2 + – 1 v3
xi yi zi = x1 , y1, z1 x2 , y2, z2 x3 , y3, z3
Vector Space
© Art Traynor 2011
Mathematics
Linear Independence
Linear Independence
Section 4.4, (Pg. 173)
Given S = { vi , vi+1 ,…vk – 1 , vk },
within vector space V over a field of scalars c = ( ci , ci+1 ,…cn – 1 , cn )
a vector equation of the form ci vi + ci+1 vi+1 +…+ ck –1 vk – 1 + ck vk = 0
expresses Linear Dependence if the solution set
includes at least one non-zero solution; if the vector equation admits only the trivial
solution 0 = ( ci , ci+1 ,…cn – 1 , cn ) it is said to express Linear Independence

Example:
A Visitor to New York City asks directions to Carnegie Hall.
He is instructed to proceed 3-blocks North then 4-blocks East.
you are
here
3N
4E
5NE
These two directions are sufficient to allow him to reach his destination
The system of vectors is Linearly Independent and corresponds to the
dimension of the space (which would be elsewise if he needed to go to the 6th Floor)

Adding that the destination is 5-blocks Northeast renders the system of vectors
Linearly Dependent (as one of the vectors can be expressed as
a Linear Combination of the other two).

Vector Space
© Art Traynor 2011
Mathematics
Linear Independence
Linear Independence
Example:
A Visitor to New York City asks directions to Carnegie Hall.
He is instructed to proceed 3-blocks North then 4-blocks East.
you are
here
3N
4E
5NE
These two directions are sufficient to allow him to reach his destination
The system of vectors is Linearly Independent and corresponds to the
dimension of the space (which would be elsewise if he needed to go to the 6th Floor)

Adding that the destination is 5-blocks Northeast renders the system of vectors
Linearly Dependent (as one of the vectors can be expressed as
a Linear Combination of the other two).

Vector Space
Ban the Hypotenuse! It is not necessary.
Any triangle can be described completely by a & b.
C is not necessary
© Art Traynor 2011
Mathematics
Linear Independence Test
Linear Independence
Section 4.4, (Pg. 174)
For S = { vi , vi+1 ,…vk – 1 , vk },
constituting a set of vectors within vector space V ,
to determine the linear independence of S:

ci vi + ci+1 vi+1 +…+ ck –1 vk – 1 + ck vk = 0
 State the set as a homogenous equation
equivalent to a sum of vector products
( of a scalar coefficient and the respective constituent vector components)
 Perform GJE (Gauss-Jordan Elimination)
to determine if the system has a Unique Solution
 Where only the trivial solution ci = 0 will satisfy the system,
the Set S is demonstrated to be Linearly Independent;
where the system is also satisfied by one or more
non-trivial solutions, the Set S is demonstrated to be Linearly Dependent
Vector Space
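A short sketch ( illustrative only ) of this test applied to the set used in the earlier example, S = { ( 1 , 3 , 1 ) , ( 0 , 1 , 2 ) , ( 1 , 0 , – 5 ) }, using a rank / nullspace computation in place of hand GJE:

# Linear independence test sketch: the columns of A are the vectors of S;
# a non-trivial nullspace (rank < number of vectors) means dependence.
import sympy as sp

A = sp.Matrix([[1, 0, 1],
               [3, 1, 0],
               [1, 2, -5]])

print(A.rank())        # 2 < 3 vectors, so S is Linearly Dependent
print(A.nullspace())   # one basis vector, e.g. (-1, 3, 1): v1 - 3*v2 - v3 = 0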
© Art Traynor 2011
Mathematics
Linear Dependence & Linear Combination
Linear Independence
Section 4.4, (Pg. 176)
For S = { vi , vi+1 ,…vk – 1 , vk }, k ≥ 2
constituting a set of vectors within vector space V ,
the set can be demonstrated to be Linear Dependent
if-and-only-if

 At least one element vector of the set can be
expressed as a linear combination of any of
the other vectors in the set
For S = { v , u }
constituting a set of vectors within vector space V ,
the set can be demonstrated to be Linear Dependent
if-and-only-if

 One element vector of the set can be
expressed as a scalar multiple of the other
Vector Space
© Art Traynor 2011
Mathematics
Basis Criteria
Basis
Section 4.5, (Pg. 180)
For S = { vi , vi+1 ,…vk – 1 , vk },
constituting a set of vectors within vector space V ,
the set can be demonstrated to form a Basis for the
vector space if

 S spans V
 S is Linear Independent
Infinite Dimensional
examples: Vector Space P ( all polynomials )
Vector Space C ( all continuous functions)

Finite Dimensional
examples: Zero Vector { 0 }

Standard Basis
for an n x n matrix features a diagonal populated with
ones with all other entries occupied by zeros

Vector Space
© Art Traynor 2011
Mathematics
Basis Representation - Uniqueness
Basis
Section 4.5, (Pg. 182)
For S = { vi , vi+1 ,…vk – 1 , vk },
constituting a set of vectors within vector space V ,
and forming a basis for that vector space, then
Every element vector can only be expressed as a unique
Linear Combination of the constituent vectors

 If there were more than one, their difference
would yield a non-trivial Linear Combination equal to the zero vector,
which would violate the Basis criterion requiring Linear Independence
Vector Space
© Art Traynor 2011
Mathematics
Basis Cardinality
Basis
Section 4.5, (Pg. 184)
For S = { vi , vi+1 ,…vn – 1 , vn },
constituting a set of vectors within vector space V ,
and forming a basis for that vector space with precisely
“ n ” vectors, then
Every basis for V will include precisely “ n ” vectors

Basis Cardinality - Maximum
For S = { vi , vi+1 ,…vn – 1 , vn },
constituting a set of vectors within vector space V ,
and forming a basis for that vector space, then
any set in Rn with k vectors, where k > n
will be Linear Dependent

Section 4.5, (Pg. 183)
Vector Space
Easier way to think of it…
Refer to the “Standard
Basis” for the vector
space (e.g. R3)
Any alternative Basis must
then include precisely that
many vectors
© Art Traynor 2011
Mathematics
Vector Space Dimension
Vector Space
Dimension
Section 4.5, (Pg. 185)
For S = { vi , vi+1 ,…vk – 1 , vk },
constituting a set of vectors within vector space V ,
and forming a basis for that vector space with precisely
“ n ” vectors, then
The number “ n ” is denoted as the Dimension
of V , or dim ( V )

 dim ( Rn ) = n
 dim ( Pn ) = n + 1
 dim ( Mm,n ) = m · n
To determine the Dimension of a Subspace W of vector space V
 Identify a set S of Linear Independent vectors that Span subspace W
 The set S thus identified is a Basis for the subset W
 The dimension of subspace W is thus the count ( or cardinality )
of the vectors in the Basis
© Art Traynor 2011
Mathematics
Vector Space Dimension
Vector Space
Dimension
Section 4.5, (Pg. 185)
To determine the Dimension of a Subspace W of vector space V
 Identify a set S of Linear Independent vectors that Span subspace W
 The set S thus identified is a Basis for the subset W
 The dimension of subspace W is thus the count ( or cardinality )
of the vectors in the Basis
Example: W = { ( d , c – d , c ) } Inspection reveals that this subspace
has precisely two free parameters: c and d
Wc = { c ( 0 , 1 , 1 ) }      ( the c-component of ( d , c – d , c ) )
Wd = { d ( 1 , – 1 , 0 ) }    ( the d-component of ( d , c – d , c ) )
Wcd = { c ( 0 , 1, 1 ) + d ( 1 , – 1, 0 ) }
The Dimension of the subspace is thus two
because there are precisely two vectors
in the spanning set
S = { (0,1,1), (1, –1, 0) },
which can be shown to be Linear Independent
and thus a Basis for W
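A quick check ( illustrative only ) that the two spanning vectors of W found above are Linearly Independent, so dim ( W ) = 2:

# Dimension check for W = { (d, c - d, c) }: columns are the spanning
# vectors (0,1,1) and (1,-1,0); rank 2 confirms they form a Basis.
import sympy as sp

S = sp.Matrix([[0, 1],
               [1, -1],
               [1, 0]])

print(S.rank())   # 2, the Dimension of W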
© Art Traynor 2011
Mathematics
Basis Test for an n-Dimensional Space
Basis
Section 4.5, (Pg. 186)
For a vector space V of Dimension n
if S = { vi , vi+1 ,…vn – 1 , vn },
constitutes a set of precisely n Linearly Independent vectors in V,
then:

 S is a Basis for V
Vector Space
 If S consists of precisely n vectors and S spans V then S is a basis for V
© Art Traynor 2011
Mathematics
Subspace Spans of Vector Representations
Vector Representations
Section 4.6, (Pg. 189)
Matrices
A = [ aij ] m x n
Row Vectors of A :   ( a11 , a12 , … , a1n ) , ( a21 , a22 , … , a2n ) , … , ( am1 , am2 , … , amn )
Column Vectors of A :   ( a11 , a21 , … , am1 ) , ( a12 , a22 , … , am2 ) , … , ( a1n , a2n , … , amn )
For a matrix A of m-rows and n-columns Amx n
The Row Space ( a subspace of Rn ) is spanned by the row vectors of A
The Column Space ( a subspace of Rm ) is spanned by the column vectors of A
© Art Traynor 2011
Mathematics
Row Space for Row Equivalent Matrices
Matrix Sub Spaces
Section 4.6, (Pg. 190)
Another Row-Equivalent m-by-n matrix Bm x n
will share the same Row Space with Amx n

Vector Space
Row Space Basis Section 4.6, (Pg. 190)
For a matrix A of m-rows and n-columns Amx n
Another Row-Equivalent m-by-n matrix Bm x n
expressed in Row Echelon Form ( REF ) will feature
non-zero Row Vectors constituting a Basis for the Row Space of Amx n

For a matrix A of m-rows and n-columns Amx n
Row/Column Space Dimensional Equivalence Section 4.6, (Pg. 192)
Both Row Space and Column Space share the same value
for their respective Dimensions

For a matrix A of m-rows and n-columns Amx n
© Art Traynor 2011
Mathematics
Matrix Rank
Matrix Rank
Section 4.6, (Pg. 193)
The Dimension of a Row or Column Space
defines the Rank of the Matrix, denoted rank ( A )

Vector Space
For a matrix A of m-rows and n-columns Amx n
© Art Traynor 2011
Mathematics
Matrix Nullspace and Nullity
Nullspace
Section 4.6, (Pg. 194)
The solution set for the system forms a subspace of Rn designated as
the Nullspace of A and denoted as N ( A )

Vector Space
For a homogenous linear system Ax = 0 , where A is a matrix of m-rows
and n-columns Amx n , x = [ xi , xi+1 ,…xn – 1 , xn ] T is a column
vector of unknowns , and 0 = [ 0 , 0 , … 0 , 0 ] T is the zero vector in Rm
N ( A ) = { x  Rn | Ax = 0 }
The Dimension of the Nullspace of A is designated as its Nullity
dim ( N ( A ) )
© Art Traynor 2011
Mathematics
Standard Coefficient Matrix
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix
       1   2  – 2   1
A  =   3   6  – 5   4
       1   2    0   3
Standard Matrix Form
(SMF) is that arrangement
of matrix elements in which
constituent rows are
populated with individual
expressions (equations)
constituting the linear
system (of which the matrix
is a representation), and in
which the columns are
arrayed such that each is
populated by a distinct
“unknown” (variable) the
entries of which are
populated by their individual
coefficients.
Section 6.3, (Pg. 314)
Section 4.6, (Pg. 195),
Example 7
Designate each row with an Uppercase Alpha Character…this
will allow the Elementary Row Operations (EROs) to be
performed to be described in a summarized algebraic fashion.
①
A1 :   1   2  – 2   1
B1 :   3   6  – 5   4
C1 :   1   2    0   3
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix Section 4.6, (Pg. 195),
Example 7
Utilize Permutation to manipulate the simplest (most easily
reduced) rows into the primary and higher positions in the matrix
A1 :   1   2  – 2   1
B1 :   3   6  – 5   4
C1 :   1   2    0   3
②
A1 :   1   2  – 2   1
B1 :   3   6  – 5   4
C1 :   1   2    0   3
The book prefers Row Three to be in
the first position, so we’ll begin by
permuting Rows One and Three
A ⇌ C1
Once “fixed” at a value of one (1) circle
this entry as an established pivot
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix Section 4.6, (Pg. 195),
Example 7
Continued…②
A1 :   1   2  – 2   1
B1 :   3   6  – 5   4
C1 :   1   2    0   3
Row Three appears to present a very
simple reduction (by simply scaling it by
a factor of negative one), so it too
should be permuted with Row Two for
clarity and ease of succeeding EROs
operations.
A ⇌ B1
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix Section 4.6, (Pg. 195),
Example 7
From this established “Pivot” move down to the next row and
render the entry there into a “zero” using EROs
1
③
So now we perform our first real
operation on the system. It can be noted
“by inspection” that Row Two scaled by a
factor of – 1 can make quick work of
yielding zeros just where we want them
– 1A1 = A1
C1 :   1   2    0   3
A1 :   1   2  – 2   1
B1 :   3   6  – 5   4
– 1A1 :   – 1  – 2   2  – 1
C1 :   1   2    0   3
A1 ( new ) = C1 – 1A1 :   0   0   2   2
Adding the scaled Row Two to Row One
we get a cleaned up replacement for Row
Two in the next evolution of the GJE
reduction
C2 – 1A1 = A1
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix Section 4.6, (Pg. 195),
Example 7
Continued…
1
③
So now we perform our first real
operation on the system. It can be noted
“by inspection” that Row Two scaled by a
factor of – 1 can make quick work of
yielding zeros just where we want them
– 1A1 = A1
C1 :   1   2    0   3
A1 :   1   2  – 2   1
B1 :   3   6  – 5   4
– 1A1 :   – 1  – 2   2  – 1
C1 :   1   2    0   3
A1 ( new ) = C1 – 1A1 :   0   0   2   2
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix Section 4.6, (Pg. 195),
Example 7
Continued…③
C1 :   1   2    0   3
A1 :   0   0    2   2
B1 :   3   6  – 5   4
Inspection suggests that Row one scaled
by a factor of – 3 would allow for a handy
reduction of Row Three into the desired
zero element in the first column position
B – 3C1 = B1
– 3C1 :   – 3  – 6   0  – 9
B1 :   3   6  – 5   4
B1 ( new ) = B1 – 3C1 :   0   0  – 5  – 5
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix Section 4.6, (Pg. 195),
Example 7
Having “zeroed-out” the first column,
we proceed to the right…
1 2 0 3C1
0 0 2 2A1
0 0 – 5 – 5B1
Rows Two and Three can be dispatched
with a scaling by their product
5A1 = A2
2B1 = B2
0 0 10 10A2
0 0 – 10 – 10B2
A simple summation of Rows Two and
Three will thereby reduce this matrix to a
very tame state
A2 + B2 = B3
0 0 0 0B3
④
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix Section 4.6, (Pg. 195),
Example 7
Continued…
1 2 0 3C1
0 0 0 0B3
0 0 10 10A2
Finally we note that Row 2 can be
reduced by a common factor ( 10 ) to
yield a maximally simplified row
( 1 / 10 ) A2 = A3
1 2 0 3C1
0 0 0 0B3
0 0 1 1A3
Arrived!!
④
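A quick machine check ( illustrative only, not part of the original worked example ) of the reduction arrived at above; note sympy places the zero row last, but the pivot rows match matrix B:

# RREF check of the example matrix with sympy.
import sympy as sp

A = sp.Matrix([[1, 2, -2, 1],
               [3, 6, -5, 4],
               [1, 2, 0, 3]])

B, pivots = A.rref()
print(B)        # Matrix([[1, 2, 0, 3], [0, 0, 1, 1], [0, 0, 0, 0]])
print(pivots)   # (0, 2): leading ones in columns 1 and 3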
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix Section 4.6, (Pg. 195),
Example 7
Now we look to parameterizing the system. We select the free variables
( the columns without leading ones ) as parameters, so as to maximally
simplify substitution back into the system.
1 2 0 3
0 0 0 0
0 0 1 1
System of Linear Equations ⑤
x1 + 2x2 + 0x3 + 3x4 = 0
0x1 + 0x2 + 1x3 + 1x4 = 0
Let x2 = s and x4 = t ( the free variables ) :
x1 + 2s + 3t = 0
x3 + t = 0
x2 = s
x4 = t
B =
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix Section 4.6, (Pg. 195),
Example 7
Continued…
1 2 0 3
0 0 0 0
0 0 1 1
System of Linear Equations ⑤
x1 + 2s + 3t = 0   ⟹   x1 = – 2s – 3t
x3 + t = 0         ⟹   x3 = – t
x2 = s
x4 = t
B =
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix – Section 4.6, (Pg. 195), Example 7
⑥ Now we re-configure the system into Partitioned Matrix Form
( PMF ), arraying the "unknowns" ( variables ) and the
parameterized solutions into respective column vectors.
      [ 1   2   0   3 ]
B =   [ 0   0   0   0 ]
      [ 0   0   1   1 ]
With x2 = s and x4 = t :   x1 = – 2s – 3t ,   x3 = – t
      [ – 2s – 3t ]        [ – 2 ]        [ – 3 ]
x =   [   1s + 0t ]  =  s  [   1 ]  +  t  [   0 ]
      [   0s – 1t ]        [   0 ]        [ – 1 ]
      [   0s + 1t ]        [   0 ]        [   1 ]
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix – Section 4.6, (Pg. 195), Example 7
      [ – 2 ]        [ – 3 ]
x = s [   1 ]  +  t  [   0 ]
      [   0 ]        [ – 1 ]
      [   0 ]        [   1 ]
N ( A ) = span{ ( – 2 , 1, 0, 0 ) , ( – 3 , 0, – 1, 1 ) }
These two vectors thus form a basis for the Nullspace of the matrix,
which is synonymous with the Solution Space of the homogeneous system
Ax = 0 , ALL solutions of which are Linear Combinations of these two vectors.
Standard Coefficient Matrix
      [ 1   2  – 2   1 ]   v1
A =   [ 3   6  – 5   4 ]   v2
      [ 1   2    0   3 ]   v3
" Row Equivalent " ( REF Basis Matrix )
      [ 1   2   0   3 ]   w1
B =   [ 0   0   0   0 ]
      [ 0   0   1   1 ]   w2
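The reduction above can be double-checked numerically. A minimal sketch, assuming Python with the sympy library is available ( my tooling choice, not part of the original slides ); sympy's nullspace() returns basis vectors that are scalar multiples of the ones derived by hand:

    from sympy import Matrix

    # Standard coefficient matrix from Section 4.6, Example 7
    A = Matrix([[1, 2, -2, 1],
                [3, 6, -5, 4],
                [1, 2,  0, 3]])

    # Basis for the nullspace (solution space of Ax = 0)
    for v in A.nullspace():
        print(v.T)          # expect multiples of (-2, 1, 0, 0) and (-3, 0, -1, 1)

    print(A.rref()[0])      # the row-equivalent reduced matrix B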
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix
Section 4.6, (Pg. 195), Example 7
      [ – 2 ]        [ – 3 ]
x = s [   1 ]  +  t  [   0 ]
      [   0 ]        [ – 1 ]
      [   0 ]        [   1 ]
dim ( N ( A ) ) = 2
The Dimension of the Nullspace equals the number of free-variable
parameters in the general solution of Ax = 0 ( here s and t ), which is
otherwise known as the Nullity of the Matrix. ( In this example it happens
to equal the number of non-zero rows { w1 , w2 } of the "Row Equivalent"
REF Basis matrix, i.e. the Rank, since the n = 4 columns split evenly
between the two. )
Standard Coefficient Matrix
      [ 1   2  – 2   1 ]   v1
A =   [ 3   6  – 5   4 ]   v2
      [ 1   2    0   3 ]   v3
" Row Equivalent " ( REF Basis Matrix )
      [ 1   2   0   3 ]   w1
B =   [ 0   0   0   0 ]
      [ 0   0   1   1 ]   w2
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix
Section 4.6, (Pg. 195), Example 7
      [ – 2 ]        [ – 3 ]
x = s [   1 ]  +  t  [   0 ]
      [   0 ]        [ – 1 ]
      [   0 ]        [   1 ]
The Rank and Nullity of the Solution Space "Row Equivalent"
REF Basis matrix can be determined by the number of columns featuring
leading " ones " ( i.e. two, the Rank , columns 1 & 3 ) and those
columns which correspond to the free variable column vectors ( i.e. two,
the Nullity , columns 2 & 4 ) .
Standard Coefficient Matrix
      [ 1   2  – 2   1 ]   v1
A =   [ 3   6  – 5   4 ]   v2
      [ 1   2    0   3 ]   v3
" Row Equivalent " ( REF Basis Matrix )
      [ 1   2   0   3 ]   w1
B =   [ 0   0   0   0 ]
      [ 0   0   1   1 ]   w2
© Art Traynor 2011
Mathematics
Nullspace
Vector Space
Matrix Nullspace, Rank, and Nullity
Example: find the nullspace of the matrix
Section 4.6, (Pg. 195), Example 7
      [ – 2 ]        [ – 3 ]
x = s [   1 ]  +  t  [   0 ]
      [   0 ]        [ – 1 ]
      [   0 ]        [   1 ]
For a matrix A of m-rows and n-columns Am×n then, the total number
of columns ( n ) is the sum of the Rank and Nullity of the
Solution Space "Row Equivalent" REF Basis matrix ( i.e. Rank two,
columns 1 & 3 , and Nullity two, columns 2 & 4 ) .
Standard Coefficient Matrix
      [ 1   2  – 2   1 ]   v1
A =   [ 3   6  – 5   4 ]   v2
      [ 1   2    0   3 ]   v3
" Row Equivalent " ( REF Basis Matrix )
      [ 1   2   0   3 ]   w1
B =   [ 0   0   0   0 ]
      [ 0   0   1   1 ]   w2
© Art Traynor 2011
Mathematics
Solution Space Dimension – Matrix Metrics
Solution Space
Section 4.6, (Pg. 196)
Vector Space
For a matrix A composed of m-rows and n-columns Am×n ,
of Rank " r " , the Dimension of the Solution Space of Ax = 0 is
▪ The difference of the matrix column cardinality " n " less
the matrix Rank " r " , or n – r
▪ The column cardinality " n " is accordingly equal to
n = rank ( A ) + nullity ( A )
or
n = rank ( A ) + dim ( N ( A ) )
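This identity can be spot-checked on the Example 7 matrix. A minimal sketch, assuming Python with sympy ( an assumption on my part, not something the slides use ):

    from sympy import Matrix

    A = Matrix([[1, 2, -2, 1],
                [3, 6, -5, 4],
                [1, 2,  0, 3]])

    n = A.cols                       # 4 columns
    rank = A.rank()                  # 2
    nullity = len(A.nullspace())     # 2 = dim(N(A))
    assert n == rank + nullity       # n = rank(A) + nullity(A)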
© Art Traynor 2011
Mathematics
Solution Space for Non-Homogenous LE Systems ( nHLES )
Solution Space
Section 4.6, (Pg. 197)
Vector Space
For a matrix A composed of m-rows and n-columns Am×n ,
every solution of the non-homogeneous system Ax = b can be expressed
in the form x = xp + xh , where xp is a particular solution and xh is a
solution of the associated homogeneous system ( HLES ) Ax = 0 .
▪ The zero vector 0 = { 0 , 0 , … 0 , 0 } cannot be a solution
to an nHLES ( b ≠ 0 ) , therefore the set of all solution vectors to the
nHLES cannot structure a subspace.
© Art Traynor 2011
Mathematics
Particular Solution of a Non-Homogenous LE System ( nHLES )
Example: Find the set of all solution vectors of the LE system – Section 4.6, (Pg. 197), Example 9
Section 6.3, (Pg. 314)
Non-Homogenous System of Linear Equations ( nHLES ):
1x1 + 0x2 – 2x3 + 1x4 =    5
3x1 + 1x2 – 5x3 + 0x4 =    8
1x1 + 2x2 – 0x3 – 5x4 =  – 9
Augmented Coefficient Matrix
        x1   x2   x3   x4      b
      [  1    0  – 2    1  |    5 ]   v1
A =   [  3    1  – 5    0  |    8 ]   v2
      [  1    2    0  – 5  |  – 9 ]   v3
▪ The set of all solution vectors to an nHLES of the form Ax = b entails
expressing the system as a matrix A composed of m-rows and n-columns Am×n ,
arrayed in augmented Standard Matrix Form ( aSMF ).
① The aSMF matrix features Row Vectors { vi , vi+1 ,…, vn–1 , vn }
corresponding to the constituent equations of the nHLES.
Column Vectors in the aSMF are arrayed to collect each distinct
" unknown " ( variable ) term { xi , xi+1 ,…, xn–1 , xn } with each
entry populated by their individual coefficients and adjoined by the
solution vector of constants { b }.
© Art Traynor 2011
Mathematics
Particular Solution of a Non-Homogenous LE System
Example: Find the set of all solution vectors of the LE system – Section 4.6, (Pg. 197), Example 9
Augmented Coefficient Matrix
        x1   x2   x3   x4      b
      [  1    0  – 2    1  |    5 ]   v1
A =   [  3    1  – 5    0  |    8 ]   v2
      [  1    2    0  – 5  |  – 9 ]   v3
② Designate each row with an Uppercase Alpha Character… this
will allow the Elementary Row Operations ( EROs ) of the
Gauss-Jordan Elimination ( GJE ) to be described
in a summarized algebraic fashion.
A1: [ 1   0  – 2    1  |    5 ]
B1: [ 3   1  – 5    0  |    8 ]
C1: [ 1   2    0  – 5  |  – 9 ]
Gauss-Jordan Elimination ( GJE ) is an algorithmic scheme applied to a
Standard Matrix Form ( SMF ) representation of a system of Linear Equations,
resulting in a " row-equivalent " reduced matrix on which the main diagonal
entries are all " ones " ( pivots, in Row Echelon Form - REF ) and all entries
above and below the " pivots " are populated by " zeros " ( Reduced Row
Echelon Form - RREF ).
Section 1.2, (Pg. 19)
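For reference, the entire GJE reduction that the following slides carry out by hand can be reproduced in one call. A minimal sketch, assuming Python with sympy ( my tooling, not the deck's ):

    from sympy import Matrix

    # augmented matrix [A | b] for Example 9
    Ab = Matrix([[1, 0, -2,  1,  5],
                 [3, 1, -5,  0,  8],
                 [1, 2,  0, -5, -9]])

    B, pivots = Ab.rref()            # row-equivalent RREF matrix
    print(B)                         # rows (1,0,-2,1,5), (0,1,1,-3,-7), (0,0,0,0,0)
    print(pivots)                    # pivot columns: (0, 1)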
© Art Traynor 2011
Mathematics
Vector Space
Matrix Nullspace, Rank, and Nullity
Solution Space – Section 4.6, (Pg. 197), Example 9
Example: Find the set of all solution vectors of the LE system
③ Investigate permutation as a strategy to manipulate the simplest ( most
easily reduced ) rows into the primary and higher positions in the matrix.
In this case, the first row already features a value of one in the first
position along the main diagonal, so all is well to proceed to the next step
in the GJE reduction process to arrive at a " Row Equivalent " REF Basis matrix…
A1: [ 1   0  – 2    1  |    5 ]
B1: [ 3   1  – 5    0  |    8 ]
C1: [ 1   2    0  – 5  |  – 9 ]
④ From this established " Pivot " move down to the next row and
render the entry there into a " zero " using EROs.
Inspection seems to suggest that scaling Row Three by – 3 and summing
with Row Two would yield an appealing reduction of Row Two into the form
where the leading entry in the row is rendered into a zero:
B1 – 3C1 = B1
© Art Traynor 2011
Mathematics
Vector Space
Matrix Nullspace, Rank, and Nullity
Section 4.6, (Pg. 197),
Example 9
Solution Space
Example: Find the set of all solution vectors of the LE system
Continued…
      [   3    1  – 5    0  |    8 ]   B1
      [ – 3  – 6    0   15  |   27 ]   – 3C1
      [   0  – 5  – 5   15  |   35 ]   B1 ( new )
The working matrix is now
A1: [ 1    0  – 2    1  |    5 ]
B1: [ 0  – 5  – 5   15  |   35 ]
C1: [ 1    2    0  – 5  |  – 9 ]
④ From this established " zero " move down to the next row and
render the entry there into a " zero " using EROs.
⑤ Inspection suggests that scaling Row One by – 1 and summing with
Row Three would yield an appealing reduction of Row Three into the form
where the leading entry in the row is rendered into a zero:
C1 – 1A1 = C1
© Art Traynor 2011
Mathematics
Vector Space
Matrix Nullspace, Rank, and Nullity
Section 4.6, (Pg. 197),
Example 9
Solution Space
Example: Find the set of all solution vectors of the LE system
Continued…
      [   1   2    0  – 5  |   – 9 ]   C1
      [ – 1   0    2  – 1  |   – 5 ]   – 1A1
      [   0   2    2  – 6  |  – 14 ]   C1 ( new )
The working matrix is now
A1: [ 1    0  – 2    1  |    5 ]
B1: [ 0  – 5  – 5   15  |   35 ]
C1: [ 0    2    2  – 6  |  – 14 ]
⑤ With the second column entry in the first row already fixed at a
" zero " value, we can proceed down to the next row and render
the entry there into a " zero " using EROs.
⑥ Inspection suggests that scaling Row Two by a factor of negative
one-fifth would reduce Row Two to the desired " Pivot " value of " one "
at the next position down along the main diagonal:
B2 = – ( 1 / 5 ) B1
© Art Traynor 2011
Mathematics
Vector Space
Matrix Nullspace, Rank, and Nullity
Section 4.6, (Pg. 197),
Example 9
Solution Space
Example: Find the set of all solution vectors of the LE system
Continued…
B2 = – ( 1 / 5 ) B1 :
      [ 0  – 5  – 5   15  |   35 ]   B1
      [ 0    1    1  – 3  |  – 7 ]   B2
The working matrix is now
A1: [ 1   0  – 2    1  |    5 ]
B2: [ 0   1    1  – 3  |  – 7 ]
C1: [ 0   2    2  – 6  |  – 14 ]
⑥ With the second " pivot " fixed along the main diagonal, we can
proceed down to the next row and render the entry there into a
" zero " using EROs.
⑦ Now for the coup de grâce – we notice that Row Three is a scalar
multiple of Row Two, thus scaling Row Two by a factor of – 2 and
summing with Row Three will yield a new Row Three populated
entirely of zero entries:
C2 = C1 – 2B2
© Art Traynor 2011
Mathematics
Vector Space
Matrix Nullspace, Rank, and Nullity
Section 4.6, (Pg. 197),
Example 9
Solution Space
Example: Find the set of all solution vectors of the LE system
Continued…
C2 = C1 – 2B2 :
      [ 0    2    2  – 6  |  – 14 ]   C1
      [ 0  – 2  – 2    6  |    14 ]   – 2B2
      [ 0    0    0    0  |     0 ]   C2
⑧ We now need to examine this reduced matrix to determine an
appropriate parameterization of the Solution Vectors of the nHLES.
The Gauss-Jordan Elimination ( GJE ) is now complete as we have arrived
at a " row-equivalent " matrix to A , now designated B , whose remaining
" non-zero " row vectors constitute a Basis set for the system …QED
      [ 1   0  – 2    1  |    5 ]
B =   [ 0   1    1  – 3  |  – 7 ]
      [ 0   0    0    0  |    0 ]
The RREF matrix, thus transformed from the root SMF or TMF
( conventionally designated " A " with entries " ci " ) is now
restated as RREF matrix " B " with entries " di "
Section 4.6, (Pg. 193)
© Art Traynor 2011
Mathematics
Vector Space
Matrix Nullspace, Rank, and Nullity
Section 4.6, (Pg. 197),
Example 9
Solution Space
Example: Find the set of all solution vectors of the LE system
x3 and x4 , the columns lacking leading " ones " ( the free variables ) ,
are the best candidates for parameterization.
RREF ( Basis Set ) for nHLES
      [ 1   0  – 2    1  |    5 ]
B =   [ 0   1    1  – 3  |  – 7 ]
      [ 0   0    0    0  |    0 ]
1x1 + 0x2 – 2x3 + 1x4 =    5
0x1 + 1x2 + 1x3 – 3x4 =  – 7
⑨ Letting x3 = s and x4 = t :
x1 =    2s – 1t + 5
x2 =  – 1s + 3t – 7
      [ x1 ]     [   2s – 1t + 5 ]
x =   [ x2 ]  =  [ – 1s + 3t – 7 ]
      [ x3 ]     [   1s + 0t + 0 ]
      [ x4 ]     [   0s + 1t + 0 ]
© Art Traynor 2011
Mathematics
Vector Space
Matrix Nullspace, Rank, and Nullity
Solution Space
Example: Find the set of all solution vectors of the LE system
Finally we state the solution vectors in terms of
their correspondence with the solution for the
associated homogeneous system.
RREF ( Basis Set ) for nHLES
      [ 1   0  – 2    1  |    5 ]
B =   [ 0   1    1  – 3  |  – 7 ]
      [ 0   0    0    0  |    0 ]
With x3 = s and x4 = t :
      [ x1 ]     [   2s ]     [ – 1t ]     [   5 ]
x =   [ x2 ]  =  [ – 1s ]  +  [   3t ]  +  [ – 7 ]
      [ x3 ]     [   1s ]     [   0t ]     [   0 ]
      [ x4 ]     [   0s ]     [   1t ]     [   0 ]
      [ x1 ]        [   2 ]        [ – 1 ]     [   5 ]
x =   [ x2 ]  =  s  [ – 1 ]  +  t  [   3 ]  +  [ – 7 ]
      [ x3 ]        [   1 ]        [   0 ]     [   0 ]
      [ x4 ]        [   0 ]        [   1 ]     [   0 ]
         xi              u1             u2          xp
10  x = su1 + tu2 + xp
System Solution(s)
x = xh + xp
xh = su1 + tu2
Ax = 0 ( HLES )     Ax = b ( nHLES )
xh thus represents an arbitrary vector in the
solution space of Ax = 0
Section 4.6, (Pg. 197)
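The decomposition x = su1 + tu2 + xp can be verified directly: A·xp should reproduce b , while u1 and u2 should be annihilated by A. A minimal sketch, assuming Python with sympy ( an assumption on my part, not something the slides use ):

    from sympy import Matrix

    A  = Matrix([[1, 0, -2,  1],
                 [3, 1, -5,  0],
                 [1, 2,  0, -5]])
    b  = Matrix([5, 8, -9])
    xp = Matrix([5, -7, 0, 0])      # particular solution
    u1 = Matrix([2, -1, 1, 0])      # homogeneous basis vector (s-direction)
    u2 = Matrix([-1, 3, 0, 1])      # homogeneous basis vector (t-direction)

    assert A * xp == b              # A x_p = b
    assert A * u1 == Matrix([0, 0, 0])
    assert A * u2 == Matrix([0, 0, 0])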
© Art Traynor 2011
Mathematics
Particular Solution of a Non-Homogenous LE System ( nHLES )
Example: Find the set of all solution vectors of the LE system – Section 4.6, (Pg. 197), Example 9
Continued… 10
Augmented Coefficient Matrix
        x1   x2   x3   x4      b
      [  1    0  – 2    1  |    5 ]   v1
A =   [  3    1  – 5    0  |    8 ]   v2
      [  1    2    0  – 5  |  – 9 ]   v3
" Row Equivalent " ( REF Basis Matrix )
        x1   x2   x3   x4      b
      [  1    0  – 2    1  |    5 ]   w1
B =   [  0    1    1  – 3  |  – 7 ]   w2
      [  0    0    0    0  |    0 ]
The wi vectors thus form a Basis for the Row Space of A ,
or the subspace spanned by S = { vi , vi+1 ,…, vn–1 , vn }
Section 4.6, (Pg. 191)
Only the columns featuring a " leading one " ( " Pivot " ) in the RREF
matrix B ( here x1 and x2 ) are Linear Independent ; the remaining
columns ( x3 and x4 ) are Linear Dependent.
Section 4.6, (Pg. 192)
      [ x1 ]        [   2 ]        [ – 1 ]     [   5 ]
x =   [ x2 ]  =  s  [ – 1 ]  +  t  [   3 ]  +  [ – 7 ]
      [ x3 ]        [   1 ]        [   0 ]     [   0 ]
      [ x4 ]        [   0 ]        [   1 ]     [   0 ]
© Art Traynor 2011
Mathematics
Solution Space Consistency
Solution Space
Section 4.6, (Pg. 198)
Vector Space
For a matrix A composed of m-rows and n-columns Am×n ,
of which Ax = b defines a particular solution xp to the nHLES ,
the nHLES is consistent if-and-only-if
▪ The solution vector b lies in the Column Space of A
▪ The solution vector b = [ bi , bi+1 ,…, bn–1 , bn ] T
represents a Linear Combination of the columns of A
▪ The solution vector b is among those populating the Subspace of Rm
Spanned by the columns of A
There is an additional implication that the respective Ranks of the nHLES
coefficient and augmented matrices are equivalent
Section 4.6, (Pg. 198)
I think what is meant by this is that if b is adjoined to A and then
reduced by GJE to an RREF matrix B , the system is demonstrated
consistent so long as no pivot ( leading one ) falls in the adjoined b column
© Art Traynor 2011
Mathematics
Square Matrix Equivalency Conditions
Solution Space
Section 4.6, (Pg. 198)
Vector Space
For a square matrix A of n-rows and n-columns An×n ,
each of the following conditions implies each of the others
▪ A is invertible
▪ Ax = b has a unique solution for any n x 1 matrix b
▪ Ax = 0 has only the trivial solution
▪ A is " row equivalent " to In
▪ | A | ≠ 0
▪ Rank ( A ) = n
▪ The n row vectors of A are Linear Independent
▪ The n column vectors of A are Linear Independent
( " A " cannot be empty, nor the zero matrix )
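These equivalences can be illustrated on a small example. A minimal sketch, assuming Python with sympy ( my choice of tool, not the slides' ); the hypothetical matrix M below is invertible, so every condition holds at once:

    from sympy import Matrix, eye

    M = Matrix([[2, 1],
                [1, 1]])           # a hypothetical invertible 2 x 2 matrix

    n = M.rows
    assert M.det() != 0            # |A| != 0
    assert M.rank() == n           # Rank(A) = n
    assert M.rref()[0] == eye(n)   # row equivalent to I_n
    assert M.nullspace() == []     # Ax = 0 has only the trivial solution
    x = M.inv() * Matrix([3, 5])   # unique solution of Ax = b for b = (3, 5)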
© Art Traynor 2011
Mathematics
Coordinate Representation Relative to a Basis
Basis
Section 4.7, (Pg. 202)
Vector Space
For an Ordered Basis set B = { vi , vi+1 ,…, vn – 1 , vn }
within Vector Space V , any vector x in V
can be expressed as a Linear Combination, or sum of
scalar multiples, of the constituent vectors of B such that:
▪ xn = ci vi + ci+1 vi+1 +…+ cn –1 vn – 1 + cn vn
The coordinate matrix, or coordinate vector, of x relative to B is the
column matrix in Rn whose components are the coordinates of x ,
the scalars ci = [ ci , ci+1 ,…, cn – 1 , cn ] of which are otherwise
referred to as the coordinates of x relative to the Basis B
              [ c1 ]
              [ c2 ]
[ x ]B   =    [  ⋮  ]
              [ cn ]
Chump Alert: Coordinate matrix representation relative
to a Basis ( standard or otherwise ) is directly
analogous to Normalized ( ? ) Unit Vector representation
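As a concrete illustration ( my own, not from the text ): the coordinates of x relative to an ordered basis are found by solving the linear system whose coefficient columns are the basis vectors. A minimal sketch, assuming Python with sympy:

    from sympy import Matrix

    # hypothetical ordered basis for R^2 and a vector x given in standard coordinates
    v1, v2 = Matrix([1, 0]), Matrix([1, 2])
    x = Matrix([5, 4])

    B = Matrix.hstack(v1, v2)      # columns are the basis vectors
    c = B.solve(x)                 # coordinate vector [x]_B
    print(c.T)                     # (3, 2), since x = 3*v1 + 2*v2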
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis
in the vector space, to be described as a linear combination of the
unit vector set U = { ui , u i + 1 ,…, un – 1 , u n }
( Figure: standard x-y axes with origin O and standard unit vectors u1 , u2 )
1a We begin by noting that the Standard Basis SBn allows any point to be
expressed through the Identity Matrix, so that the matrix Svn of
alternative-Basis vector components, applied to the Standard Basis,
simply returns those same components:
Svn · SBn = SBn ´
Svn · U = SBn ´
      [ v11   v12  …  v1n ]     [ 1   0  …  0 ]
      [ v21   v22  …  v2n ]  ·  [ 0   1  …  0 ]   =   SBn ´
      [  ⋮      ⋮         ⋮  ]     [ ⋮            ⋮ ]
      [ vn1   vn2  …  vnn ]     [ 0   0  …  1 ]
               vi                       In
I am switching the text's designation of B and B´ ,
as it seems more intuitive to think of the " prime " set
as the translated, or product, set.
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis
1b This entails that any alternative Basis is necessarily a linear
combination of the standard Basis ( which is equal to the Identity
Matrix ) U = In = { ui , u i + 1 ,…, un – 1 , u n }
( Figure: standard x-y axes with origin O and standard unit vectors u1 , u2 )
Translation of the standard Basis by a set of vector scalars constituting
an alternative Basis may thus appear to be trivial:
Svn · SBn = SBn ´      ( vi In )
Svn · U = SBn ´
         v1  v2              u1  u2
   xi  [ 1   1 ]       xi  [ 1   0 ]
   yj  [ 0   2 ]   ·   yj  [ 0   1 ]   =   B´
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis
1c This alternative Basis is akin to a Translation of the standard
coordinate origin O → O ´
( Figure: standard axes with origin O ; translated origin O ´ ;
  v1 = 1 u1 ( 1 , 0 ) and v2 = 2 u2 ( 1 , 2 ) )
Svn · SBn = SBn ´      ( vi In )
Svn · U = SBn ´
         v1  v2              u1  u2
   xi  [ 1   1 ]       xi  [ 1   0 ]
   yj  [ 0   2 ]   ·   yj  [ 0   1 ]   =   B´
I'm going to keep the color coding of the B-Prime Matrix as green to
emphasize its status as a " resultant " ( i.e. the transformed, alternative Basis )
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis
② The Translated, or product, vector set can be expressed in several
equivalent forms.
B´ = { ( 1 , 0 ) , ( 1 , 2 ) }        Basis constituent vector components expressed in set notation
B´ = { v1 , v2 }                      Basis constituent vectors expressed in set notation
         v1   v2
B´ =   [ r11  r12 ]
       [ r21  r22 ]                   Restated in Transition Matrix Form ( TMF ) ; rows xi , yj
         v1   v2
B´ =   [ 1    1 ]
       [ 0    2 ]
( Figure: translated origin O ´ with v1 = 1 u1 ( 1 , 0 ) and v2 = 2 u2 ( 1 , 2 ) )
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis
3a Supplying axes through the Translated origin suggests the shifted
coordinate system introduced by the alternative Basis.
( Figure: standard axes x , y with ticks 1B … 5B ; shifted axis x´ through O ´ ;
  v1 = 1 u1 ( 1 , 0 ) , v2 = 2 u2 ( 1 , 2 ) )
        u1  u2                        v1  v2
B =   [ 1   0 ]          B ´ =      [ 1   1 ]
      [ 0   1 ]                     [ 0   2 ]
" Standard " Basis Elements ( ui )
Note that the xi " Unknown " or " Variable " row vector,
populated with the multiplicative identity ( i.e. all " ones " ) ,
shifts the X-Axis of the alternative Basis system of
coordinates by a scalar multiple of one.
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis
3b Supplying axes through the Translated origin suggests the shifted
coordinate system introduced by the alternative Basis.
( Figure: standard axes x , y with ticks 1B … 5B ; shifted axes x´ , y´
  through O ´ with ticks 1B´ … 5B´ ; v1 = 1 u1 ( 1 , 0 ) , v2 = 2 u2 ( 1 , 2 ) )
        u1  u2                        v1  v2
B =   [ 1   0 ]          B ´ =      [ 1   1 ]
      [ 0   1 ]                     [ 0   2 ]
" Standard " Basis Element ( ui )  ;  " Alternative " Basis Element ( vi )
The resultant vector ( v1 + v2 ) indicates the orientation of the
" y " coordinate axis and introduces a " skew " in the alternative Basis
system by comparison with the standard Basis.
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis
3c The alternative Basis maps input coordinates from the standard
Basis, incrementing each " X " value by +1 and each " Y " value by +2.
Note that the " two " in the yi " Unknown " or " Variable " row
vector also scales, or elongates, the alternative Basis " y " coordinate.
( Figure: superimposed tick marks relating the two systems:
  1B ⇌ ½B´ ,  2B ⇌ 1B´ ,  3B ⇌ 1½ B´ ,  4B ⇌ 2B´ ;
  v1 = 1 u1 ( 1 , 0 ) , v2 = 2 u2 ( 1 , 2 ) )
        u1  u2                        v1  v2
B =   [ 1   0 ]          B ´ =      [ 1   1 ]
      [ 0   1 ]                     [ 0   2 ]
" Standard " Basis Element ( ui )  ;  " Alternative " Basis Element ( vi )
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis
3d Thus, with the two coordinate systems superimposed, the " X "
coordinate values coincide, while the " Y " coordinate values of the
alternative system are scaled by a factor of two.
( Figure: superimposed tick marks: 1B ⇌ ½B´ , 2B ⇌ 1B´ , 3B ⇌ 1½ B´ , 4B ⇌ 2B´ ;
  v1 = 1 u1 ( 1 , 0 ) , v2 = 2 u2 ( 1 , 2 ) )
        u1  u2                        v1  v2
B =   [ 1   0 ]          B ´ =      [ 1   1 ]
      [ 0   1 ]                     [ 0   2 ]
" Standard " Basis Element ( ui )  ;  " Alternative " Basis Element ( vi )
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis
④ Any arbitrary vector in the space can be represented as a scalar
multiple ( i.e. Linear Combination ) of the basis vectors:
xn = ci vi + ci+1 vi+1 +…+ cn –1 vn – 1 + cn vn        Arbitrary vector in Vector Space V
xn = c1 v1 + c2 v2
cn = ( c1 , c2 )                                       The ordered pair of scalar vector coefficients of x
        [ c1 ]
cn =    [ c2 ]    =   [ 3 ]                            Restated as a Column Vector ( scalar coefficients of x )
        [  ⋮  ]        [ 2 ]
        [ cn ]
B´ = { ( 1 , 0 ) , ( 1 , 2 ) }                         Basis constituent vector components expressed in Set Notation Form ( SNF )
B´ =  [ 1   1 ]
      [ 0   2 ]                                        Restated in Transition Matrix Form ( TMF )
( Figure: tick correspondences 1B ⇌ ½B´ , 2B ⇌ 1B´ , 3B ⇌ 1½ B´ , 4B ⇌ 2B´ ;
  v1 = 1 u1 ( 1 , 0 ) , v2 = 2 u2 ( 1 , 2 ) )
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis
5a An illuminating change of notation is here introduced, [ x ]B´ , whereby
the ordered pair collection of vector scalar coefficients of the arbitrary
vector x is explicitly corresponded with the Basis vectors whose
products form the resultant Linear Combination.
xn = c1 v1 + c2 v2
cn = ( c1 , c2 )
cn = [ x ]B´ =  [ 3 ]
                [ 2 ]                  Stated as a Column Vector ( scalar coefficients of x )
B´ = { v1 , v2 }                       The set of Basis vectors in SNF
B´ = { 1 v1 , 1 v2 }                   A scalar coefficient of " one " is implicit
Juxtaposed as above, it is clear that x is a linear combination of
the Basis Vectors ; ci ≠ 1 implies that the arbitrary vector is a
( non-trivial ) Linear Combination of the Basis Vector set.
( Figure: tick correspondences 1B ⇌ ½B´ , 2B ⇌ 1B´ , 3B ⇌ 1½ B´ , 4B ⇌ 2B´ ;
  v1 = 1 u1 ( 1 , 0 ) , v2 = 2 u2 ( 1 , 2 ) )
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis
5b The novel notation for the set ( ordered pair, or column vector form )
of vector scalar coefficients ci directs our attention to the specific
Basis by which the resulting coordinates are generated,
e.g.: x is expressed relative to Basis B´
" Relative to " / " Defined by " / " Within "
xn = c1 v1 + c2 v2
cn = ( c1 , c2 )
cn = [ x ]B´ =  [ 3 ]
                [ 2 ]
B´ = { v1 , v2 } = { 1 v1 , 1 v2 }
( Figure: tick correspondences 1B ⇌ ½B´ , 2B ⇌ 1B´ , 3B ⇌ 1½ B´ , 4B ⇌ 2B´ ;
  v1 = 1 u1 ( 1 , 0 ) , v2 = 2 u2 ( 1 , 2 ) )
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis
① Vector x therefore can be graphically represented by a Position
Vector demarcated in the B-Prime coordinate system [ i.e. three
increments of " x " from the alternative Basis ( aB ) origin and two
increments of " y " from the aB origin ] :
xB´ = ( 3 , 2 )
cn = [ x ]B´ =  [ 3 ]
                [ 2 ]
        u1  u2                        v1  v2
B =   [ 1   0 ]          B ´ =      [ 1   1 ]
      [ 0   1 ]                     [ 0   2 ]
" Standard " Basis Element ( ui )  ;  " Alternative " Basis Element ( vi )
( Figure: position vector to x drawn in the B´ system ; tick correspondences
  1B ⇌ ½B´ , 2B ⇌ 1B´ , 3B ⇌ 1½ B´ , 4B ⇌ 2B´ ; v1 = 1 u1 ( 1 , 0 ) , v2 = 2 u2 ( 1 , 2 ) )
© Art Traynor 2011
Mathematics
y´
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2Example: Find the coordinate matrix of x relative to a non-standard basis
① ( Figure: standard axes x , y and shifted axes x´ , y´ through O ´ ;
  tick correspondences 1B ⇌ ½B´ , 2B ⇌ 1B´ , 3B ⇌ 1½ B´ , 4B ⇌ 2B´ ;
  v1 = 1 u1 ( 1 , 0 ) , v2 = 2 u2 ( 1 , 2 ) )
To restate the scalar set of the arbitrary vector
coefficients in terms of the standard basis ( sB ) ,
the aB vector components need simply be scaled
by the vector equation for x to yield the scalar
solution set in terms of the sB :
xB´ = ( 3 , 2 )
xn = c1 v1 + c2 v2
B´ = { ( 1 , 0 ) , ( 1 , 2 ) } = { v1 , v2 }
xn = c1 ( 1 , 0 ) + c2 ( 1 , 2 )
cn = [ x ]B´ = ( c1 , c2 ) = ( 3 , 2 )
xn = 3 ( v1 ) + 2 ( v2 ) = 3 ( 1 , 0 ) + 2 ( 1 , 2 )
x = ( 3·1 + 2·1 , 3·0 + 2·2 )
x = ( 3 + 2 , 0 + 4 )
x = ( 5 , 4 )
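The same arithmetic can be checked mechanically. A minimal sketch, assuming Python with sympy ( not part of the original slides ): multiplying the basis matrix by the coordinate vector recovers the standard-basis components, and solving the reverse system recovers the coordinates.

    from sympy import Matrix

    Bprime = Matrix([[1, 1],
                     [0, 2]])          # columns are v1 = (1,0) and v2 = (1,2)
    c = Matrix([3, 2])                 # [x]_B', coordinates relative to the non-standard basis

    x = Bprime * c                     # standard-basis components: (5, 4)
    assert x == Matrix([5, 4])
    assert Bprime.solve(x) == c        # and back again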
© Art Traynor 2011
Mathematics
Vector Space
Basis
Coordinate Matrices and Bases Section 4.7 (Pg. 203),
Example 2Example: Find the coordinate matrix of x relative to a non-standard basis
① Note finally that, as the root Basis here is the standard Basis ( the
Identity Matrix – IM ) , the matrix whose columns are the alternative-Basis
vectors is itself the Transition Matrix from the non-standard to the
standard Basis ( conventionally noted P ) :
         v1  v2
P =    [ 1   1 ]   = B ′            Change of Basis : Non-Standard → Standard
       [ 0   2 ]
Its inverse, P – 1 = ( B′ ) – 1 , carries coordinates the other way and is
found by the adjoin-and-reduce scheme
[ B′ In ]   ( Adjoin )   →   GJE ( EROs )   →   [ In P – 1 ]   ( RREF )
Section 4.7 (Pg. 208)
© Art Traynor 2011
Mathematics
Change of Basis ( aka Transformation )
Basis Transformation
Vector Space
Section 4.7 (Pg. 203),
Example 3Example: Find the coordinate matrix of x relative to a non-standard basis
① We first restate the given alternative Basis aB = B ´ ( expressed in
Set Notation Form – SNF ) into Transformation Matrix Form ( TMF ).
B´ = { u1 , u2 , u3 }                                     Alternative Basis constituent vectors expressed in Set Notation Form ( SNF )
B´ = { ( 1 , 0, 1 ) , ( 0 , – 1, 2 ) , ( 2 , 3, – 5 ) }
aB Coefficient Matrix
          u1   u2   u3
       [  1    0    2 ]
B´ =   [  0  – 1    3 ]
       [  1    2  – 5 ]
( the columns collect the coefficients of the unknown scalars c1 , c2 , c3 )
Given the form of the problem, it is implicit that the arbitrary vector
is proffered relative to the standard Basis.
© Art Traynor 2011
Mathematics
Change of Basis ( aka Transformation )
Basis Transformation
Vector Space
Section 4.7 (Pg. 203),
Example 3
Example: Find the coordinate matrix of x relative to a non-standard basis
With this restatement, we next introduce the arbitrary vector
coordinates ( to which we are tasked to find the coordinate matrix
corresponding to the aB ) , rendering it into a column vector form.
B´ = { u1 , u2 , u3 }                                     Alternative Basis constituent vectors expressed in SNF
B´ = { ( 1 , 0, 1 ) , ( 0 , – 1, 2 ) , ( 2 , 3, – 5 ) }
aB Coefficient Matrix
          u1   u2   u3
       [  1    0    2 ]
B´ =   [  0  – 1    3 ]
       [  1    2  – 5 ]
xn = ( 1 , 2 , – 1 )
These given arbitrary vector coordinates are presumably introduced
relative to the Standard Basis?
               [ c1 ]               [   1 ]
cn = [ x ]B´ = [ c2 ]    ;    xi =  [   2 ]
               [ c3 ]               [ – 1 ]
" Relative to " / " Defined by " / " Within "
Keep in mind that the ' given ' arbitrary vector is stated relative to the
standard Basis and thus represents a solution set by which the set of
scalar multiples applicable to the alternative Basis will be derived.
© Art Traynor 2011
Mathematics
Change of Basis ( aka Transformation )
Basis Transformation
Vector Space
Section 4.7 (Pg. 203),
Example 3Example: Find the coordinate matrix of x relative to a non-standard basis
We next introduce a column vector of scalars, the solution to which
will represent the coordinate matrix of the arbitrary vector with
respect to the alternative Basis ( aB ).
B´ = { u1 , u2 , u3 }
B´ = { ( 1 , 0, 1 ) , ( 0 , – 1, 2 ) , ( 2 , 3, – 5 ) }
aB Coefficient Matrix
          u1   u2   u3
       [  1    0    2 ]
B´ =   [  0  – 1    3 ]
       [  1    2  – 5 ]
xn = ( 1 , 2 , – 1 )
       [   1 ]            [ c1 ]
xi =   [   2 ]     ;      [ c2 ]   =  cn = [ x ]B´ = ( c1 , c2 , c3 )
       [ – 1 ]            [ c3 ]
These given arbitrary vector coordinates are presumably introduced
relative to the Standard Basis?
Keep in mind that the ' given ' arbitrary vector is stated relative
to the standard Basis and thus represents a solution set by
which the set of scalar multiples associated with the alternative
Basis will be determined.
© Art Traynor 2011
Mathematics
Change of Basis ( aka Transformation ) – Basis Transformation
Section 4.7 (Pg. 203), Example 3 – Example: Find the coordinate matrix of x relative to a non-standard basis
① The final " set-up " step entails restating the composed system once
again, rendering it into a non-Homogenous Linear Equation System
( nHLES ) and its corresponding Augmented Coefficient Matrix ( ACM ).
Non-Homogenous Linear Equation System ( nHLES ):
1c1 + 0c2 + 2c3 =    1
0c1 – 1c2 + 3c3 =    2
1c1 + 2c2 – 5c3 =  – 1
Augmented Coefficient Matrix
        u1   u2   u3      b
      [  1    0    2  |    1 ]
A =   [  0  – 1    3  |    2 ]
      [  1    2  – 5  |  – 1 ]
B´ = { u1 , u2 , u3 } = { ( 1 , 0, 1 ) , ( 0 , – 1, 2 ) , ( 2 , 3, – 5 ) }
xn = ( 1 , 2 , – 1 )        cn = ( c1 , c2 , c3 )
▪ The ACM is thus expressed in Standard Matrix Form with the column
vectors corresponding to the coefficients of the unknown aB scalars,
the solution for which ( by means of Gauss-Jordan Elimination – GJE )
will supply the coordinate matrix relative to the aB that the problem
poses for resolution.
© Art Traynor 2011
Mathematics
Vector Space
Basis Transformation
Change of Basis ( aka Transformation )
Section 4.7 (Pg. 203),
Example 3Example: Find the coordinate matrix of x relative to a non-standard basis
Augmented Coefficient Matrix
        u1   u2   u3      b
      [  1    0    2  |    1 ]
A =   [  0  – 1    3  |    2 ]
      [  1    2  – 5  |  – 1 ]
② Designate each row with an Uppercase Alpha Character… this will allow
the Elementary Row Operations ( EROs ) of the Gauss-Jordan Elimination
( GJE ) to be described in a summarized algebraic fashion.
A1: [ 1    0    2  |    1 ]
B1: [ 0  – 1    3  |    2 ]
C1: [ 1    2  – 5  |  – 1 ]
Gauss-Jordan Elimination ( GJE ) is an algorithmic scheme applied to a
Standard Matrix Form ( SMF ) representation of a system of Linear Equations,
resulting in a " row-equivalent " reduced matrix on which the main diagonal
entries are all " ones " ( pivots, in Row Echelon Form - REF ) and all entries
above and below the " pivots " are populated by " zeros " ( Reduced Row
Echelon Form - RREF ).
Section 1.2, (Pg. 19)
© Art Traynor 2011
Mathematics
Vector Space
Basis Transformation
Change of Basis ( aka Transformation )
Section 4.7 (Pg. 203),
Example 3Example: Find the coordinate matrix of x relative to a non-standard basis
A1: [ 1    0    2  |    1 ]
B1: [ 0  – 1    3  |    2 ]
C1: [ 1    2  – 5  |  – 1 ]
③ Investigate permutation as a strategy to manipulate the simplest ( most
easily reduced ) rows into the primary and higher positions in the matrix.
In this case, the first row already features a value of one in the first
position along the main diagonal, as well as a zero directly below in the
next row down, so all is well to proceed to the next step in the GJE
reduction process to arrive at a " Reduced Row Equivalent " RREF Basis matrix…
④ From this established " Pivot " move down to the next row and
render the entry there into a " zero " using EROs.
Inspection suggests that scaling Row One by – 1 and summing with Row
Three would yield an appealing reduction of Row Three into the desired
form where the leading entry in the row is rendered into a zero:
C2 = C1 – 1A1
© Art Traynor 2011
Mathematics
Vector Space
Basis Transformation
Change of Basis ( aka Transformation )
Section 4.7 (Pg. 203),
Example 3Example: Find the coordinate matrix of x relative to a non-standard basis
Continued… ④
A1: [ 1    0    2  |    1 ]
B1: [ 0  – 1    3  |    2 ]
C1: [ 1    2  – 5  |  – 1 ]
C2 = C1 – 1A1
The resultant should always be stated first ( indexed to its succeeding
value ) in the reduction evolution.
The Augend / Minuend term should always be the same term as the
resultant ( at its prevailing index ).
The Subtrahend / Summand should thus be the only term subject to
any scaling.
© Art Traynor 2011
Mathematics
Vector Space
Basis Transformation
Change of Basis ( aka Transformation )
Section 4.7 (Pg. 203),
Example 3Example: Find the coordinate matrix of x relative to a non-standard basis
⑤ We replace the transformed Row, then proceed to the next non-
one entry ( along the main diagonal ) or non-zero entry ( outside of
the main diagonal ) , working from top-to-bottom and left-to-right.
      C1:      [   1   2  – 5  |  – 1 ]
      – 1A1:   [ – 1   0  – 2  |  – 1 ]
      C2:      [   0   2  – 7  |  – 2 ]
The working matrix is now
A1: [ 1    0    2  |    1 ]
B1: [ 0  – 1    3  |    2 ]
C2: [ 0    2  – 7  |  – 2 ]
With the entry in Row One, Column Two already at zero, we move to
reduce the next main diagonal entry to a value of one, which is easily
accomplished by scaling Row Two by – 1 :
B2 = – 1B1
© Art Traynor 2011
Mathematics
Vector Space
Basis Transformation
Change of Basis ( aka Transformation )
Section 4.7 (Pg. 203),
Example 3Example: Find the coordinate matrix of x relative to a non-standard basis
Continued… ⑥
B2 = – 1B1 :   [ 0   1  – 3  |  – 2 ]
The working matrix is now
A1: [ 1   0    2  |    1 ]
B2: [ 0   1  – 3  |  – 2 ]
C2: [ 0   2  – 7  |  – 2 ]
With the second " pivot " fixed along the main diagonal, we can
proceed down to the next row and render the entry there into a
" zero " using EROs.
Inspection suggests that Row Three be summed with Row Two scaled
by a factor of – 2 :
C3 = C2 – 2B2
© Art Traynor 2011
Mathematics
Vector Space
Basis Transformation
Change of Basis ( aka Transformation )
Section 4.7 (Pg. 203),
Example 3Example: Find the coordinate matrix of x relative to a non-standard basis
⑦ With the main diagonal in Column Two fixed at the desired pivot
value of " one " , and all other entries in the column reduced to
values of " zero " , we proceed next to the top of Column Three to
continue our row reduction with further applied EROs.
      C2:      [ 0    2  – 7  |  – 2 ]
      – 2B2:   [ 0  – 2    6  |    4 ]
      C3:      [ 0    0  – 1  |    2 ]
A1: [ 1   0    2  |   1 ]
Inspection discloses that the lead entry in Column Three ( j = 3 ) can be
reduced to " zero " by adding to Row One the new Row Three scaled by a
factor of Two:
A2 = A1 + 2C3
© Art Traynor 2011
Mathematics
Vector Space
Basis Transformation
Change of Basis ( aka Transformation )
Section 4.7 (Pg. 203),
Example 3Example: Find the coordinate matrix of x relative to a non-standard basis
Continued… ⑧
Proceeding with EROs down Column Three, we will next
address the reduction of the entry in Row Two.
      A1:      [ 1   0    2  |   1 ]
      + 2C3:   [ 0   0  – 2  |   4 ]
      A2:      [ 1   0    0  |   5 ]
B2: [ 0   1  – 3  |  – 2 ]
C3: [ 0   0  – 1  |    2 ]
Inspection suggests that the entry in Column Three, Row Two ( ij = 23 )
can be rendered into a zero value by the summation of Row Two with
Row Three scaled by a factor of – 3 :
B3 = B2 – 3C3
© Art Traynor 2011
Mathematics
Vector Space
Basis Transformation
Change of Basis ( aka Transformation )
Section 4.7 (Pg. 203),
Example 3Example: Find the coordinate matrix of x relative to a non-standard basis
Continued… ⑨
Proceeding with EROs down Column Three, only the final scaling of
Row Three now remains.
      B2:      [ 0   1  – 3  |  – 2 ]
      – 3C3:   [ 0   0    3  |  – 6 ]
      B3:      [ 0   1    0  |  – 8 ]
A2: [ 1   0    0  |    5 ]
C3: [ 0   0  – 1  |    2 ]
Inspection reveals that a simple scaling of Row Three by a factor of – 1
will render the system into its final RREF expression:
C4 = – 1C3
© Art Traynor 2011
Mathematics
Vector Space
Basis Transformation
Change of Basis ( aka Transformation )
Section 4.7 (Pg. 203),
Example 3Example: Find the coordinate matrix of x relative to a non-standard basis
⑨ Having arrived at the RREF expression of the nHLES, we can explicitly
state the values for ci ( the coordinate matrix for the arbitrary vector
relative to the alternative Basis ( aB ) ) :
        u1   u2   u3      b
      [ 1    0    0  |    5 ]   d1
B =   [ 0    1    0  |  – 8 ]   d2
      [ 0    0    1  |  – 2 ]   d3
1c1 + 0c2 + 0c3 =    5
0c1 + 1c2 + 0c3 =  – 8
0c1 + 0c2 + 1c3 =  – 2
No parameterization is necessary as the RREF form gives explicit
values for each unknown.
This differs from the result in example 9, pg 197… it would be
interesting to note why this is so?
The RREF matrix, thus transformed from the root SMF or TMF
( conventionally designated " A " with entries " ci " ) is now
restated as RREF matrix " B " with entries " di "
Section 4.6, (Pg. 193)
© Art Traynor 2011
Mathematics
Vector Space
Basis Transformation
Change of Basis ( aka Transformation )
Section 4.7 (Pg. 203),
Example 3Example: Find the coordinate matrix of x relative to a non-standard basis
10  No parameterization is necessary as the RREF form gives explicit
values for each unknown.
RREF ( Basis Set ) for nHLES
        u1   u2   u3      b
      [ 1    0    0  |    5 ]
B =   [ 0    1    0  |  – 8 ]
      [ 0    0    1  |  – 2 ]
c1 = 5   ;   c2 = – 8   ;   c3 = – 2
                   [ d1 ]     [   5 ]
dn = [ x ]B´  =    [ d2 ]  =  [ – 8 ]
                   [ d3 ]     [ – 2 ]
" Relative to " / " Defined by " / " Within "
x = d1u1 + d2u2 + d3u3
System Solution(s) :  x = xh + xp  ;  xh = d1u1 + d2u2 + d3u3  ;
Ax = 0 ( HLES )  ;  Ax = b ( nHLES )
© Art Traynor 2011
Mathematics
Change of Basis ( aka Transformation )
Basis Transformation
Vector Space
Section 4.7 (Pg. 203),
Example 3
Example: Find the coordinate matrix of x relative to a non-standard basis
Continued… 11
Augmented Coefficient Matrix
                u1   u2   u3      b
             [  1    0    2  |    1 ]
A = B´ =     [  0  – 1    3  |    2 ]
             [  1    2  – 5  |  – 1 ]
" Row Equivalent " ( RREF Basis Matrix )
                u1   u2   u3      b
             [  1    0    0  |    5 ]   = w1
B =          [  0    1    0  |  – 8 ]   = w2
             [  0    0    1  |  – 2 ]   = w3
Linear Independent
Only columns featuring a " leading one " ( " Pivot " ) in the
RREF matrix B are Linear Independent
Section 4.6, (Pg. 192)
Perhaps trivially so, it is worth noting that the wi vectors thus form a
Basis for the Row Space of A = B´ , or the subspace spanned by
A = B´ = { ui , ui + 1 ,…, un – 1 , un }
Section 4.6, (Pg. 191)    Section 4.5, (Pg. 184)
▪ Also recall that the subspace spanned by A = B´ will thus feature
precisely three Basis vectors ( by the definition of Basis Cardinality ) ,
exactly corresponding to the number of Linear Independent vectors in the
space ( any greater than which would entail Linear Dependence of one of
the vectors ).
This cardinality will always coincide with that of the " Standard Basis "
for the vector space ( e.g. R3 ).
© Art Traynor 2011
Mathematics
Change of Basis ( aka Transformation )
Basis Transformation
Vector Space
Section 4.7 (Pg. 203),
Example 3
Example: Find the coordinate matrix of x relative to a non-standard basis
Continued… 12
Augmented Coefficient Matrix
                u1   u2   u3      b
             [  1    0    2  |    1 ]
A = B´ =     [  0  – 1    3  |    2 ]
             [  1    2  – 5  |  – 1 ]
" Row Equivalent " ( RREF Basis Matrix )
             [  1    0    0  |    5 ]   = w1
B =          [  0    1    0  |  – 8 ]   = w2
             [  0    0    1  |  – 2 ]   = w3
Linear Independent
xn = c1 · u1 + c2 · u2 +…+ cn – 1 · un – 1 + cn · un       Arbitrary Vector, generally
xn = c1 · u1 + c2 · u2 + c3 · u3                           Arbitrary Vector, particular
( 1 , 2 , – 1 ) = c1 ( 1 , 0, 1 ) + c2 ( 0 , – 1, 2 ) + c3 ( 2 , 3, – 5 )
                                                           Arbitrary Vector, as nHLES linear combination relative to aB
© Art Traynor 2011
Mathematics
Change of Basis ( aka Transformation )
Basis Transformation
Vector Space
Section 4.7 (Pg. 203),
Example 3
Example: Find the coordinate matrix of x relative to a non-standard basis
Continued… 13
                   [ d1 ]     [   5 ]
dn = [ x ]B´  =    [ d2 ]  =  [ – 8 ]
                   [ d3 ]     [ – 2 ]
" Relative to " / " Defined by " / " Within "
xn = c1 · u1 + c2 · u2 +…+ cn – 1 · un – 1 + cn · un       Arbitrary Vector, generally
xn = c1 · u1 + c2 · u2 + c3 · u3                           Arbitrary Vector, particular
xn = 5 ( 1 , 0, 1 ) + ( – 8 ) ( 0 , – 1, 2 ) + ( – 2 ) ( 2 , 3, – 5 )
                                                           Arbitrary Vector, as nHLES linear combination relative to aB
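The coordinate matrix just derived can be confirmed by solving the same linear combination numerically. A minimal sketch, assuming Python with sympy ( my tooling choice, not the slides' ):

    from sympy import Matrix

    u1 = Matrix([1, 0, 1])
    u2 = Matrix([0, -1, 2])
    u3 = Matrix([2, 3, -5])
    x  = Matrix([1, 2, -1])          # arbitrary vector, given in standard coordinates

    Bprime = Matrix.hstack(u1, u2, u3)
    c = Bprime.solve(x)              # coordinate matrix [x]_B'
    print(c.T)                       # (5, -8, -2)
    assert 5*u1 - 8*u2 - 2*u3 == x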
© Art Traynor 2011
Mathematics
Change of Basis ( via Transition Matrix)
Basis Transformation
Vector Space
Section 4.7 (Pg. 204)
For an arbitrary vector x in vector space V described by coordinates relative to
a Standard Basis B , an ancillary description – in coordinate terms relative
to an Alternate Basis B′ ( B-Prime ) – can be determined by operation
of a Transition Matrix P ( the entries of which are populated by the
components of the Alternate-Basis vectors ) .
▪ The matrix product of P and the column vector [ x ]B′ of scalars
ci = [ ci , ci+1 ,…, cn – 1 , cn ] , which forms the coordinate matrix
of x relative to the alternate basis B′ ( B-Prime ) , yields the
column vector [ x ]B relative to the " root " basis.
▪ The " Change of Basis " is thus the solution for the unknown column
vector [ x ]B′ of scalars ci = [ ci , ci+1 ,…, cn – 1 , cn ] .
© Art Traynor 2011
Mathematics
Change of Basis ( via Transition Matrix)
Basis Transformation
Vector Space
Section 4.7 (Pg. 204)
▪ The matrix product of P and the column vector [ x ]B′ of scalars
ci = [ ci , ci+1 ,…, cn – 1 , cn ] , which forms the coordinate matrix
of x relative to the alternate basis B′ ( B-Prime ) , yields the
column vector [ x ]B relative to the root basis.
P = B´ = { u1 , u2 , u3 }
P = B´ = { ( 1 , 0, 1 ) , ( 0 , – 1, 2 ) , ( 2 , 3, – 5 ) }
xn = ci ui + ci+1 ui+1 +…+ cn –1 un – 1 + cn un
               u1    u2    u3
            [ p11   p12   p13 ]     [ c1 ]        [ d1 ]
P = B´ =    [ p21   p22   p23 ]  ·  [ c2 ]    =   [ d2 ]
            [ p31   p32   p33 ]     [ c3 ]        [ d3 ]
                                   [ x ]B′        [ x ]B
                               Alternate Basis    Root Basis
Note the convention wherein blue font is assigned to those elements of
the B-Basis set, whose component vectors are assigned to " u " ;
red font accordingly is assigned to those elements of the B-Prime ( B′ )
Basis set, whose component vectors are assigned to " v " .
Note that Matrix P = B′ is expressed in Transition Matrix Form ( TMF ) ,
wherein the columns are populated by the individual Basis vector
components, with the rows collecting like " unknown " terms.
© Art Traynor 2011
Mathematics
Change of Basis ( via Transition Matrix)
Basis Transformation
Vector Space
Section 4.7 (Pg. 204)
▪ The matrix product of P and the column vector [ x ]B′ of scalars
ci = [ ci , ci+1 ,…, cn – 1 , cn ] , which forms the coordinate matrix
of x relative to the alternate basis B′ ( B-Prime ) , yields the
column vector [ x ]B relative to the root basis.
P = B´ = { u1 , u2 , u3 }
P = B´ = { ( 1 , 0, 1 ) , ( 0 , – 1, 2 ) , ( 2 , 3, – 5 ) }
xn = ci ui + ci+1 ui+1 +…+ cn –1 un – 1 + cn un
               u1    u2    u3
            [  1     0     2 ]     [   5 ]        [   1 ]
P = B´ =    [  0   – 1     3 ]  ·  [ – 8 ]    =   [   2 ]
            [  1     2   – 5 ]     [ – 2 ]        [ – 1 ]
                                   [ x ]B′        [ x ]B
                               Alternate Basis    Root Basis
© Art Traynor 2011
Mathematics
Change of Basis ( via Transition Matrix)
Basis Transformation
Vector Space
Section 4.7 (Pg. 204)
▪ The matrix product of P and the column vector [ x ]B′ yields the
column vector [ x ]B ; conversely, the product of P – 1 and [ x ]B
recovers [ x ]B′ :
P [ x ]B′ = [ x ]B            Change of Basis : B′ → B
[ x ]B′ = P – 1 [ x ]B         Change of Basis : B → B′
Whereas the product of the Transition Matrix and the Alternate Basis
( aB ) coordinates yields the Root Basis coordinates, it is equivalent to
state that the product of the Transition Matrix Inverse and the Root
Basis coordinates will yield the Alternate Basis coordinates.
We recall that to find an inverse matrix we adjoin the Identity Matrix In
( on the RHS ) to a matrix A , forming [ A In ] , and perform EROs by
GJE to arrive at an RREF, which will then result in the matrix [ In A – 1 ] ,
with the inverse occupying the RHS of the resultant matrix.
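Both directions of the change of basis can be exercised on the Example 3 data. A minimal sketch, assuming Python with sympy ( an assumption; the slides perform the same steps by hand ):

    from sympy import Matrix

    P = Matrix([[1,  0,  2],
                [0, -1,  3],
                [1,  2, -5]])        # columns are the alternate-basis vectors

    x_B  = Matrix([1, 2, -1])        # coordinates relative to the root (standard) basis
    x_Bp = P.inv() * x_B             # [x]_B' = P^{-1} [x]_B  ->  (5, -8, -2)

    assert P * x_Bp == x_B           # P [x]_B' = [x]_B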
© Art Traynor 2011
Mathematics
Change of Basis ( via Transition Matrix)
Basis Transformation
Vector Space
Section 4.7 (Pg. 204)
Transition Matrix
               u1    u2    u3
            [ p11   p12   p13 ]      [ c1 ]        [ d1 ]
P = B´ =    [ p21   p22   p23 ]  ·   [ c2 ]    =   [ d2 ]
            [ p31   p32   p33 ]      [ c3 ]        [ d3 ]
                                    [ x ]B′        [ x ]B
                                Alternate Basis    Root Basis
Transition Matrix Inverse  ( here written with entries p'ij )
               v1    v2    v3
            [ p'11  p'12  p'13 ]     [ d1 ]        [ c1 ]
P – 1 =     [ p'21  p'22  p'23 ]  ·  [ d2 ]    =   [ c2 ]
            [ p'31  p'32  p'33 ]     [ d3 ]        [ c3 ]
                                    [ x ]B         [ x ]B′
                                 Root Basis     Alternate Basis
© Art Traynor 2011
Mathematics
Change of Basis ( via Transition Matrix)
Basis Transformation
Vector Space
Section 4.7 (Pg. 206)
▪ The Transition Matrix ( TM ) from a Root Basis ( RB = B ) to
an Alternate Basis ( aB = B′ , B-Prime ) is found, as is the case for
similar matrix inverses, by adjoining the Root Basis matrix ( on the RHS )
to the aB matrix ( on the LHS ) and applying EROs via GJE to arrive at
an RREF-reduced matrix, the result of which will form an adjoined
matrix composed of the Identity Matrix ( IM – on the LHS ) and the
P – 1 Transition Matrix ( from B to B′ – on the RHS ).
[ B′ B ]   ( Adjoin )   →   GJE ( EROs )   →   [ In P – 1 ]   ( RREF )
© Art Traynor 2011
Mathematics
Transition Matrix
Basis Transformation
Vector Space
Section 4.7 (Pg. 208)
▪ When the Root Basis ( RB = B ) is equivalent to the Identity Matrix
( IM ) , the process of finding a basis-changing Transition Matrix
( TM ) is simplified to a symmetric operation whereby the Root-
Identity Matrix is adjoined ( on the RHS ) to an Alternate Basis
Matrix ( ABM = B′ , B-Prime, on the LHS ) and EROs are applied
via GJE to arrive at an RREF-reduced matrix, the result of which will
form an adjoined matrix composed of the Root-Identity Matrix ( RIM
– on the LHS ) and the P – 1 Transition Matrix ( from B to B′ –
on the RHS ).
[ B′ In ]   ( Adjoin )   →   GJE ( EROs )   →   [ In P – 1 ]   ( RREF )
P – 1 = ( B′ ) – 1        Change of Basis : Standard → Non-Standard
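The adjoin-and-reduce recipe can be replayed mechanically; the right half of the reduced block is the transition matrix P – 1. A minimal sketch, assuming Python with sympy ( not part of the original slides ), using the Example 3 basis:

    from sympy import Matrix, eye

    Bprime = Matrix([[1,  0,  2],
                     [0, -1,  3],
                     [1,  2, -5]])          # alternate-basis vectors as columns

    adjoined = Bprime.row_join(eye(3))      # [ B' | I_n ]
    rref, _ = adjoined.rref()               # GJE / EROs to RREF: [ I_n | P^{-1} ]
    P_inv = rref[:, 3:]

    assert P_inv == Bprime.inv()            # P^{-1} = (B')^{-1}
    print(P_inv * Matrix([1, 2, -1]))       # standard -> non-standard: (5, -8, -2)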
LinearAlgebra_160423_01

  • 1.
    © Art Traynor2011 Mathematics Definition Mathematics Wiki: “ Mathematics ” 1564 – 1642 Galileo Galilei Grand Duchy of Tuscany ( Duchy of Florence ) City of Pisa Mathematics – A Language “ The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language…without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth. ”
  • 2.
    © Art Traynor2011 Mathematics Definition Algebra – A Mathematical Grammar Mathematics A formalized system ( a language ) for the transmission of information encoded by number Algebra A system of construction by which mathematical expressions are well-formed Expression Symbol Operation Relation Designate expression elements or Operands ( Terms / Monomials ) Transformations or LOC’s capable of rendering an expression into a relation A mathematical Structure between operands represented by a well-formed Expression A well-formed symbolic representation of Operands ( Terms or Monomials ) , of discrete arity, upon which one or more Operations ( Laws of Composition - LOC’s ) may structure a Relation 1. Identifies the explanans by non-tautological correspondences Definition 2. Isolates the explanans as a proper subset from its constituent correspondences 3. Terminology a. Maximal parsimony b. Maximal syntactic generality 4. Examples a. Trivial b. Superficial Mathematics Wiki: “ Polynomial ” Wiki: “ Degree of a Polynomial ”
  • 3.
    © Art Traynor2011 Mathematics Disciplines Algebra One of the disciplines within the field of Mathematics Mathematics Others are Arithmetic, Geometry, Number Theory, & Analysis  The study of expressions of symbols ( sets ) and the well-formed rules by which they might be manipulated to preserve validity .  Algebra Elementary Algebra Abstract Algebra A class of Structure defined by the object Set and its Operations ( or Laws of Composition – LOC’s )  Linear Algebra Mathematics
  • 4.
    © Art Traynor2011 Mathematics Definitions Expression Symbol Operation Relation Designate expression elements or Operands ( Terms / Monomials ) Transformations or LOC’s capable of rendering an expression into a relation A mathematical structure between operands represented by a well-formed expression A well-formed symbolic representation of Operands ( Terms or Monomials ) , of discrete arity, upon which one or more Operations ( LOC’s ) may structure a Relation Expression – A Mathematical Sentence Proposition A declarative expression asserting a fact, the truth value of which can be ascertained Formula A concise symbolic expression positing a relation VariablesConstants An alphabetic character representing a number the value of which is arbitrary, unspecified, or unknown Operands ( Terms / Monomials ) A transformation invariant scalar quantity Mathematics Predicate A Proposition admitting the substitution of variables O’Leary, Section 2.1, Pg. 41 Expression constituents consisting of Constants and Variables exhibiting exclusive parity Polynomial An Expression composed of Constants ( Coefficients ) and Variables ( Unknowns) with an LOC’s of Addition, Subtraction, Multiplication and Non-Negative Exponentiation Wiki: “ Polynomial ” Wiki: “ Degree of a Polynomial ”
  • 5.
    © Art Traynor2011 Mathematics Definitions Expression Symbol Operation Relation Designate expression elements or Operands ( Terms / Monomials ) Transformations capable of rendering an expression into a relation A mathematical structure between operands represented by a well-formed expression Expression – A Mathematical Sentence Proposition A declarative expression the truth value of which can be ascertained Formula A concise symbolic expression positing a relation VariablesConstants An alphabetic character representing a number the value of which is arbitrary, unspecified, or unknown Operands ( Terms / Monomials ) A transformation invariant scalar quantity Equation A formula stating an equivalency class relation Inequality A formula stating a relation among operand cardinalities Function A Relation between a Set of inputs and a Set of permissible outputs whereby each input is assigned to exactly one output Univariate: an equation containing only one variable ( e.g. Unary ) Multivariate: an equation containing more than one variable ( e.g. n-ary ) Mathematics Expression constituents consisting of Constants and Variables exhibiting exclusive parity Polynomial
  • 6.
    © Art Traynor2011 Mathematics Definitions Expression Symbol Operation Relation Expression – A Mathematical Sentence Proposition Formula VariablesConstants Operands ( Terms ) Equation A formula stating an equivalency class relation Linear Equation An equation in which each term is either a constant or the product of a constant and (a) variable[s] of the first degree Mathematics Polynomial
  • 7.
    © Art Traynor2011 Mathematics Expression Mathematical Expression A representational precursive discrete composition to a Mathematical Statement or Proposition ( e.g. Equation ) consisting of :  Operands / Terms Expression A well-formed symbolic representation of Operands ( Terms or Monomials ) , of discrete arity, upon which one or more Operations ( LOC’s ) may structure a Relation Mathematics n Scalar Constants ( i.e. Coefficients ) n Variables or Unknowns The Cardinality of which is referred to as the Arity of the Expression Constituent representational Symbols composed of : Algebra Laws of Composition ( LOC’s ) Governs the partition of the Expression into well-formed Operands or Terms ( the Cardinality of which is a multiple of Monomials )
  • 8.
    © Art Traynor2011 Mathematics Arity Arity Expression The enumeration of discrete symbolic elements ( Variables ) comprising a Mathematical Expression is defined as its Arity  The Arity of an Expression can be represented by a non-negative integer index variable ( ℤ + or ℕ ), conventionally “ n ”  A Constant ( Airty n = 0 , index ℕ )or Nullary represents a term that accepts no Argument  A Unary expresses an Airty n = 1 A relation can not be defined for Expressions of Arity less than two: n < 2 A Binary expresses Airty n = 2 All expressions possessing Airty n > 1 are n-ary, Multary, Multiary, or Polyadic VariablesConstants Operands Expression Polynomial
  • 9.
    © Art Traynor2011 Mathematics Expression Arity Operand  Arithmetic : a + b = c The distinct elements of an Expression by which the structuring Laws of Composition ( LOC’s ) partition the Expression into discrete Monomial Terms  “ a ” and “ b ” are Operands  The number of Variables of an Expression is known as its Arity n Nullary = no Variables ( a Scalar Constant ) n Unary = one Variable n Binary = two Variables n Ternary = three Variables…etc. VariablesConstants Operands Expression Polynomial n “ c ” represents a Solution ( i.e. the Sum of the Expression ) Arity is canonically delineated by a Latin Distributive Number, ending in the suffix “ –ary ”
  • 10.
    © Art Traynor2011 Mathematics Arity Arity ( Cardinality of Expression Variables ) Expression A relation can not be defined for Expressions of Arity less than two: n < 2 Nullary Unary n = 0 n = 1 Binary n = 2 Ternary n = 3 1-ary 2-ary 3-ary Quaternary n = 4 4-ary Quinary n = 5 5-ary Senary n = 6 6-ary Septenary n = 7 7-ary Octary n = 8 8-ary Nonary n = 9 9-ary n-ary VariablesConstants Operands Expression Polynomial 0-ary
  • 11.
    © Art Traynor2011 Mathematics Operand Parity – Property of Operands Parity n is even if $ k | n = 2k n is odd if $ k | n = 2k+1 Even  Even Integer Parity Same Parity Even  Odd Opposite Parity
  • 12.
    © Art Traynor2011 Mathematics Polynomial Expression A well-formed symbolic representation of operands, of discrete arity, upon which one or more operations can structure a Relation Expression Polynomial Expression A Mathematical Expression , the Terms ( Operands ) of which are a compound composition of : Polynomial Constants – referred to as Coefficients Variables – also referred to as Unknowns And structured by the Polynomial Structure Criteria ( PSC ) arithmetic Laws of Composition ( LOC’s ) including : Addition / Subtraction Multiplication / Non-Negative Exponentiation LOC ( Pn ) = { + , – , x bn ∀ n ≥ 0 } Wiki: “ Polynomial ” An excluded equation by Polynomial Structure Criteria ( PSC ) Σ an xi n i = 0 P( x ) = an xn + an – 1 xn – 1 +…+ ak+1 xk+1 + ak xk +…+ a1 x1 + a0 x0 Variable Coefficient Polynomial Term From the Greek Poly meaning many, and the Latin Nomen for name    
  • 13.
    © Art Traynor2011 Mathematics Degree Expression Polynomial Degree of a Polynomial Polynomial Wiki: “ Degree of a Polynomial ” The Degree of a Polynomial Expression ( PE ) is supplied by that of its Terms ( Operands ) featuring the greatest Exponentiation For a multivariate term PE , the Degree of the PE is supplied by that Term featuring the greatest summation of Variable exponents  P = Variable Cardinality & Variable Product Exponent Summation & Term Cardinality Arity Latin “ Distributive ” Number suffix of “ – ary ” Degree Latin “ Ordinal ” Number suffix of “ – ic ” Latin “ Distributive ” Number suffix of “ – nomial ” 0 = 1 = 2 = 3 = Nullary Unary Binary Tenary Constant Linear Quadratic Cubic Monomial Binomial Trinomial An Expression composed of Constants ( Coefficients ) and Variables ( Unknowns) with an LOC of Addition, Subtraction, Multiplication and Non- Negative Exponentiation
  • 14.
    © Art Traynor2011 Mathematics Degree Polynomial Degree of a Polynomial Nullary Unary p = 0 p = 1 Linear Binaryp = 2 Quadratic Ternaryp = 3 Cubic 1-ary 2-ary 3-ary Quaternaryp = 4 Quartic4-ary Quinaryp = 5 5-ary Senaryp = 6 6-ary Septenaryp = 7 7-ary Octaryp = 8 8-ary Nonaryp = 9 9-ary “ n ”-ary Arity Degree Monomial Binomial Trinomial Quadranomial Terms Constant Quintic P Wiki: “ Degree of a Polynomial ” Septic Octic Nonic Decic Sextic aka: Heptic aka: Hexic
  • 15.
    © Art Traynor2011 Mathematics Degree Expression Polynomial Degree of a Polynomial Polynomial Wiki: “ Degree of a Polynomial ” An Expression composed of Constants ( Coefficients ) and Variables ( Unknowns) with an LOC of Addition, Subtraction, Multiplication and Non- Negative Exponentiation The Degree of a Polynomial Expression ( PE ) is supplied by that of its Terms ( Operands ) featuring the greatest Exponentiation For a PE with multivariate term(s) , the Degree of the PE is supplied by that Term featuring the greatest summation of individual Variable exponents  P( x ) = ai xi 0 Nullary Constant Monomial P( x ) = ai xi 1 Unary Linear Monomial P( x ) = ai xi 2 Unary Quadratic Monomial ai xi 1 yi 1P( x , y ) = Binary Quadratic Monomial Univariate Bivariate
  • 16.
    © Art Traynor2011 Mathematics Degree Expression Polynomial Degree of a Polynomial Polynomial Wiki: “ Degree of a Polynomial ” The Degree of a Polynomial Expression ( PE ) is supplied by that of its Terms ( Operands ) featuring the greatest Exponentiation For a multivariate term PE , the Degree of the PE is supplied by that Term featuring the greatest summation of Variable exponents  P( x ) = ai xi 0 Nullary Constant Monomial P( x ) = ai xi 1 Unary Linear Monomial P( x ) = ai xi 2 Unary Quadratic Monomial ai xi 1 yi 1P( x , y ) = Binary Quadratic Monomial ai xi 1 yi 1zi 1P( x , y , z ) = Ternary Cubic Monomial Univariate Bivariate Trivariate Multivariate
  • 17.
    © Art Traynor2011 Mathematics Quadratic Expression Polynomial Quadratic Polynomial Polynomial Wiki: “ Degree of a Polynomial ” A Unary or greater Polynomial composed of at least one Term and : Degree precisely equal to two Quadratic ai xi n ∀ n = 2  ai xi n yj m ∀ n , m n + m = 2|: Etymology From the Latin “ quadrātum ” or “ square ” referring specifically to the four sides of the geometric figure Wiki: “ Quadratic Function ” Arity ≥ 1  ai xi n ± ai + 1 xi + 1 n ∀ n = 2 Unary Quadratic Monomial Binary Quadratic Monomial Unary Quadratic Binomial  ai xi n yj m ± ai + 1 xi + 1 n ∀ n + m = 2 Binary Quadratic Binomial
  • 18.
    © Art Traynor2011 Mathematics Equation Equation Expression An Equation is a statement or Proposition ( aka Formula ) purporting to express an equivalency relation between two Expressions :  Expression Proposition A declarative expression asserting a fact whose truth value can be ascertained Equation A symbolic formula, in the form of a proposition, expressing an equality relationship Formula A concise symbolic expression positing a relationship between quantities VariablesConstants Operands Symbols Operations The Equation is composed of Operand terms and one or more discrete Transformations ( Operations ) which can render the statement true ( i.e. a Solution ) Polynomial
  • 19.
    © Art Traynor2011 Mathematics Equation Solution Solution and Solution Sets  Free Variable: A symbol within an expression specifying where a substitution may be made Contrasted with a Bound Variable which can only assume a specific value or range of values  Solution: A value when substituted for a free variable which renders an equation true Analogous to independent & dependent variables Unique Solution: only one solution can render the equation true (quantified by $! ) General Solution: constants are undetermined General Solution: constants are value-specified (bound?) Unique Solution Particular Solution General Solution Solution Set n A family (set) of all solutions – can be represented by a parameter (i.e. parametric representation)  Equivalent Equations: Two (or more) systems of equations sharing the same solution set Section 1.1, (Pg. 3) Section 1.1, (Pg. 3) Section 1.1, (Pg. 6) Any of which could include a Trivial Solution Section 1.2, (Pg. 21)
  • 20.
    © Art Traynor2011 Mathematics Equation Solution Solution and Solution Sets  Solution: A value when substituted for a free variable which renders an equation true Unique Solution: only one solution can render the equation true (quantified by $! ) General Solution: constants are undetermined General Solution: constants are value-specified (bound?) Solution Set n For some function f with parameter c such that f(xi , xi+1 ,…xn – 1 , xn ) = c the family (set) of all solutions is defined to include all members of the inverse image set such that f(x) = c  f -1(c) = x f -1(c) = {(ai , ai+1 ,…an-1 , an )  Ti· Ti+1 ·…· Tn-1· Tn | f(ai , ai+1 ,…an-1 , an ) = c } where Ti· Ti+1 ·…· Tn-1· Tn is the domain of the function f o f -1(c) = { }, or empty set ( no solution exists ) o f -1(c) = 1, exactly one solution exists ( Unique Solution, Singleton) o f -1(c) = { cn } , a finite set of solutions exist o f -1(c) = {∞ } , an infinite set of solutions exists Inconsistent Consistent Section 1.1, (Pg. 5)
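A small sketch of the inverse-image view of a solution set, under the simplifying assumption of a finite domain (the function f and the domain bounds are invented for illustration): the solution set of f(x, y) = c is every domain tuple whose image equals c.

```python
# Solution set as the inverse image f^-1(c) over a small finite domain.

def f(x, y):
    return x + 2 * y          # an arbitrary example function

c = 5
domain = [(x, y) for x in range(-3, 4) for y in range(-3, 4)]

inverse_image = {pt for pt in domain if f(*pt) == c}

print(inverse_image)          # e.g. {(3, 1), (1, 2), (-1, 3)}
if not inverse_image:
    print("inconsistent: no solution")
elif len(inverse_image) == 1:
    print("unique solution (singleton)")
else:
    print(f"{len(inverse_image)} solutions on this finite domain")
```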
  • 21.
    © Art Traynor2011 Mathematics Linear Equation Linear Equation Equation An Equation consisting of: Operands that are either Any Variables are restricted to the First Order n = 1 Linear Equation An equation in which each term is either a constant or the product of a constant and (a) variable[s] of the first order Expression Proposition Equation Formula n Constant(s) or n A product of Constant(s) and one or more Variable(s) The Linear character of the Equation derives from the geometry of its graph which is a line in the R2 plane  As a Relation the Arity of a Linear Equation must be at least two, or n ≥ 2 , or a Binomial or greater Polynomial  Polynomial
  • 22.
    © Art Traynor2011 Mathematics Equation Linear Equation Linear Equation  An equation in which each term is either a constant or the product of a constant and (a) variable[s] of the first order Term ai represents a Coefficient b = Σi= 1 n ai xi = ai xi + ai+1 xi+1…+ an – 1 xn – 1 + an xn Equation of a Line in n-variables  A linear equation in “ n ” variables, xi + xi+1 …+ xn-1 + xn has the form: n Coefficients are distributed over a defined field ( e.g. N , Z , Q , R , C ) Term xi represents a Variable ( e.g. x, y, z ) n Term a1 is defined as the Leading Coefficient n Term x1 is defined as the Leading Variable Section 1.1, (Pg. 2) Section 1.1, (Pg. 2) Section 1.1, (Pg. 2) Section 1.1, (Pg. 2) Coefficient = a multiplicative factor (scalar) of fixed value (constant) Section 1.1, (Pg. 2)
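A minimal sketch (coefficients and test points are assumed, not from the slides) of the n-variable linear form a1 x1 + … + an xn = b: evaluate the left-hand side at a candidate point and compare it with b.

```python
# Evaluate and test an n-variable linear equation a_1*x_1 + ... + a_n*x_n = b.

def lhs(coeffs, point):
    """Sum of coefficient * variable products, a_i * x_i."""
    return sum(a * x for a, x in zip(coeffs, point))

def satisfies(coeffs, b, point):
    return lhs(coeffs, point) == b

a = [2, -1, 4]        # 2x - y + 4z
b = 5
print(satisfies(a, b, (1, 1, 1)))   # True:  2 - 1 + 4 = 5
print(satisfies(a, b, (0, 0, 1)))   # False: 4 != 5
```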
  • 23.
    © Art Traynor2011 Mathematics Linear Equation Equation Standard Form ( Polynomial )  Ax + By = C  Ax1 + By1 = C For the equation to describe a line ( no curvature ) the variable indices must equal one   ai xi + ai+1 xi+1 …+ an – 1 xn –1 + an xn = b  ai xi 1 + ai+1 x 1 …+ an – 1 x 1 + a1 x 1 = bi+1 n – 1 n n ℝ 2 : a1 x + a2 y = b ℝ 3 : a1 x + a2 y + a3 z = b Blitzer, Section 3.2, (Pg. 226) Section 1.1, (Pg. 2) Test for Linearity  A Linear Equation can be expressed in Standard Form As a species of Polynomial , a Linear Equation can be expressed in Standard Form  Every Variable term must be of precise order n = 1 Linear Equation An equation in which each term is either a constant or the product of a constant and (a) variable[s] of the first order Expression Proposition Equation Formula Polynomial
  • 24.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation – Solution Consistency  Solution: A value when substituted for a free variable which renders an equation true Unique Solution Particular Solution General Solution Solution Set n A family (set) of all solutions – can be represented by a parameter No Solution - Inconsistent 1 0 0 2 1 0 – 1 0 0 4 3 – 2 Represents “ 0 = – 2 ” , a contradiction, and thus no solution {  } to the LE system for which the augmented matrix stands 1x1 + 0x2 – 3x3 = – 1 System 0x1 + 1x2 – 1x3 = 0 x1 – 3x3 = – 1 System x2 = x3 Section 1.1, (Pg. 8)
  • 25.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation – Solution Consistency  Solution: A value when substituted for a free variable which renders an equation true Solution Set - Consistent n A family (set) of all solutions – can be represented by a parameter 1x1 + 0x2 – 3x3 = – 1 System 0x1 + 1x2 – 1x3 = 0 x1 = 3x3 – 1 System x2 = x3 Note that x3 can be parameterized ( as a composite function f ○ g → ( f ○ g )( x ) = f ( g ( x )) with y = f ( u ) and u = g ( x3 ) ) x2 = x3 = u Tautology/Identity* x1 = 3u – 1 The solution set for “ f(u) ” can thus be indexed by/over Z+ representing a countably infinite solution set * Section 1.1, (Pg. 3)
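A short check of the parameterization above (pure Python, nothing assumed beyond the slide's system): x3 = u, x2 = u, x1 = 3u − 1 satisfies both equations for every value of the parameter u.

```python
# The consistent system:  x1 - 3*x3 = -1  and  x2 - x3 = 0,
# parameterized as x3 = u, x2 = u, x1 = 3u - 1.

def solution(u):
    return (3 * u - 1, u, u)          # (x1, x2, x3)

def satisfies(x1, x2, x3):
    return (x1 - 3 * x3 == -1) and (x2 - x3 == 0)

for u in range(5):                     # index a few members of the infinite family
    print(u, solution(u), satisfies(*solution(u)))   # always True
```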
  • 26.
    © Art Traynor2011 Mathematics ‘ f(x) ’ ‘– f ’ ‘ x ’ ‘ f(x) ’ ‘ +f -1 ’ ‘ x ’ Linear Algebra Solution Linear Equation – Solution Set  Solution: A value when substituted for a free variable which renders an equation true Solution Set n For some function f with parameter c such that f ( xi , xi+1 ,…xn – 1 , xn ) = c the family ( set ) of all solutions is defined to include all members of the inverse image set such that f ( x ) = c  f -1( c ) = x
  • 27.
    © Art Traynor2011 Mathematics Linear Algebra Solution aij xi + aij+1 xi+1 + . . . + ain – 1 xn – 1 + ain xn = bi ai+1j xi + ai+1j+1 xi+1 + . . . + ai+1n – 1 xn – 1 + ai+1n xn = bi+1 am – 1j xi + am – 1j+1 xi+1 + . . . + am – 1n – 1 xn – 1 + am – 1n xn = bm – 1 . . . . . . . . . . . . . . . amj xi + amj+1 xi+1 + . . . + amn xn – 1 + amn xn = bm Linear Equation – System  A system of m linear equations in n variables is a set of m equations , each of which is linear in the same n variables Linear Equation System Solution Set The set S = { si , si+1 ,…sn-1 , sn } which renders each of the equations in the system true Section 1.1, (Pg. 4) Section 1.1, (Pg. 4)
  • 28.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation – Back Substitution x – 2y = 5 y = – 2 System x – 2 y = 5 x – 2 (– 2 ) = 5 x + 4 = 5 x = 5 – 4 x = 1 Solution Set – Singleton, Unique Solution, ( exactly one solution ) S = { 1, – 2 } Section 1.1, (Pg. 6)
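A small back-substitution sketch applied to the slide's system x − 2y = 5, y = −2 (plain Python; the function name and representation are illustrative assumptions).

```python
# Back substitution for an upper-triangular system U x = b: solve the last
# equation first, then substitute upward.

def back_substitute(U, b):
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                     # last equation first
        known = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - known) / U[i][i]
    return x

U = [[1.0, -2.0],
     [0.0,  1.0]]
b = [5.0, -2.0]
print(back_substitute(U, b))    # [1.0, -2.0]  ->  the unique solution (1, -2)
```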
  • 29.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation Equivalence  Equivalent Linear Equations: Two (or more) systems of linear equations sharing the same solution set  Gaussian Elimination Operations Producing Linear Equation Equivalent Systems Permutation/Interchange – of two equations Multiply – an equation by a non-zero constant Add – a multiple of an equation to another equation Section 1.1, (Pg. 6) Section 1.1, (Pg. 7) Otherwise known as Elementary Row Operations Section 1.2, (Pg. 14) n ERO’s should always proceed with an Augend/Multiplicand of lesser rank and Summand/Multiplier of greater rank ( Aij < Amn ) yielding a Sum/Product substituted for the second Operand n Multiplication by a scalar ( non-zero constant ) need not affect any change in rank for the resultant row
  • 30.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation – Row Echelon Form (REF) 1 0 0 2 1 0 – 1 0 1 4 3 – 2  A matrix in Row-Echelon Form ( REF ) has three distinguishing characteristics Any rows consisting entirely of zeros is positioned at the bottom of the matrix For each row that does not consist entirely of zeros, the first non-zero entry is a “ 1 ” ( called the Leading One, aka Pivot )  1 0 0 2 1 0 – 1 0 1 4 3 – 2 Section 1.1, (Pg. 6) Section 1.2, (Pg. 15) Section 1.2, (Pg. 15) 0 0 0 0 0 0 0 0 Section 1.2, (Pg. 16) Every matrix is row equivalent to a matrix in Row-Echelon Form ( REF )
  • 31.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation – Row Echelon Form (REF)  A matrix in Row-Echelon Form ( REF ) has three distinguishing characteristics For two successive (non-zero) rows, the leading one in the higher row is farther to the left than the leading one ( Pivot ) in the lower row  1 0 0 2 1 0 – 1 0 1 4 3 – 2 Section 1.1, (Pg. 6) Section 1.2, (Pg. 15) 0 0 0 0  A matrix in Reduced Row-Echelon Form ( RREF ) has one additional characteristic Every column that has a leading one ( Pivot ) has zeros in every position above and below its leading one ( Pivot )  1 0 0 0 1 0 0 0 1 4 3 – 2 Section 1.2, (Pg. 15) 0 0 0 0  Every matrix is row equivalent to a matrix in Row-Echelon Form ( REF )
  • 32.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation – Reduced Row Echelon Form (RREF) 1 – 1 2 – 2 3 – 5 3 0 5 9 – 4 17 x – 2y + 3z = 9 – x + 3y = – 4 2x – 5y + 5z = 17 System Augmented Matrix R2 : R2 + R1  R2´ – 1 3 0 – 4 + R1 : 1 – 2 3 9 = R2´ : 0 1 3 5 1 0 2 – 2 1 – 5 3 3 5 9 5 17 R3 – 2R1  R3´
  • 33.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation – Reduced Row Echelon Form (RREF) 1 2 – 2 – 5 3 5 9 17 Augmented Matrix R3 : – 2R1 : – 2 4 – 6 – 18 = R3´ : 0 – 1 – 1 – 1 1 0 0 – 2 1 – 1 3 3 – 1 9 5 – 1 R3 + R2  R3´´ 0 1 3 5 R3 – 2R1  R3´ 2 – 5 5 17
  • 34.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation – Reduced Row Echelon Form (RREF) Augmented Matrix R3 : + R2 : = R3´´ : 0 0 2 4 1 0 – 2 1 3 3 9 5 R3 + R2  R3´´ 1 – 2 3 9 0 1 3 5 0 – 1 – 1 – 1 0 – 1 – 1 – 1 0 1 3 5 0 0 2 4 R3  R3´´ ´1 2
  • 35.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation – Reduced Row Echelon Form (RREF) Augmented Matrix R3 : 1 – 2 3 9 0 1 3 5 R3  R3´´ ´1 2 1 – 2 3 9 0 1 3 5 0 0 2 4 R3 : 1 2 R3´´ ´ : 0 0 2 4 0 0 1 2 0 0 1 2 0 0 1 2 Matrix is in Row-Echelon Form ( REF )  Proceeding to Reduced Row-Echelon Form ( RREF ) R1 + 2R2  R1´
  • 36.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation – Reduced Row Echelon Form (RREF) Augmented Matrix R1 : + 2R2 : R1´ : 0 2 6 10 1 0 9 19 1 0 9 19 0 1 3 5 0 0 1 2 1 – 2 3 9 0 1 3 5 0 0 1 2 R1 + 2R2  R1´ 1 – 2 3 9 R2 – 3R3  R2´´
  • 37.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation – Reduced Row Echelon Form (RREF) Augmented Matrix R2 : – 3R3 : R2´´ : 0 0 – 3 – 6 0 1 0 – 1 1 0 9 19 0 1 0 – 1 0 0 1 2 R1 + 9R3  R1´´ 1 0 9 19 0 1 3 5 0 0 1 2 R2 – 3R3  R2´´ 0 1 3 5
  • 38.
    © Art Traynor2011 Mathematics Linear Algebra Solution Linear Equation – Reduced Row Echelon Form (RREF) Augmented Matrix R1 : – 9R3 : = R1´´ : 0 0 – 9 – 18 1 0 0 1 0 1 0 – 1 0 0 1 2 R1 – 9R3  R1´´ 1 0 9 19 0 1 0 – 1 0 0 1 2 1 0 0 1 1 0 9 19 Matrix is in Reduced Row-Echelon Form ( RREF )
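The worked reduction above can be double-checked with a CAS. This sketch assumes SymPy is available (it is not part of the original slides); Matrix.rref() performs the same Gauss-Jordan elimination and returns the reduced row-echelon form together with the pivot columns.

```python
from sympy import Matrix

# Augmented matrix of  x - 2y + 3z = 9,  -x + 3y = -4,  2x - 5y + 5z = 17
augmented = Matrix([[ 1, -2, 3,  9],
                    [-1,  3, 0, -4],
                    [ 2, -5, 5, 17]])

rref_form, pivots = augmented.rref()
print(rref_form)    # Matrix([[1, 0, 0, 1], [0, 1, 0, -1], [0, 0, 1, 2]])
print(pivots)       # (0, 1, 2)  ->  solution x = 1, y = -1, z = 2
```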
  • 39.
    © Art Traynor2011 Mathematics Matrices Matrix For positive integers m and n, an m x n (“m by n”) matrix is a rectangular array populated by entries aij , located at the i-th row and the j-th column: Linear Algebra  M = N: the matrix is a square of order n  The a11 , a22 , a33 , amn , sequence of entries is the main diagonal ( ↘ ) of the matrix M = # of Rows i = Row Number Index N = # of Columns j = Column Number Index Row 1 a11 Row 2 Row 3 Row m . . . a21 a31 am1 . . . a12 a22 a32 am2 . . . a13 a23 a33 am3 . . . . . . . . . . . . . . . . . . a1n a2n a3n amn . . . mi mi+1 mi+2 mm nj nj+1 nj+2 nn C1 C2 C3 . . . C4 Section 1.2, (Pg. 13) Section 1.2, (Pg. 13) Section 1.2, (Pg. 13)
  • 40.
    © Art Traynor2011 Mathematics Matrices Matrix For positive integers m and n, an m x n (“m by n”) matrix is a rectangular array Row 1 populated by entries aij , located at the i-th row and the j-th column: Linear Algebra a11 Row 2 Row 3 Row m . . . a21 a31 am1 . . . a12 a22 a32 am2 . . . a13 a23 a33 am3 . . . . . . . . . . . . . . . . . . a1n a2n a3n amn . . . mi mi+1 mi+2 mm nj nj+1 nj+2 nn M = # of Rows i = Row Number Index N = # of Columns j = Column Number Index C1 C2 C3 . . . Cn Section 1.2, (Pg. 13)  “ i ” is the row subscript  “ j ” is the column subscript
  • 41.
    © Art Traynor2011 Mathematics Linear Algebra Definitions Diagonal Matrix A matrix Anx n is said to be diagonal when all of the entries outside the main diagonal ( ↘ ) are zero Section 2.1, (Pg. 50) The matrix Dnx n is diagonal if : dij = 0 if i ≠ j "ij  { dii , di+1,i+1 ,…, dn–1,n–1 , dnn }   Any square diagonal matrix is also a Symmetric Matrix A diagonal matrix is also both Upper-Triangular and Lower-Triangular The Identity Matrix In is a diagonal matrix Any square Zero Matrix is a diagonal matrix d11 d21 d31 dm1 . . . d12 d22 d32 dm2 . . . d13 d23 d33 dm3 . . . . . . . . . . . . . . . . . . d1n d2n d3n dmn . . . Dnx n =
  • 42.
    © Art Traynor2011 Mathematics Linear Algebra Definitions Tr ( A ) = Σi = 1 n aii = a11 + a22 +…+ an –1n – 1 + ann Matrix Trace The trace of a matrix Anx n is the sum of the main diagonal entries Section 2.1, (Pg. 50) a11 a21 a31 am1 . . . a12 a22 a32 am2 . . . a13 a23 a33 am3 . . . . . . . . . . . . . . . . . . a1n a2n a3n amn . . . A
  • 43.
    © Art Traynor2011 Mathematics Matrices Linear Algebra 1 – 1 2 – 4 3 0 3 – 1 – 4 5 – 3 6 x – 4y + 3z = 5 – x + 3y – z = – 3 2x – 4z = – 6 System Augmented Matrix 1 – 1 2 – 4 3 0 3 – 1 – 4 Coefficient Matrix M = # of Rows i = Row Number Index N = # of Columns j = Column Number Index Augmented Matrix  A matrix representing a system of linear equations including both the coefficient and constant terms Coefficient = a multiplicative factor (scalar) of fixed value (constant) Section 1.2, (Pg. 13) Coefficient Matrix  A augmented matrix excluding any constant terms and populated only by the variable coefficients Section 1.2, (Pg. 13)
  • 44.
    © Art Traynor2011 Mathematics Linear Algebra Solution Gaussian Elimination With Back-Substitution  Express the system of linear equations as an Augmented Matrix Interchange – of two equations Multiply – an equation by a non-zero constant Add – a multiple of an equation to another equation Section 1.2, (Pg. 16) Every matrix is row equivalent to a matrix in Row-Echelon Form ( REF ) Section 1.2, (Pg. 16)  Apply ERO’s to restate the matrix in Row Echelon Form (REF) Section 1.2, (Pg. 13) Section 1.2, (Pg. 14) Section 1.1, (Pg. 6) Section 1.2, (Pg. 15)  Use Back Substitution to solve for unknown variables Section 1.1, (Pg. 6) Order Matters! Operate from left-to-right Multiply – an equation by a non-zero constant
  • 45.
    © Art Traynor2011 Mathematics Linear Algebra Solution Gauss-Jordan Elimination  Follow steps 1 & 2 of Gaussian Elimination Section 1.2, (Pg. 19) Every matrix is row equivalent to a matrix in Row-Echelon Form ( REF ) Section 1.2, (Pg. 16) Apply ERO’s to restate the matrix in Row Echelon Form (REF) Section 1.2, (Pg. 13) Section 1.2, (Pg. 14) Section 1.1, (Pg. 6) Section 1.2, (Pg. 15)  Use Back Substitution to solve for unknown variables Section 1.1, (Pg. 6) Order Matters! Operate from left-to-right Multiply – an equation by a non-zero constant  Express the system of linear equations as an Augmented Matrix n Interchange – of two equations n Multiply – an equation by a non-zero constant n Add – a multiple of an equation to another equation  Keep Going! Continue to apply ERO’s until matrix assumes Reduced Row Echelon Form ( RREF )  Section 1.2, (Pg. 15)
  • 46.
    © Art Traynor2011 Mathematics Linear Algebra Homogeneity Homogenous Systems of Linear Equations A linear equation system in which each of the constant terms is zero Section 1.2, (Pg. 21) aij xi + aij+1 xi+1 + . . . + ain – 1 xn – 1 + ain xn = 0 ai+1j xi + ai+1j+1 xi+1 + . . . + ai+1n – 1 xn – 1 + ai+1n xn = 0 am – 1j xi + am – 1j+1 xi+1 + . . . + am – 1n – 1 xn – 1 + am – 1n xn = 0 . . . . . . . . . . . . . . . amj xi + amj+1 xi+1 + . . . + amn xn – 1 + amn xn = 0  A homogenous LE system Must have At Least One Solution Section 1.2, (Pg. 21) Every homogenous LE system is Consistent # Equations < # Variables  Infinitely Many Solutions
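A sketch of the "fewer equations than unknowns" case, assuming SymPy and an invented 2-equation, 3-unknown coefficient matrix: the trivial solution always exists, and nullspace() exposes the free direction that makes the solution set infinite.

```python
from sympy import Matrix

A = Matrix([[1, 2, -1],
            [2, 4,  0]])          # 2 equations, 3 unknowns, right-hand side = 0

basis = A.nullspace()             # basis vectors of the solution space
print(basis)                      # one free direction -> infinitely many solutions
for v in basis:
    print(A * v)                  # each basis vector satisfies A x = 0
```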
  • 47.
    © Art Traynor2011 Mathematics Matrix Representation Linear Algebra Matrix Representation Methods  Uppercase Letter Designation Section 1.2, (Pg. 40) A , B , C  Bracket-Enclosed Representative Element [ aij ] , [ bij ] , [ cij ] a11 a21 a31 am1 . . . a12 a22 a32 am2 . . . a13 a23 a33 am3 . . . . . . . . . . . . . . . . . . a1n a2n a3n amn . . .  Rectangular Array Brackets denote a Matrix ( i.e. not a specific element/real number)
  • 48.
    © Art Traynor2011 Mathematics Matrix Equality Linear Algebra Matrix Equality Section 1.2, (Pg. 40) A = [ aij ] B = [ bij ] are equal when Amxn = Bmxn aij = bij 1 ≤ i ≤m 1 ≤ j ≤n a1 a2 a3 . . . ana = C1 C2 C3 . . . Cn Row Matrix / Row Vector A 1 x n (“ 1 by n ”) matrix is a single row Column Matrix / Column Vector b1 b2 b3 bm . . . C1 An m x 1 (“ m by 1 ”) matrix is a single column
  • 49.
    © Art Traynor2011 Mathematics Matrix Operations Linear Algebra Matrix Summation Section 1.2, (Pg. 41) A = [ aij ] B = [ bij ] is given by + A + B = [ aij + bij ] – 1 0 2 1 1 – 1 3 2 + = ( – 1 + 1 ) = ( 0 + [ – 1] ) ( 2 + 3 ) ( 1 + 2 ) 0 – 1 5 3 Scalar Multiplication 1 – 3 2 2 0 1 4 – 1 2 A = 3A = 3 ( 1 ) 3 ( 2 ) 3 ( 4 ) 3 ( – 3 ) 3 ( 0 ) 3 ( – 1 ) 3 ( 2 ) 3 ( 1 ) 3 ( 2 ) 3 – 9 6 6 0 3 12 – 3 6 3A =
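A quick check of the two operations above, reading the slide's arrays row-wise (NumPy is assumed here purely for brevity; both operations are defined entry by entry, (A + B)ij = aij + bij and (cA)ij = c·aij).

```python
import numpy as np

A = np.array([[-1, 0],
              [ 2, 1]])
B = np.array([[ 1, -1],
              [ 3,  2]])
print(A + B)         # [[ 0 -1]
                     #  [ 5  3]]

C = np.array([[1, -3, 2],
              [2,  0, 1],
              [4, -1, 2]])
print(3 * C)         # [[ 3 -9  6]
                     #  [ 6  0  3]
                     #  [12 -3  6]]
```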
  • 50.
    © Art Traynor2011 Mathematics Matrix Operations Linear Algebra Section 1.2, (Pg. 42)Matrix Multiplication – 1 4 5 3 – 2 0 A = A = [ aij ] Amx n B = [ bij ] Bnx p then AB = [ cij ] = Σk = 1 n aik bkj = ai 1 b1 j + ai2 b2j +…+ ain –1 bn-1j + ain bnj The entries of Row “ Aik” ( the i-th row ) are multiplied by the entries of “ Bkj” ( the j-th column ) and sequentially summed through Row “ Ain” and Column “ Bnj” to form the entry at [ cij ] – 3 – 4 2 1 B = c11 c12 C = c21 c22 c31 c32 a11b11 + a12b21 a11b12 + a12b22 = a21b11 + a22b21 a21b12 + a22b22 a31b11 + a32b21 a31b12 + a32b22 Product Summation Operand Count For Each Element of AB (single entry) Product Summation (Column-Row) Index  For the product of two matrices to be defined, the column count of the multiplicand matrix must equal the row count of the multiplier matrix ( i.e. Ac = Br )  ABmx p
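A plain triple-loop sketch that mirrors the summation cij = Σk aik bkj described above; the product is defined only when A's column count equals B's row count. The specific entries below are assumed for illustration.

```python
def matmul(A, B):
    m, n = len(A), len(A[0])
    n2, p = len(B), len(B[0])
    assert n == n2, "column count of A must equal row count of B"
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            # c_ij = sum over k of a_ik * b_kj
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
    return C

A = [[-1, 4],          # 3 x 2 multiplicand
     [ 5, 3],
     [-2, 0]]
B = [[-3, -4],         # 2 x 2 multiplier
     [ 2,  1]]
print(matmul(A, B))    # [[11, 8], [-9, -17], [6, 8]]  -- a 3 x 2 product
```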
  • 51.
    © Art Traynor2011 Mathematics Systems Of Linear Equations Linear Algebra Linear Equation System a11 x1 + a12 x2 + a13 x3 = b1 a21 x1 + a22 x2 + a23 x3 = b2 a31 x1 + a32 x2 + a33 x3 = b3 Matrix-Vector Notation a11 a13 Ax = b  a21 a23 a31 a33 = a12 a22 a32 A x1 x2 x3 x b1 b2 b3 b
  • 52.
    © Art Traynor2011 Mathematics Systems Of Linear Equations Linear Algebra Partitioned Matrix Form (PMF) Ax = b  = A x b a11 a21 am1 . . . a12 a22 am2 . . . . . . . . . . . . . . . a1n a2n amn . . . x1 x2 xn . . . Ax = b  = ai1 b a11 a21 am1 . . . x1 ai2 a12 a22 am2 . . . + x2 + . . . + xn ain a1n a2n amn . . . Ax = b  = Ax b a11 x1 a21 x1 am1 x1 . . . + a12 x2 + + a22 x2 + + am2 x2 + . . . . . . . . . . . . . . . + a1n xn + a2n xn + amn xn . . . Ax = x1 a1 + x2 a2 + . . . + xn an = b
  • 53.
    © Art Traynor2011 Mathematics Section 2.1 Review Linear Algebra Section 2.1 Review  Introduce Three Basic Matrix Operations Matrix Addition Scalar Multiplication Matrix Multiplication
  • 54.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Properties Of Matrices – Addition & Scalar Multiplication Commutative (Addition) Associative (Addition)A +( B + C ) = ( A +B ) + C Changes Order of Operations as per “PEM-DAS”, Parentheses are the principal or first operation A +B = B +A Re-Orders Terms Does Not Change Order of Operations – PEM-DAS Associative (Multiplication)( cd ) A = c ( dA ) Distributive ( Scalar Over Matrix Addition )c ( A + B ) = c A + cB Distributive ( Scalar Addition Over Matrix Addition ) ( c + d ) A = c A + dA
  • 55.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Proofs Let A = [ aij ], B = [ bij ]  Introduce/Define/Declare The Constituents to be Proven This statement declares A & B to be Matrices Specifies the row & column count index variables
  • 56.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Properties Of Matrices – Identities & Zero Matrices Multiplicative Identity Multiplicative Zero Identity 1A = A Additive IdentityA + 0mx n = A Additive InverseA + ( – A ) = 0mx n c A = 0mx n if c = 0 or A = 0mx n
  • 57.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Properties Of Matrices – Matrix Multiplication Distributive ( LHS ) A( BC ) = ( AB ) C Order of Terms is preserved Affects Order of Operations Sequence – PEM-DAS Distributive ( RHS ) Associative ( Scalar Over Matrix Multiplication ) A( B + C ) = AB + AC ( A + B ) C = AC + BC c ( AB ) = ( c A )B = A ( c B ) Associative (Multiplication) Order of Terms is preserved Order of Terms is preserved Order of Terms is preserved Order of Terms is preserved AC = BC CA = CB ( C is invertible ) Right Cancellation Property A = B if then Left Cancellation Property
  • 58.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Properties Of Matrices – Proofs A( BC ) = ( AB ) C Order of Terms is preserved Affects Order of Operations Sequence – PEM-DAS Associative (Multiplication) Σk = 1 n T = [ tij ] = ( aik bkj )ckjΣk = 1 n Σi = 1 n ( yj )Σj = 1 n ( xi ) Σi = 1 n ( xi ) ( y1 + y2 +…+ yn –1 + yn ) ( x1 + x2 +…+ xn –1 + xn ) y1 + ( x1 + x2 +…+ xn –1 + xn ) y2 +… ( x1 + x2 +…+ xn –1 + xn ) yn –1 + ( x1 + x2 +…+ xn –1 + xn ) yn x1 y1 + x2 y1 +…+ xn –1 y1 + xn y1 + x1 y2 + x2 y2 +…+ xn –1 y2 + xn y2 +… x1 yn –1 + x2 yn –1 +…+ xn –1 yn –1 + xn yn –1 + x1 yn + x2 yn +…+ xn –1 yn + xn yn
  • 59.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Properties Of Matrices – Proofs A( BC ) = ( AB ) C Order of Terms is preserved Affects Order of Operations Sequence – PEM-DAS Associative (Multiplication) Σk = 1 n T = [ tij ] = ( aik bkj )ckjΣk = 1 n Σi = 1 n ( yj )Σj = 1 n ( xi ) Σi = 1 n ( xi ) ( y1 + y2 +…+ yn –1 + yn ) ( x1 + x2 +…+ xn –1 + xn ) y1 + ( x1 + x2 +…+ xn –1 + xn ) y2 +… ( x1 + x2 +…+ xn –1 + xn ) yn –1 + ( x1 + x2 +…+ xn –1 + xn ) yn x1 y1 + x2 y1 +…+ xn –1 y1 + xn y1 + x1 y2 + x2 y2 +…+ xn –1 y2 + xn y2 +… x1 yn –1 + x2 yn –1 +…+ xn –1 yn –1 + xn yn –1 + x1 yn + x2 yn +…+ xn –1 yn + xn yn Σi = 1 n Σj = 1 n xi yj
  • 60.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Properties Of Matrices – Proofs A = [ aij ] Amx n B = [ bij ] Bnx p then AB = [ cij ] = Σk = 1 n aik bkj = ai 1 b1 j + ai2 b2j +…+ ain –1 bn-1j + ain bnj Product Summation Operand Count Product Summation (Column-Row) Index ABmx p The entries of Row “ Aik” ( the i-th row ) are multiplied by the entries of “ Bkj” ( the j-th column ) and sequentially summed through Row “ Ain” and Column “ Bnj” to form the entry at [ cij ]  ai,1 b1, j + ai,1 b1, j +1 ai,1 b1,n – 1 + ai,1 b1,n ai+1,2 b2, j + ai+1,2 b2, j +1 ai+1,2 b2, n – 1 + ai+1,2 b2, n . . . . . . . . . + . . .+ + . . .+ . . . . . . an – 1,n – 1bn – 1, j + an – 1,n – 1bn – 1, j +1 an – 1,n – 1bn – 1, n – 1 + an – 1,n – 1bn an,nbn, j + an,n bn, j +1 an,n bn,n – 1 + an,n bn,n + . . .+ + . . .+ Section 1.2, (Pg. 42) For Each Element of AB (single entry)
  • 61.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Identity Matrix For Amx n  A In = A A Im = A Matrix Exponentiation For Ak = AA…A K factors
  • 62.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Transpose Matrix a11 a21 am1 . . . a12 a22 am2 . . . . . . . . . . . . . . . a1n a2n amn . . . A = a11 a12 a1n . . . a21 a22 a2n . . . . . . . . . . . . . . . am1 am2 amn . . . AT = 1 2 0 2 1 0 0 0 1 C = 1 2 0 2 1 0 0 0 1 CT = Symmetric Matrix: C = CT If C = [ cij ] is a symmetric matrix, cij = cji for i ≠ j C = [ cij ] is a symmetric matrix, Cmx n = CT nx p for m = n = p
  • 63.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra 1 2 0 2 1 0 0 0 1 C = 1 2 0 2 1 0 0 0 1 CT = If C = [ cij ] is a symmetric matrix, cij = cji , ∀ i,j | i ≠ j C = [ cij ] is a symmetric matrix, Cmx n = CT nx p , ∀ m,n, p | m = n = p Symmetric Matrix A Symmetric Matrix is a Square Matrix that is equal to its Transpose ( e.g. Cmx n = CT mx n , ∀ m,n | m = n) 
  • 64.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Properties Of Matrices – Transposes ( AT ) T = A Transpose of a Scalar Multiple ( A + B ) T = A T + B T ( c A ) T = c ( A T ) Transpose of a Transpose Transpose of Sum ( AB ) T = B T A T Transpose of a Product Reverse Order of Terms ( interchange multiplicand & multiplier terms in the product expression ) Symmetry of A Matrix & The Product of Its Transpose AAT = ( AAT ) T ATA is also symmetric
  • 65.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Matrix Inverse A matrix Anx n is Invertible or Non-Singular when ∃ matrix Bnx n | AB = BA = In In : Identity Matrix of Order n Bnx n : The Multiplicative Inverse of A A matrix that does not have an inverse is Non-Invertible or Singular Non-square matrices do not have inverses n For matrix products Amx n Bnx p where m ≠ n ≠ p, AB ≠ BA as [ aij ≠ bij ] ??  
  • 66.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Matrix Inverse There are two methods for determining an inverse matrix A-1 of A ( if an inverse exists): Solve Ax = In for X  Adjoin the Identity Matrix In (on RHS ) to A forming the doubly- augmented matrix [ A In ] and perform EROs concluding in RREF to produce an [ In A-1 ] solution  A test for determining whether an inverse matrix A-1 of A exists: Demonstrate that either/or AB = In = BA Section 2.3 (Pg. 64)
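A sketch of the adjoin-and-reduce method described above, reading the slides' 2×2 example as A = [[1, 4], [−1, −3]] and assuming SymPy for the row reduction: form the doubly augmented matrix [ A  I ], reduce to RREF, and read the inverse off the right-hand block.

```python
from sympy import Matrix, eye

A = Matrix([[ 1,  4],
            [-1, -3]])

augmented = A.row_join(eye(2))        # [ A | I ]
rref_form, _ = augmented.rref()       # -> [ I | A^-1 ] when A is invertible

A_inv = rref_form[:, 2:]              # right-hand 2x2 block
print(A_inv)                          # Matrix([[-3, -4], [1, 1]])
print(A * A_inv)                      # identity matrix, confirming A * A^-1 = I
```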
  • 67.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Matrix Inverse Uniqueness Property If A is invertible, then its inverse is Unique Notation: The inverse of A is denoted as A-1 If A is invertible, then the LE system represented by Ax = b has a Unique Solution given by x = A– 1 b 
  • 68.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Matrix Inverse – by Matrix Equation x11 x12 x21 x22 1 – 1 4 – 3 + = A x 1 0 0 1 In For a coefficient matrix Anx n the A-1 nx n matrix is that whose product yields a solution matrix to the corresponding In identity matrix  1x11 + – 1x21 + 4x21 ( – 3x21 ) = Ax 1x11 + – 1x21 + 4x21 ( – 3x21 ) 1 0 0 1 In 1x11 + 4x21 = 1 – 1x21 + ( – 3x21 ) = 0 1x11 + 4x21 = 0 – 1x21 + ( – 3x21 ) = 1  – 3 1 – 4 1 A-1Ax = In Ax = In
  • 69.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Matrix Inverse – by Gauss-Jordan Elimination x11 x12 x21 x22 1 – 1 4 – 3 + = A x 1 0 0 1 In An invertible coefficient Anx n matrix can be combined with its corresponding xnx n unknown/variable matrix to form an Axnx n = In equation matrix  This equation matrix is composed itself of identical coefficient column vectors  1 x11 + – 1 x21 + 4 x21 ( – 3 x21 ) = Ax 1 x11 + – 1 x21 + 4 x21 ( – 3 x21 ) 1 0 0 1 In
  • 70.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Matrix Inverse – by Gauss-Jordan Elimination An invertible coefficient Anx n matrix can be combined with its corresponding xnx n unknown/variable matrix to form an Axnx n = In equation matrix  This equation matrix is composed itself of identical coefficient column vectors  1 x11 + – 1 x21 + 4 x21 ( – 3 x21 ) = Ax 1 x11 + – 1 x21 + 4 x21 ( – 3 x21 ) 1 0 0 1 In 1x11 + 4x21 = 1 – 1x21 + ( – 3x21 ) = 0 1x11 + 4x21 = 0 – 1x21 + ( – 3x21 ) = 1 Ax = In Ax = In Rather than solve the two column equation vectors separately, they can be solved simultaneously by adjoining the identity matrix to the shared coefficient matrix 
  • 71.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Matrix Inverse – by Gauss-Jordan Elimination ( GJE ) An invertible coefficient Anx n matrix can be combined with its corresponding xnx n unknown/variable matrix to form an Axnx n = In equation matrix  This equation matrix is composed itself of identical coefficient column vectors  1 x11 + 4 x21 = 1 – 1 x21 + ( – 3 x21 ) = 0 1 x11 + 4 x21 = 0 – 1 x21 + ( – 3 x21 ) = 1 Ax = In Ax = In Rather than solve the two column equation vectors separately, they can be solved simultaneously by adjoining the identity matrix to the shared coefficient matrix…  1 – 1 4 – 3 A 1 0 0 1 In …then execute ERO’s to effect a GJ-Elimination of the “ doubly augmented ” [ A I ] matrix the conclusion of which will yield an [ I A-1 ] inverse matrix
  • 72.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Matrix Inverse – by Gauss-Jordan Elimination An invertible coefficient Anx n matrix can be combined with its corresponding xnx n unknown/variable matrix to form an Axnx n = In equation matrix  This equation matrix is composed itself of identical coefficient column vectors  1 x11 + 4 x21 = 1 – 1 x21 + ( – 3 x21 ) = 0 1 x11 + 4 x21 = 0 – 1 x21 + ( – 3 x21 ) = 1 Ax = In Ax = In The adjoined, “ doubly-augmented ” coefficient matrix , by means of ERO’s , is reduced by GJ-Elimination to produce the [ I A-1 ] inverse matrix  1 – 1 4 – 3 A 1 0 0 1 In  – 3 1 – 4 1 A-1 1 0 0 1 In Which is confirmed by verifying either of the following n AA-1 = I n AA-1 = A-1 A
  • 73.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Matrix Inverse – 2x2 Matrix ( Special Case ) For a square matrix A2x 2 , given by: The inverse A-1 of the root matrix A2x 2 is given by the following product: a c b d = ad – cb d – c – b a The difference of the diagonal products forms the multiplicand denominator of the matrix whose product yields the inverse of the root matrix 1 ad – cb A-1 = NegateSwitcheroo Abstract Algebra, Lecture 2 @ 18:30 The scalar multiple is the inverse of the root matrix Determinant! The “multiplier” matrix is a half-negated permutation of the root matrix!
  • 74.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Properties Of Inverse Matrices A : An Invertible Matrix k : A positive integer , Z+ c : A non-zero scalar, c ≠ 0 A– 1 Ak c A AT are Invertible and the following are true: ( A– 1 ) – 1 = A ( Ak ) – 1 = A– 1 A– 1 … A– 1 = ( A– 1 ) k K factors ( cA ) – 1 = A– 1 1 c ( AT ) – 1 = ( A– 1 ) T Aj Ak = Aj+k ( Aj ) k = Ajk ( AB ) – 1 = B– 1 A– 1 ( B is also invertible )
  • 75.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Properties of Matrix Exponentiation Aj Ak = Aj+k ( Aj ) k = Ajk
  • 76.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Elementary Matrices An Elementary Matrix, Anx n is: A square matrix ( n x n ) Obtained from a corresponding Identity Matrix In  Results from a single Elementary Row Operation ( ERO ) If E is an Elementary Matrix, then: E is obtained from an ERO on a corresponding Identity Matrix Im  EA is the product of the same ERO performed on an Am x n matrix Matrices Amx n & Bmx n are Row Equivalent when: $ a finite set of Elementary Matrices E1 , E2 ,… , Ek such that B = Ek Ek – 1 … E2 E1 A
  • 77.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Elementary Matrices - Properties If E is an Elementary Matrix then: E– 1 exists E– 1 is an Elementary Matrix A square matrix A is invertible if-and-only-if: A can be expressed as the product of elementary matrices Every Elementary Matrix has an inverse Matrix Equivalency conditions, for Anx n matrix: “ A ” is invertible Ax = b has a unique solution for every n x 1 column matrix b Ax = 0 has only the trivial solution “ A ” is row-equivalent to In  “ A ” can be written as the product of elementary matrices
  • 78.
    © Art Traynor2011 Mathematics Linear Algebra Definitions Upper & Lower Triangular Matrices a11 a21 a31 am1 . . . 0 a22 a32 am2 . . . 0 0 a33 am3 . . . . . . . . . . . . . . . . . . 0 0 0 amn . . . L For an Anx n square matrix: “ L ” is a lower triangular matrix where all entries above the Main Diagonal are zero, and only the lower half is populated with non-zero entries.  a11 0 0 0 . . . a12 a22 0 0 . . . a13 a23 a33 0 . . . . . . . . . . . . . . . . . . a1n a2n a3n amn . . . U “ U ” is an upper triangular matrix where all entries below the Main Diagonal are zero, and only the upper half is populated with non-zero entries. 
  • 79.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra LU Factorization A square matrix Anx n can be written as a product A = LU if: “ L ” is a lower triangular matrix where all entries above the Main Diagonal are zero, and only the lower half is populated with non-zero entries, and …  “ U ” is an upper triangular matrix where all entries below the Main Diagonal are zero, and only the upper half is populated with non-zero entries 
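A sketch of LU factorization assuming SciPy is available (not part of the original slides); scipy.linalg.lu returns a permutation matrix P and the triangular factors L (lower, unit diagonal) and U (upper) with A = P L U. The example matrix is invented for illustration.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

P, L, U = lu(A)
print(L)                            # lower triangular, ones on the diagonal
print(U)                            # upper triangular
print(np.allclose(A, P @ L @ U))    # True: the factors reproduce A
```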
  • 80.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Determinants Every square matrix Anx n can be associated with a real number defined as its Determinant  Notation: det ( A ) = | A | Example: 2-LE System with (2) unknowns a11 x1 + a12 x2 = b1 a21 x1 + a22 x2 = b2  yields solutions with common denominators x1 = ( b1 a22 – b2 a12 ) / ( a11 a22 – a21 a12 ) x2 = ( b2 a11 – b1 a21 ) / ( a11 a22 – a21 a12 ) Determinant of a 2 x 2 Matrix A = a11 a12 a21 a22 det ( A ) = | A | = a11 a22 – a21 a12 The Determinant is the difference of the products of the diagonals The Determinant is a polynomial of Order “ n ”
  • 81.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Determinants Every square matrix Anx n can be associated with a real number defined as its Determinant  Notation: det ( A ) = |a | a11 x1 + a12 x2 = b1 a21 x1 + a22 x2 = b2 Example: 2-LE System with (2) unknowns  yields solutions with common denominators b1 a22 – b2 a12 a11 a22 – a21 a12 x1 = b2 a11 – b1 a21 a11 a22 – a21 a12 x2 = Determinant of a 2 x 2 Matrix Determinant is the difference of the product of the diagonals The Determinant is a polynomial of Order “ n ” a c b d = ad – cb The Determinant is the Area (an n-Manifold ) of the parallelogram suggested by the addition of the vectors represented by the matrix
  • 82.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Minors & Cofactors For a square matrix Anx n  The Minor Mij of the entry aij is the determinant of the matrix obtained by deleting the ith row and jth column of A  The Cofactor Cij of the entry aij is Cij = ( – 1 )i+j Mij  Example: a11 a13 a21 a23 a31 a33 a12 a22 a32 Minor of a21 a11 a13 a21 a23 a31 a33 a12 a22 a32 Minor of a22 a12 a32 a13 a33 , M21 = a11 a31 a13 a33 , M22 = A Minor IS A DETERMINANT!!
  • 83.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Minors & Cofactors For a square matrix Anx n  The Cofactor Cij of the entry aij is Cij = ( – 1 )i+j Mij  Example: a11 a13 a21 a23 a31 a33 a12 a22 a32 Minor of a21 a11 a13 a21 a23 a31 a33 a12 a22 a32 Minor of a22 a12 a32 a13 a33 , M21 = a11 a31 a13 a33 , M22 = Cofactor of a21 Cofactor of a22 C21 = ( – 1 )2+1 M21 = – M21 C22 = ( – 1 )2+2 M22 = M22
  • 84.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Determinant of a Square Matrix For a square matrix Anx n of order n ≥ 2 , then: The Determinant of A is the sum of the entries in the first row of A multiplied by their respective Cofactors  det ( A ) = | A | = Σ j = 1 to n a1j C1j = a11 C11 + a12 C12 +…+ a1,n–1 C1,n–1 + a1n C1n The process of determining this sum is Expanding The Cofactors ( in the first row ) Section 3.1, (Pg. 106)
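A recursive sketch of the first-row cofactor expansion (plain Python; the 3×3 entries are assumed for illustration): det(A) = Σj a1j C1j with C1j = (−1)^(1+j) M1j, where the minor M1j deletes row 1 and column j.

```python
def minor(A, i, j):
    """Matrix A with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    # (-1)**j matches the sign (-1)^(1+j) for the 1-indexed column j+1
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

A = [[1, -2, 3],
     [2,  0, 1],
     [1,  1, 4]]
print(det(A))    # 19
```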
  • 85.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Expansion By Cofactors For a square matrix Anx n of order n , the determinant of A is given by: An ith row expansion det ( A ) = | A | = Σ j = 1 to n aij Cij = ai1 Ci1 + ai2 Ci2 +…+ ai,n–1 Ci,n–1 + ain Cin A jth column expansion det ( A ) = | A | = Σ i = 1 to n aij Cij = a1j C1j + a2j C2j +…+ an–1,j Cn–1,j + anj Cnj Section 3.1, (Pg. 107)
  • 86.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Determinants – 3x3 Matrix ( Special Case ) For a square matrix A3x 3 , the determinant of A is given by: The first two columns are adjoined to the RHS of the matrix a11 a13 a21 a23 a31 a33 a12 a22 a32 a11 a21 a31 a12 a22 a32 Product sums are formed by first multiplying along the main diagonal proceeding to the right a11 a13 a21 a23 a31 a33 a12 a22 a32 a11 a21 a31 a12 a22 a32 ➀ ➁ ➂ = a11a22 a33 + a12a23 a31 + a13a21 a32 = UD Upper Diagonal
  • 87.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Determinants – 3x3 Matrix ( Special Case ) For a square matrix A3x 3 , the determinant of A is given by: Remaining product differences are then formed by multiplying along the LHS bottom diagonal proceeding to the right  a11 a13 a21 a23 a31 a33 a12 a22 a32 a11 a21 a31 a12 a22 a32 ➃ ➄ ➅ = UD – a31a22 a13 – a32a23 a11 – a33a21 a12 = UD – LD Upper Diagonal minus Lower Diagonal
  • 88.
    © Art Traynor2011 Mathematics Linear Algebra Definitions Diagonal Matrix A matrix Anx n is said to be diagonal when all of the entries outside the main diagonal ( ↘ ) are zero Section 2.1, (Pg. 50) The matrix Dnx n is diagonal if : dij = 0 if i ≠ j "ij  { dii , di+1,i+1 ,…, dn–1,n–1 , dnn }   d11 d21 d31 dm1 . . . d12 d22 d32 dm2 . . . d13 d23 d33 dm3 . . . . . . . . . . . . . . . . . . d1n d2n d3n dmn . . . Dnx n = A matrix Anx n that is both upper AND lower triangular is said to be diagonal  The determinant of a triangular matrix Dnx n is the product of its main diagonal elements det ( D ) = |D | = aii = a11 a22 … a n –1, n –1 ain  Πi = 1 n
  • 89.
    © Art Traynor2011 Mathematics Linear Algebra Definitions EROs & Determinants (Properties) Permutation: [ A ]  Pij  [ B ] det ( B ) = – det ( A ) |B | = – | A |  Multiplication by a Scalar: [ A ]  cRi  [ B ] det ( B ) = c det ( A ) |B | = c | A |  Addition to a Row Multiplied by a Scalar: [ A ]  Ri + cRj  [ B ] det ( B ) = det ( A ) |B | = | A |  There are three “effects” to a resultant matrix which are unique to each of the three EROs Permutation Scalar Multiplication Row Addition
  • 90.
    © Art Traynor2011 Mathematics Linear Algebra Definitions Zero Determinants A matrix Anx n will feature a determinant of zero det ( A ) = 0 |A | = 0 if any of the following pertain  One row/column of “ A ” consists of all zeros Two rows/columns of “ A ” are equal One row/column of “ A ” is a multiple of another
  • 91.
    © Art Traynor2011 Mathematics Linear Algebra Definitions Determinant of a Matrix Product For matrices Anx n & Bnx n , of order “ n ” det ( AB ) = det ( A ) det ( B ) | AB | = | A | | B |  Determinant of a Scalar Multiple of a Matrix For matrix Anx n of order “ n ” , and Scalar “ c ” det ( cA ) = c^n det ( A ) | cA | = c^n | A |  Determinant of an Invertible Matrix For matrix Anx n A is invertible if-and-only-if det ( A ) ≠ 0 | A | ≠ 0  Factors are not row-column specific (for whatever reason??) An invertible matrix must have a non-zero determinant, otherwise one would be dividing by zero to obtain the inverse of the matrix (undefined)
  • 92.
    © Art Traynor2011 Mathematics Linear Algebra Definitions Determinant of an Inverse Matrix For an invertible matrix Anx n det ( A–1 ) = 1 / det ( A ) | A–1 | = 1 / | A | Determinant of a Transpose For matrix Anx n det ( A ) = det ( AT ) | A | = | AT |  An invertible matrix must have a non-zero determinant, otherwise one would be dividing by zero to obtain the inverse of the matrix (undefined)
  • 93.
    © Art Traynor2011 Mathematics Linear Algebra Definitions Equivalent Conditions For A Non-Singular Matrix For matrix Anx n , the following statements are equivalent “ A ” is invertible Ax = b has a unique solution for every n x 1 column matrix b Ax = 0 has only the trivial solution “ A ” is row-equivalent to In  “ A ” can be written as the product of elementary matrices det ( A ) ≠ 0 ; |A | ≠ 0
  • 94.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Adjoint of a Matrix For a square matrix Anx n  The Cofactor Cij of the entry aij is Cij = ( – 1 )i+j Mij  C11 C21 Cn1 . . . C12 C22 Cn2 . . . . . . . . . . . . . . . C1n C2n Cnn . . . Cofactor Matrix of A C11 C12 C1n . . . C21 C22 C2n . . . . . . . . . . . . . . . Cn1 Cn2 Cnn . . . adj ( A ) = Adjoint Matrix of A The transpose of the Cofactor Matrix Cij of “ A ” is Cij T 
  • 95.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Adjoint Equivalence with Matrix Inverse For invertible matrix Anx n , A– 1 is defined by A– 1 = adj ( A ) A– 1 = adj ( A )  1 det ( A ) 1 |A |
  • 96.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Cramer’s Rule Given a square matrix Anx n in “ n ” equations ( i.e. LE count = Airty ) a11 x1 + a12 x2 = b1 a21 x1 + a22 x2 = b2 2-LE System with (2) unknowns  yields solutions with common denominators b1 a22 – b2 a12 a11 a22 – a21 a12 x1 = b2 a11 – b1 a21 a11 a22 – a21 a12 x2 = which denominator forms the Determinant of the matrix “ A ” b1 b2 a12 a22 x1 = a11 a21 a12 a22 a11 a21 b1 b2 , x2 = a11 a21 a12 a22 , a11 a22 – a21 a12 ≠ 0 |A1 | = b1 b2 a12 a22 |A2 | = a11 a21 b1 b2 |A1 | |A | x1 = |A2 | |A | x2 =
  • 97.
    © Art Traynor2011 Mathematics Algebra Of Matrices Linear Algebra Cramer’s Rule Given a system of “ n ” linear equations in “ n ” variables ( i.e. LE count = Arity ) with coefficient matrix “ A ” and non-zero determinant | A | the solution of the system is given as:  x1 = | A1 | / | A | , x2 = | A2 | / | A | , … , xn = | An | / | A | where the ith column of Ai is the “ constant ” vector in the LE system Linear Equation System a11 x1 + a12 x2 + a13 x3 = b1 a21 x1 + a22 x2 + a23 x3 = b2 a31 x1 + a32 x2 + a33 x3 = b3 Example: x3 = | A3 | / | A | , where A3 is the coefficient matrix A with its third column replaced by the constant vector ( b1 , b2 , b3 )
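A sketch of Cramer's rule in plain Python, reusing a cofactor-expansion determinant and applied to the same system as the earlier row-reduction example (x − 2y + 3z = 9, −x + 3y = −4, 2x − 5y + 5z = 17); the helper names are illustrative.

```python
def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def cramer(A, b):
    """Solve A x = b by replacing each column of A with b in turn."""
    d = det(A)
    if d == 0:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    x = []
    for i in range(len(A)):
        Ai = [row[:i] + [bi] + row[i + 1:] for row, bi in zip(A, b)]
        x.append(det(Ai) / d)
    return x

A = [[ 1, -2, 3],
     [-1,  3, 0],
     [ 2, -5, 5]]
b = [9, -4, 17]
print(cramer(A, b))    # [1.0, -1.0, 2.0]  -- the same solution as the RREF example
```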
  • 98.
    © Art Traynor2011 Mathematics Topological Spaces Space Mathematical Space Mathematical Space A Mathematical Space is a Mathematical Object that is regarded as a species of Set characterized by:  Structure Heirarchy and Inner Product Spaces Normed Vector Spaces Vector Spaces Metric Spaces Subordinate Spaces ( Subspaces ) inherit the properties of Parent Spaces such that subordinate Subspaces are said to Induce their properties onto the parent spaces in a recursive fashion e.g. an Algebra or Algebraic Structure
  • 99.
    © Art Traynor2011 Mathematics Projective / Euclidean Mathematical Space Distance between two points is defined Distance is Undefined Space Mathematical Space Hierarchy Upper Level Classification Second Level Classification Euclidean / Non-Euclidean Finite Dimensional Infinite Dimensional Compact Non-Compact Second Level Classification N n , Z n , Q n , R n , C n , E n This slide is very slippery It really needs a deeper dive to achieve necessary cogency
  • 100.
    © Art Traynor2011 Mathematics MapFunction Morphism A Relation between a Set of inputs and a Set of permissible outputs whereby each input is assigned to exactly one output A Relation as a Function but endowed with a specific property of salience to a particular Mathematical Space A Relation as a Map with the additional property of Structure preservation as between the sets of its operation Structure A Set attribute by which several species of Mathematical Object are permitted to attach or relate to the Set which expand the enrichment of the Set  Space Mathematical Space Measure The manner by which a Number or Set Element is assigned to a Subset Algebraic Structure A Carrier Set defined by one or more Finitary Operations Field A non-zero Commutative Ring with Multiplicative Inverses for all non-zero elements (an Abelian Group under Multiplication)  
  • 101.
    © Art Traynor2011 Mathematics FMM A unique Relation between Sets Structure A Set attribute by which several species of Mathematical Object are permitted to attach or relate to the Set which expand the enrichment of the Set  Space Mathematical Space Measure The manner by which a Number or Set Element is assigned to a Subset Algebraic Structure A Carrier Set defined by one or more Finitary Operations Field A non-zero Commutative Ring with Multiplicative Inverses for all non-zero elements (an Abelian Group under Multiplication) Satisfies Group Axioms plus Commutativity Arithmetic Operations are defined ( +, – , x ,÷ ) Salient to a Mathematical Space Preserving of Structure FMM = Function~Map~Morphism Akin to the Holy Trinity Topology Those properties of a Mathematical Object which are invariant under Transformation or Equivalence Metric Space A Set for which distance between all Elements of the Set are defined The Triangle Inequality constitutes the principle Axiom from which three subsidiary axioms are derived F ≡ C R Q Z N
  • 102.
    © Art Traynor2011 Mathematics Topology Structure A Set attribute by which several species of Mathematical Object are permitted to attach or relate to the Set which expand the enrichment of the Set  Space Mathematical Space Manifold A Topologic Space resembling a Euclidean Space whose features may be charted to Euclidean Space by Map Projection Metric Space A Carrier Set defined by one or more Finitary Operations Riemann Manifold Order A Binary Set Relation exhibiting the Reflexive, Antisymmetric, and Transitive properties Equivalence Class Those properties of a Mathematical Object which are invariant under Transformation or Equivalence Surface of a Sphere is not a Euclidean Space! A Real Manifold enriched with an inner product on the Tangent Space varying smoothly at each point Geometry A Complete, Locally Homogenous, Reimann Manifold Scale Invariant - Exhibits Multiplicative Scaling Convergent A Binary Set Relation exhibiting the Reflexive, Symmetric, and Transitive properties
  • 103.
    © Art Traynor2011 Mathematics Topology Structure A Set attribute by which several species of Mathematical Object are permitted to attach or relate to the Set which expand the enrichment of the Set  Space Mathematical Space Manifold A Topologic Space resembling a Euclidean Space whose features may be charted to Euclidean Space by Map Projection Order A Binary Set Relation exhibiting the Reflexive, Antisymmetric, and Transitive properties Equivalence Class Those properties of a Mathematical Object which are invariant under Transformation or Equivalence Surface of a Sphere is not a Euclidean Space! A Binary Set Relation exhibiting the Reflexive, Symmetric, and Transitive properties Differential Structures A Structure on a Set rendering the Set into a Differential Manifold with n-dimensional Continuity defined by a CK Atlas of Bijection/Charts Categories Comprised of Object and Morphism Classes and Morphisms relating the Objects admitting Composition and satisfying the Associativity and Identity Axioms
  • 104.
    © Art Traynor2011 Mathematics FMM A unique Relation between Sets Structure A Set attribute by which several species of Mathematical Object are permitted to attach or relate to the Set which expand the enrichment of the Set  Space Mathematical Space Measure The manner by which a Number or Set Element is assigned to a Subset Salient to a Mathematical Space Preserving of Structure FMM = Function~Map~Morphism Akin to the Holy Trinity SurjectionInjective Functions One-to-One Onto Bijection Inversive One-to-One & Onto f : X  Y A Function which returns a CoDomain equivalent to the Domain of another Function returning that same CoDomain aka: Automorphism
  • 105.
    © Art Traynor2011 Mathematics Subspace Subspace ( General ) Mathematical Space Somewhat trivially, a mathematical Subspace is a Subset of a parent Mathematical Space which inherits and enriches the Structure of the superordinating Mathematical Space  Closed under the operation of Addition and Scalar Multiplication And which satisfy the ten axioms governing vector space elements Mathematical Space M Mathematical Space Given Mathematical Space “ M “① Example:
  • 106.
    © Art Traynor2011 Mathematics Subspace Subspace ( General ) Mathematical Space Somewhat trivially, a mathematical Subspace is a Subset of a parent Mathematical Space which inherits and enriches the Structure of the superordinating Mathematical Space  Closed under the operation of Addition and Scalar Multiplication And which satisfy the ten axioms governing vector space elements Mathematical Space M Mathematical Space Given Mathematical Space “ M ”① Example: Set Theory entails that at least two improper subspaces are constituent of the Space: the Empty Set and “ M ” itself Proof: •Let M be a Space over some field F. •Every Space must contain at least two elements: the empty set { } , and itself Ms P ( M ) Ms
  • 112.
    © Art Traynor2011 Mathematics Subspace Subspace ( General ) Mathematical Space Somewhat trivially, a mathematical Subspace is a Subset of a parent Mathematical Space which inherits and enriches the Structure of the superordinating Mathematical Space  Closed under the operation of Addition and Scalar Multiplication And which satisfy the ten axioms governing vector space elements Mathematical Space M Mathematical Space Given Mathematical Space “ M ”① Example: Set Theory entails that at least two improper subspaces are constituent of the Space: the Empty Set and “ M ” itself P ( M ) Ss ② We next introduce into this Power Set / Spanning space a well defined non-zero element “ S ” and at least one additional Structuring operation (∗) such that Si ∗ Si Fi  Ss Si ∗
  • 113.
    © Art Traynor2011 Mathematics Vector Space Vector Space ( General ) Mathematical Space Vector Spaces Vector Spaces Metric Spaces Topological Spaces A Vector Space V is a species of Set over a Field F of scalars (e.g. R or C ) whose constituent point elements can be uniquely characterized by an ordered tuple of n-dimension ( Vectors ) Structured the Superposition Principle ( and its derivative Linear Operations ):  Addition ( aka Additivity Property ) A function that assigns to the combination of any two or more elements of the space a resultant unique n-tuple ( Vector ) composed of the sum of the respective operand vector components. f ( 〈 a , b 〉 ) = 〈 an + bn 〉 = 〈 rn 〉 = r ( e.g. rn  R n ) f : V + V  V = a  V  f ( a )  V Closed under the operation of Addition and Scalar Multiplication And which satisfy the ten axioms governing vector space elements The simplest Vector Space is the space populated by only the Field itself, known as a Coordinate Space Vector Addition corresponds to the Motion of Translation
  • 114.
    © Art Traynor2011 Mathematics Addition Vectors Vector (General ) x y O initial point terminal point Free Vector r A  Sum of Vectors – Vector Addition (Tail –to–Tip) B a ( ax, ay ) b ( bx, by ) ║a ║ ║b ║  Any two (or more) vectors can be summed by positioning the operand vector (or its corresponding-equivalent vector) tail at the tip of the augend vector.  The summation (resultant) vector is then extended from (tail) the origin (tail) of the augend vector to the terminal point (tip) of the operand vector (tip-to-tip/head-to-head). ry rx r ( rx , ry ) θ “ Tail-to-Tip ” “ Tip-to-Tip ” Same procedure, sequence of operations whether for vector addition (summation) or vector subtraction (difference) Resultant is always tip-to-tip Operands are oriented “ tip-to-tail ” resultant vector is oriented “ tip-to- tip ” The resultant vector in a summation always originates at the displacement origin and terminates coincident at the terminus of the final displacement vector (e.g. tip-to-tip) Chump Alert: A vector summation is a species of Linear Comination
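A componentwise sketch of the tail-to-tip rule above: the resultant of a and b is simply the sum of corresponding components, and a scalar multiple scales each component. NumPy arrays are assumed here as the ordered-tuple representation; the numeric values are invented for illustration.

```python
import numpy as np

a = np.array([3.0, 1.0])
b = np.array([1.0, 2.0])

r = a + b                   # resultant: tail of b placed at the tip of a
print(r)                    # [4. 3.]
print(2.5 * a)              # scalar multiple scales each component: [7.5 2.5]
```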
  • 115.
    © Art Traynor2011 Mathematics Vector Space Vector Space ( General ) Mathematical Space Vector Spaces Vector Spaces Metric Spaces Topological Spaces A Vector Space V is a species of Set over a Field F of scalars (e.g. R or C ) whose constituent point elements can be uniquely characterized by an ordered tuple of n-dimension ( Vectors ) Structured by the following Linear Operations:  Addition f ( c 〈 an 〉 ) = 〈 can 〉 = 〈 rn 〉 = r ( e.g. rn  R n ) Scalar Multiplication A function that assigns to the combination of any element of the multiplicand field and any multiplier vector space element a resultant unique n-tuple ( Vector ) composed of the product of the respective multiplicand scalar and the constituent multiplier vector n-components supra. f : F x V  V = a  V  f ( a )  V Closed under the operation of Addition and Scalar Multiplication And which satisfy the ten axioms governing vector space elements
  • 116.
    © Art Traynor2011 Mathematics Vectors Vector (General ) PQ x y Position Vector O initial point Free Vector C  Vector Scalar Multiple – a species of Transformation where the CoDomain Set if positive effects an Expansion, or if negative effects a Dilation.  O C ( cax , cay ) A ( ax , ay ) c OA = OC terminal point Example: F = ma Vector Scalar Multiple Operands are oriented “ tip-to-tail ” with the multiplicand ( vector to be scaled ) “ scaled ” by the multiplier-scalar. The result constitutes a vector addition of the product of the scalar and the multiplicand normalized unit vector (NUV) thus preserving multiplicand orientation in the result c 〈 ax , ay 〉 = 〈 cax , cay 〉 Chump Alert: A vector scalar is a species of Linear Comination
  • 117.
    © Art Traynor2011 Mathematics Vector Space Vector Space ( General ) Mathematical Space Vector Spaces Vector Spaces Metric Spaces Topological Spaces A Vector Space V is a species of Set over a Field F of scalars (e.g. R or C ) whose constituent point elements can be uniquely characterized by an ordered tuple of n-dimension ( Vectors ) Structured by the following Linear Operations:  Addition Scalar Multiplication Closed under the operation of Addition and Scalar Multiplication And which satisfy the ten axioms governing vector space elements Vector/Linear (VL ) Spaces are said to be “Algebraic” VL Space operations define figures (subspaces?) such as lines and planes The Dimension of a VL Space is determined by the maximal number of Linear Independent variables (identical to the minimal number of vectors that Span the space) Additional Structure apart from that characterizing general Vector Space is needed to define Nearness, Angles, or Distance A Vector Space V is a species of Set over a Field F
  • 118.
    © Art Traynor2011 Mathematics Vector Space Vector Space ( General ) Mathematical Space Vector Spaces Vector Spaces Metric Spaces Topological Spaces A Vector Space V is a species of Set over a Field F of scalars (e.g. R or C ) whose constituent point elements can be uniquely characterized by an ordered tuple of n-dimension ( Vectors ) Structured by the following Linear Operations:  Addition Scalar Multiplication Closed under the operation of Addition and Scalar Multiplication And which satisfy the ten axioms governing vector space elements Vector Spaces are said to be “Linear” spaces (as distinct from Topological Spaces) Vector/Linear (VL ) Spaces are said to be “Algebraic” VL Space operations define figures (subspaces?) such as lines and planes The Dimension of a VL Space is determined by the maximal number of Linear Independent variables (identical to the minimal number of vectors that Span the space) Additional Structure apart from that characterizing general Vector Space is needed to define Nearness, Angles, or Distance
  • 119.
    © Art Traynor2011 Mathematics Vector Space Vector Space ( General ) Mathematical Space Vector Spaces Vector Spaces Metric Spaces Topological Spaces A Vector Space V is a species of Set over a Field F of scalars (e.g. R or C ) whose constituent point elements can be uniquely characterized by an ordered tuple of n-dimension ( Vectors ) Structured by the following Linear Operations:  Addition Scalar Multiplication Closed under the operation of Addition and Scalar Multiplication And which satisfy the ten axioms governing vector space elements The essential Structure of a Vector Space enables transformations of its elements that correspond to classes of Motion Vector Addition (as well as Scalar Multiplication, which is by extension repeated vector addition) corresponds to Translation Translation is classed as one of three species of Rigid Motion the other two Rotation and Reflection require additional Structure A Vector is understood to represent a difference (Displacement ) between the respective values of its constituent ordered tuples
  • 120.
© Art Traynor 2011 Mathematics Vectors Vector Properties Properties of Scalar Multiplication If v represents any element of a vector space V ( v ∈ V ) and c represents any scalar, then the following properties pertain:  0( v ) = 0 – Scalar Zero Element  c( 0 ) = 0 – Scalar Multiple of the Zero Vector  ( – 1 )( v ) = – v – Negation ( the additive inverse as a scalar multiple )  If c( v ) = 0 , then c = 0 or v = 0 – Zero Product Property
  • 121.
    © Art Traynor2011 Mathematics Vector Space Axioms – Addition Abstraction Mathematical Space Vector Space A vector space is comprised of four elements: a set of vectors, a set of scalars, and two operations:  u + v ( is in V ) Closure Under Addition ( u + v ) + w = u + ( v + w ) Changes Order of Operations as per “PEM-DAS”, Parentheses are the principal or first operation u + v = v + u Commutative Property of Addition Re-Orders Terms Does Not Change Order of Operations – PEM-DAS Associative Property of Addition u + 0 = u Additive Identity u + ( – u ) = 0 Additive Inverse If V is a vector space then $ 0 | "u  V " u  V , $ – u | "u  V Operations: Addition & Scalar Mult. Section 4.2, (Pg. 155) Represents “ 0 = – 2 ” , a contradiction, and thus no solution {  } to the LE system for which the augmented matrix stands Note that there is nothing in these axioms that entails Length/Distance or Magnitude of Vectors, nor corresponding attributes such as Angle or Nearness
  • 122.
    © Art Traynor2011 Mathematics Multiplicative Identity cu ( is in V ) Closure Under Scalar Multiplication c ( u + v ) = cu + cv Distributive ( c + d )u = cu + du Distributive c( du ) = ( cd )u Associative Property of Multiplication 1( u ) = u Vector Space Axioms – Scalar Multiplication Abstraction A vector space is comprised of four elements: a set of vectors, a set of scalars, and two operations:  Operations: Addition & Scalar Mult. Section 4.2, (Pg. 155) Let c = 0 and you therefore don’t need to state a separate scalar multiplicative zero element Mathematical Space Vector Space Note that there is nothing in these axioms that entails Length/Distance or Magnitude of Vectors, nor corresponding attributes such as Angle or Nearness
  • 123.
© Art Traynor 2011 Mathematics Normed Vector Space Normed Vector Space Mathematical Space " Length " ( aka Distance / Magnitude ) is one type of norm; the vector norm ║x║ is more formally defined as the ℓ2-norm. Somewhat trivially, a Normed Vector Space is a Vector Space Structured by a Norm.  A norm is defined as a Mathematical Structure of the species of a Measure. Inner Product Spaces Normed Vector Spaces Vector Spaces There are several species of Norm. A Norm on a vector space V is a function that maps a vector a in V to a non-negative real scalar, f : V → R , a ∈ V ↦ ║a║ ∈ R  Magnitude ( Euclidean Norm, L2 Norm, or ℓ2-Norm, i.e. " Length " ): ║a║ = ║〈 ax , ay 〉║ = √( ax² + ay² )  p-Norm: ║a║p = ( Σ i = 1..n | ai |^p )^( 1/p )
  • 124.
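A small Python sketch of the two norms named above (an illustrative addition, not from the deck): the ℓ2 norm is the p = 2 case of the general p-norm ║a║p = ( Σ | ai |^p )^( 1/p ).

import math

def p_norm(v, p=2):
    # General p-norm; p = 2 gives the Euclidean (l2) norm, i.e. "length".
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

a = (3.0, 4.0)           # illustrative vector
print(p_norm(a))         # 5.0  (l2 norm / magnitude)
print(p_norm(a, p=1))    # 7.0  (l1 norm, another species of norm)
print(math.hypot(*a))    # 5.0  (library check of the l2 case)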
© Art Traynor 2011 Mathematics Magnitude Vectors Vector ( General ) Aka: Geometric or Spatial Vector From the Latin Vehere ( to carry ) [ Figure: Position Vector a = 〈 ax , ay 〉 in PVF ( Position Vector Form ), and the Unit Circle in UCF ( Unit Circle Form ) with P ( cos θ, sin θ ), r = c ( hyp ) = 1 ] In PVF the magnitude of a vector a = 〈 ax , ay 〉 is equivalent to the hypotenuse ( c = ║a║ ) of a right triangle whose adjacent side is given by the coordinate ax and whose opposite side is given by the coordinate ay : ║a║ = ║〈 ax , ay 〉║ = √( ax² + ay² ) ( Pythagorean Theorem derived )
  • 125.
    © Art Traynor2011 Mathematics Inner Product Space Inner Product Space ( IPS ) An Inner Product Space is a Vector Space over a Field of Scalars (e.g. R or C ) Structured by an Inner Product  Inner Product Spaces Normed Vector Spaces For a Euclidean Space the Inner Product is defined as the Dot Product Positive-Definite Symmetric Bilinear Form Length/Distance or Magnitude of Vectors This Structure moreover defines IPS salients such as: Vector Subtended Angle Orthogonality of Vectors These spaces have a well-ordered semantic construction of the form: A Vector Space with an Inner Product “on” it… “ The IPS of conventional multiplication over the field of R ” “ The IPS of the dot product over the field of R ” Mathematical Space
  • 126.
© Art Traynor 2011 Mathematics Inner Product Space ( IPS ) Axioms Inner Product Spaces Normed Vector Spaces For a Euclidean Space the Inner Product is defined as the Dot Product, a summation of the scalar products of the vector components, a = 〈 ax , ay 〉 , b = 〈 bx , by 〉. Note here the possibility of describing the dot product as an equivalence class for its alternate expressions. Let V be a vector space over F , a Field of Scalars ( e.g. R or C ). An Inner Product on V is a function that assigns, to every pair of vectors a and b in V , a scalar in F : f ( a , b ) = 〈 a , b 〉 = ax bx + ay by = r ( e.g. r ∈ R ) Inner Product Space Mathematical Space
  • 127.
    © Art Traynor2011 Mathematics Inner Product Axioms Given vectors u , v , and w in Rn , and scalars c , the following axioms pertain: 〈 u , v 〉 = 〈 v , u 〉 Symmetry 〈 u , v + w 〉 = 〈 u , v 〉 + 〈 u , w 〉 Additive Linearity Positive Definiteness Inner Product Space Mathematical Space c 〈 u , v 〉 = 〈 cu , v 〉 Multiplicative Linearity 〈 v , v 〉 ≥ 0 〈 v , u 〉 〈 v , v 〉 = 0 if and only if v = 0 Section 5.2, (Pg. 237)
  • 128.
    © Art Traynor2011 Mathematics Vectors x y Position Vector O initial point terminal point Free Vector A B O A ( ax , ay ) B ( bx , by ) ║a ║ ║b ║ Vector ( Euclidean )  Dot Product The dot product of two vectors is the scalar summation of the product Aka: Geometric or Spatial Vector From the Latin Vehere (to carry) PVF: Position Vector Form UCF: Unit Circle Form a · b = ax bx + ay by of their components, a = < ax , ay > , b = < bx , by > Also referred to as the Scalar Product or Inner Product Pythagorean Theorem derived Inner (Dot) Product
  • 129.
    © Art Traynor2011 Mathematics Vectors x y Position Vector O initial point terminal point Free Vector A B O A ( ax , ay ) B ( bx , by ) ║a ║ ║b ║ Vector ( Euclidean )  Dot Product & Angle Between Vectors For any two non-zero vectors sharing a common initial point the dot product of the two vectors is equivalent to the product of their magnitudes and the cosine of the angle between Aka: Geometric or Spatial Vector From the Latin Vehere (to carry) Inner (Dot) Product θ θ a · b = ax bx + ay by a · b = ║b ║║a ║ cosθ cosθ = ║a ║║b ║ a · b You will be asked to find the angle between two vectors sharing a common initial point (origin)…a lot θ = cos– 1 ║a ║║b ║ a · b
  • 130.
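A brief Python sketch of the relation just stated, cos θ = ( a · b ) / ( ║a║║b║ ) (illustrative, not from the deck; the vector values are arbitrary).

import math

def angle_between(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(dot / (na * nb))   # assumes non-zero vectors

a, b = (1.0, 0.0), (1.0, 1.0)
print(math.degrees(angle_between(a, b)))   # 45.0 (approximately)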
    © Art Traynor2011 Mathematics Vectors Aka: Geometric or Spatial Vector From the Latin Vehere (to carry) x y Position Vector O Free Vector Physical Quantities represented by vectors include: Displacement, Velocity, Acceleration, Momentum, Gravity, etc. O A ( ax , ay ) B ( bx , by ) a b c A B θ θ ║a ║ cos θ  Dot Product & Angle Between Vectors For any two non-zero vectors sharing a common initial point the dot product of the two vectors is equivalent to the product of their magnitudes and the cosine of the angle between Vector ( Euclidean ) a · b = ax bx + ay by a · b = ║b ║║a ║ cosθ = a · b OB – OA = AB Inner (Dot) Product
  • 131.
    © Art Traynor2011 Mathematics Vectors Aka: Geometric or Spatial Vector From the Latin Vehere (to carry) x y Position Vector O Free Vector Physical Quantities represented by vectors include: Displacement, Velocity, Acceleration, Momentum, Gravity, etc. O A ( a1, a2 ) B ( b1x , by ) a b c A B θ θ ║a ║ cos θ ║b ║ Area = ║b ║║a ║ cosθ = a · b  Dot Product & Angle Between Vectors For any two non-zero vectors sharing a common initial point the dot product of the two vectors is equivalent to the product of their magnitudes and the cosine of the angle between Vector ( Euclidean ) a · b = ax bx + ay by OB – OA = AB Inner (Dot) Product
  • 132.
    © Art Traynor2011 Mathematics Vectors x y Position Vector O A ( ax , ay ) B ( bx , by ) a b θ  Vector Component as Projection Vector ( Euclidean ) Inner (Dot) Product ② ③ ④ The intersection of any two vectors with common origin will feature a shared angle ( the “Angle Between” ). ① ② In Position Vector Form (PVF), the vector system can be aligned so that the vector common origin coincides with a coordinate system origin and one of the vectors (the Multiplier vector “ b ”) can then be aligned along the x-coordinate axis ③ In this orientation the Multiplicand vector “ a ” (if the angle between is acute) will terminate in the first quadrant of the coordinate system. O A B θ Free Vector ① ④ Note that the X-component of a (i.e. ax) is geometrically equivalent to a vertical projection from a onto the X-axis and b ax
  • 133.
    © Art Traynor2011 Mathematics Vectors x y Position Vector O A ( ax , ay ) B ( bx , by ) a b θ  Vector Component as Projection Vector ( Euclidean ) Inner (Dot) Product ⑤ Recalling the trigonometric relationships of the Unit Circle, it can be further noted that the X-component of a (i.e. ax) – previously noted to be geometrically equivalent to a vertical projection from a onto the X-axis and b – is also geometrically equivalent to the product of the length of a (its Magnitude) and the cosine of the angle formed with the x-axis ax 2 2 ax + ay║a ║ = ║〈 ax , ay 〉║ = ax = ║a ║ cosθ ║b ║ 1 compb a = a · b O U ! ! cos θ r = 1 = c r = c (hyp ) tan θ sin θ θ x y a (adj ) b (opp ) Unit Circle
  • 134.
    © Art Traynor2011 Mathematics Vectors x y Position Vector O A ( ax , ay ) B ( bx , by ) a b θ  Vector Component as Projection Vector ( Euclidean ) Inner (Dot) Product ⑤ Recalling the trigonometric relationships of the Unit Circle, it can be further noted that the X-component of a (i.e. ax) – previously noted to be geometrically equivalent to a vertical projection from a onto the X-axis and b – is also geometrically equivalent to the product of the length of a (its Magnitude) and the cosine of the angle formed with the x-axis ax 2 2 ax + ay║a ║ = ║〈 ax , ay 〉║ = Another way to express this geometrical equivalence is to note the inherent relationship between the lengths of two vectors in composition sharing a common origin and the angle between the two supplied by the inner (dot) product relationship xO θ a (adj ) b (opp ) r = c (hyp ) M A (1, 0) P ( cos θ, sin θ ) 1 tan θ (r|c|1) cos θ Q sin θ y Unit Circle a OA = OP = AP = t = θ = 1
  • 135.
    © Art Traynor2011 Mathematics Vectors Aka: Geometric or Spatial Vector From the Latin Vehere (to carry) x y Position Vector O Physical Quantities represented by vectors include: Displacement, Velocity, Acceleration, Momentum, Gravity, etc. A ( a1, a2 ) B ( b1x , by ) a b c θ ║a ║ cos θ ║b ║ Area = ║b ║║a ║ cosθ  Vector Component as Projection For any two non-zero vectors sharing a common initial point the dot product of the two vectors is equivalent to the product of their magnitudes and the cosine of the angle between Vector ( Euclidean ) Inner (Dot) Product a · b = ║b ║║a ║ cosθ
  • 136.
© Art Traynor 2011 Mathematics Vectors Aka: Geometric or Spatial Vector From the Latin Vehere ( to carry ) x y Position Vector O The notion of " component along " is a direct consequence of the definition of an inner ( dot ) product – relating the two " sides " of a vector " triangle " via a ratio given by the cosine of the " angle between " O A ( ax , ay ) B ( bx , by ) a b c A B θ ║a║ cos θ  Vector Component Along an Adjoining Vector Vector ( Euclidean ) The component of OA along OB that has the same direction as OB is the dot product of OA with the unit vector û = u / ║u║ : compb a = a · ( b / ║b║ ) = ( a · b ) / ║b║ Dot Product
  • 137.
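A short Python sketch of the component-along formula above, compb a = a · ( b / ║b║ ) = ( a · b ) / ║b║ (illustrative values, not from the deck).

import math

def comp_along(a, b):
    # Scalar component of a along b: dot a with the unit vector b / ||b||.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / math.sqrt(sum(y * y for y in b))

a = (3.0, 4.0)
b = (1.0, 0.0)            # b lies along the x-axis
print(comp_along(a, b))   # 3.0, i.e. ax, the projection of a onto the x-axis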
© Art Traynor 2011 Mathematics Vectors Vector ( Euclidean )  Dot Product ( Matrix Product Form ) The dot product of two vectors is the matrix product of the 1 x n transpose of the multiplicand vector and the n x 1 multiplier vector a = [ a1 , a2 , a3 , … , an ] T ( column vector ) b = [ b1 , b2 , b3 , … , bn ] T ( column vector ) a · b = aTb = a1b1 + a2b2 + a3b3 + … + anbn Inner ( Dot ) Product
  • 138.
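A small numpy sketch of the matrix-product form aTb described above (illustrative, assuming numpy is available; the values are arbitrary).

import numpy as np

a = np.array([[1.0], [2.0], [3.0]])   # n x 1 column vector
b = np.array([[4.0], [5.0], [6.0]])   # n x 1 column vector

print(a.T @ b)                               # [[32.]]  (1 x 1 product a^T b)
print(float(np.dot(a.ravel(), b.ravel())))   # 32.0     (same scalar via np.dot)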
    © Art Traynor2011 Mathematics Vector Spaces – Rn Vectors Tuple Properties An “ n-tuple” is characterized by the following: A sequence An ordered list Comprising “ n ” elements ( n is a non-negative integer) Canonical “ n-tuples ” 0-tuple: null tuple 1-tuple: singleton 2-tuple: ordered pair 3-tuple: triplet
  • 139.
    © Art Traynor2011 Mathematics Vector Spaces – Rn Vectors Vector “ n-tuple ” Representation An ordered n-tuple represents a vector in n-space Section 4.1, (Pg. 149) n Of the form ( ai , ai+1 ,…an – 1 , an ) n The Set of all n-tuples is n-space, denoted by Rn n An n-tuple can be rendered as a point in Rn whose coordinates describe a unique vector a n n-tuples delineate Rn such that all points in Rn can be represented by a unique n-tuple Tuple Properties
  • 140.
© Art Traynor 2011 Mathematics Vector Spaces – Rn Vectors " n-tuple " distinguished from a set Tuples of disparate n-order are not equal, ( 1, 2, 3, 2 ) ≠ ( 1, 2, 3 ), whereas the same sequences expressed as elements of a set are equal, { 1, 2, 3, 2 } = { 1, 2, 3 }  Tuple elements are ordered: ( 1, 2, 3 ) ≠ ( 3, 2, 1 ) whereas for a set { 1, 2, 3 } = { 3, 2, 1 }  A tuple is composed of a finite population of elements whereas a set may contain infinitely many elements  Tuple: Order ( Sequence ) Matters Set: Order ( Sequence ) Does Not Matter Tuple Properties
  • 141.
    © Art Traynor2011 Mathematics Vector Spaces – Rn Vectors Tuples as Functions An n-tuple can be rendered as a function “ F ” the domain of which is represented by the tuple’s element index/indices or “ X ” the codomain of which is represented by the tuple’s elements or “ Y ” X = { i , i + 1 ,…, n – 1 , n } ( ai , ai+1 ,…an – 1 , an ) = ( X , Y , F ) ( a1 , a2 ,…, an – 1 , an ) = ( X , Y , F ) X = { 1 , 2 ,…, n – 1 , n } or or Y = { a1 , a2 ,…, an –1 , an } F = { ( 1, a1 ) , ( 2, a2 ) ,…, ( n – 1, an – 1 ) , ( n, an ) } Tuple Properties
  • 142.
    © Art Traynor2011 Mathematics Definition Vectors Vector ( Euclidean ) A geometric object (directed line segment) describing a physical quantity and characterized by Direction: depending on the coordinate system used to describe it; and Magnitude: a scalar quantity (i.e. the “length” of the vector) Aka: Geometric or Spatial Vector originating at an initial point [ an ordered pair : ( 0, 0 ) ] and concluding at a terminal point [ an ordered pair : ( a1 , a2 ) ] Other mathematical objects describing physical quantities and coordinate system transforms include: Pseudovectors and Tensors  Not to be confused with elements of Vector Space (as in Linear Algebra)  Fixed-size, ordered collections  Aka: Inner Product Space  Also distinguished from statistical concept of a Random Vector From the Latin Vehere (to carry) or from Vectus…to carry some- thing from the origin to the point constituting the components of the vector 〈 a1 , a2 〉
  • 143.
    © Art Traynor2011 Mathematics Vectors Vector ( Euclidean ) Aka: Geometric or Spatial Vector From the Latin Vehere (to carry)  Vector – Properties ( PVF Form) Each (position) vector determines a unique Ordered Pair ( a1 , a2 ) The coordinates a1 and a2 form the Components of vector 〈 a1 , a2 〉 x y Position Vector ║a ║ O θ A ( a1, a2 ) initial point terminal point a a1 a2   Position Vector  A vector represented in PVF is Unique n There is precisely one free-vector equivalent in PVF: a = OA n The unique ordered pair describing the vector is a unique n-tuple in Rn
  • 144.
    © Art Traynor2011 Mathematics Vector Standard Operations in Rn Vectors Sum of Vectors: u + v u + v = ( u1+ v1 , u2+ v2 , … , un –1 + vn –1 , un + vn ) Given u = ( ui , ui+1 ,…un – 1 , un ) and v = ( vi , vi+1 ,…vn – 1 , vn ) Scalar Multiple of Vectors: cu cu = ( cu1 , cu2 , … , cun –1 , cun ) Vector Operations
  • 145.
© Art Traynor 2011 Mathematics Addition Vectors Vector ( Euclidean ) x y O initial point terminal point Free Vector r A  Sum of Vectors – Vector Addition ( Tail-to-Tip ) B a ( ax, ay ) b ( bx, by ) ║a║ ║b║  Any two ( or more ) vectors can be summed by positioning the tail of the operand vector ( or its corresponding-equivalent vector ) at the tip of the augend vector.  The summation ( resultant ) vector is then extended from the tail ( origin ) of the augend vector to the terminal point ( tip ) of the operand vector, so that the tips coincide ( "tip-to-tip" ). ry rx r ( rx , ry ) θ " Tail-to-Tip " " Tip-to-Tip " The same procedure and sequence of operations applies for vector addition ( summation ) and vector subtraction ( difference ); the resultant is always tip-to-tip. Operands are oriented " tip-to-tail "; the resultant vector is oriented " tip-to-tip ". The resultant vector in a summation always originates at the displacement origin and terminates coincident with the terminus of the final displacement vector ( i.e. tip-to-tip ). Chump Alert: A vector summation is a species of Linear Combination
  • 146.
© Art Traynor 2011 Mathematics Vector Standard Operations in Rn Vectors Vector Operations Difference of Vectors: u – v u – v = ( u1 – v1 , u2 – v2 , … , un –1 – vn –1 , un – vn ) Given u = ( ui , ui+1 ,…un – 1 , un ) and v = ( vi , vi+1 ,…vn – 1 , vn ) Negation ( Scalar Multiple with c = – 1 ): – u = ( – 1 )u = ( – u1 , – u2 , … , – un –1 , – un )
  • 147.
© Art Traynor 2011 Mathematics Subtraction Vectors Vector ( Euclidean ) x y O ry rx θ initial point terminal point Free Vector r = a + bcorr a b O " Tail-to-Tip " " Tip-to-Tip " ( Addition ) bcorr – bcorr " Tip-to-Tip " ( Difference ) Position Vector r = a – bcorr Difference of Vectors – Vector Subtraction ( Tail-to-Tip )  Any two ( or more ) vectors can be subtracted by positioning the tail ( initial point ) of a corresponding-equivalent subtrahend vector at the tip ( terminal point ) of the minuend vector.  The difference ( resultant ) vector is then extended from the tail ( initial point ) of the minuend vector ( tail-to-tail ) to the terminal point ( tip ) of the subtrahend vector ( tip-to-tip ). minuend subtrahend The same procedure and sequence of operations applies for vector addition ( summation ) and vector subtraction ( difference ); the resultant is always tip-to-tip r r = a + bcorr bcorr – bcorr a b Operands are oriented " tip-to-tail "; the resultant vector is oriented " tip-to-tip " Chump Alert: A vector difference is a species of Linear Combination
  • 148.
    © Art Traynor2011 Mathematics Vector Properties – Additive Identity & Additive Inverse Vectors Vector Properties Given vectors u , v , and w in Rn , and scalars c and d, the following properties pertain  0v = 0 Scalar Zero Element If u + v = v then u = 0 Additive Identity is Unique If v + u = 0 then u = – v Additive Inverse is Unique c0 = 0 Scalar Multiplicative Identity of Zero Vector If cv = 0 then c = 0 or v = 0 Zero Vector Product Equivalence – ( – v ) = v Negation Identity Section 4.1, (Pg. 151)
  • 149.
    © Art Traynor2011 Mathematics Vector Spaces – Classification Real Number Vector Spaces R = set of all real numbers R2 = set of all ordered pairs R3 = set of all ordered triplets Rn = set of all n-tuple Matrix Vector Spaces Mm,n = set of all m x n matrices Mn,n = set of all n x n square matrices Section 4.2, (Pg. 157) Vector Spaces Vector Space
  • 150.
    © Art Traynor2011 Mathematics Vector Spaces Vector Spaces – Classification Polynomial Vector Spaces P = set of all polynomials Pn = set of all polynomials of degree ≤ n Continuous Functions ( Calculus ) Vector Spaces C ( – ∞ , ∞) = set of all continuous functions defined on the real number line  C [ a, b ] = set of all continuous functions defined on a closed interval [ a, b ]  Section 4.2, (Pg. 157) Vector Space
  • 151.
    © Art Traynor2011 Mathematics Vector Subspaces Subspace Definition A non-empty subset W ( W ≠  ) of a vector space V is a subspace of V when the following conditions pertain:  W is a vector space under addition in V W is a vector space under scalar multiplication in V Subspace Test For a non-empty subset W ( W ≠  ) of a vector space V, W is a subspace of V if-and-only if the following pertain:  If u and v are in W, then u + v is in W If u is in W and c is any scalar, then cu is in W Zero Subspace W = { 0 } Section 4.3, (Pg. 162) Section 4.3, (Pg. 162) Section 4.3, (Pg. 163) Vector Space
  • 152.
    © Art Traynor2011 Mathematics Vector Subspaces W 1 Polynomial functions W 5 Functions W 2 Differentiable functions W 3 Continuous functions W 4 Integrable functions W5 = Vector Space " f Defined on [ 0, 1 ] W4 = Set " f Integrable on [ 0, 1 ] W3 = Set " f Continuous on [ 0, 1 ] W2 = Set " f Differentiable on [ 0, 1 ] W2 = Set " Polynomials Defined on [ 0, 1 ] W1  W2  W3  W4  W5 W 1 – Every Polynomial function is Differentiable W1  W2 W 2 – Every Differentiable function is Continuous W2  W3 W 3 – Every Continuous function is Integrable W3  W4 W 4 – Every Integrable function is a Function W4  W5 Function Space Section 4.3, (Pg. 164) Vector Space
  • 153.
    © Art Traynor2011 Mathematics Vector Subspaces U V W V  W Properties of Scalar Multiplication If V & W are both subspaces of a vector space U, then the intersection of V & W ( V  W ) is also a subspace of U  Vector Space
  • 154.
    © Art Traynor2011 Mathematics Linear Combination of Vectors ( Definition ) A vector v in a vector space V with scalars c = ( ci , ci+1 ,…cn – 1 , cn ) is a Linear Combination of the vectors ( ui , ui+1 ,…un – 1 , un ) expressed as:  v = ci ui + ci+1 ui+1 +…+ cn –1 un – 1 + cn un Section 4.1, (Pg. 152) Section 4.4, (Pg. 169) Linear Combination Example: S = { ( 1 , 3 , 1 ) , ( 0 , 1 , 2 ) , ( 1 , 0 , – 5 ) } v1 v2 v3 v1 = 3v2 + v3 v1 = 3( 0 , 1 , 2 ) + (1 , 0 , – 5 ) v1 = ( 0 , 3 , 6 ) + (1 , 0 , – 5 ) v1 = ( 0 + 1 ) , ( 3 + 0 ) , ( 6 – 5 ) = ( 1 , 3 , 1 ) V1 can be expressed as a combination of components of the other two vectors in the set S $ c  F | ( cv2 v3 )  v1 $ c  F | ( cv2  v3 )  v1 Chump Alert: Each of the Vector Space operations (e.g. summation, difference, scalar multiplications) is a species of Linear Combination Vector Space
  • 155.
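A quick Python check of the slide's example (assuming numpy is available; the vectors are those of the set S above): v1 is a linear combination of v2 and v3 with scalars 3 and 1.

import numpy as np

v1 = np.array([1, 3, 1])
v2 = np.array([0, 1, 2])
v3 = np.array([1, 0, -5])

print(np.array_equal(v1, 3 * v2 + v3))   # True: v1 = 3*v2 + v3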
    © Art Traynor2011 Mathematics Spanning Set of a Vector Space Given S = { vi , vi+1 ,…vk – 1 , vk }, a subset of vector space V the set S is a Spanning Set of V when every vector in V can be written as a Linear Combination of vectors in S, “ S spans V ”  x = ci vi + ci+1 vi+1 +…+ cn –1 vn – 1 + cn vn Section 4.1, (Pg. 152) Section 4.4, (Pg. 169) Section 4.4, (Pg. 171) Span Sounds analogous to the power set of a vector equaton? The set of unit vectors S = { i, j, k } are the minimal spanning set (Basis) for Rn Vector Space P ( S ) = { , S, cn û } Vector Space
  • 156.
    © Art Traynor2011 Mathematics Vectors Vector ( Euclidean ) Aka: Geometric or Spatial Vector From the Latin Vehere (to carry) x y Position Vector O θ A ( ax , ay )  Unit Vector ( Components ) Any vector in PVF can be expressed as a scalar product of the vector sum of its unit ( multiplicative scalar identity ) components î = 〈 1, 0 〉 , ĵ = 〈 1, 0 〉 a ĵ î x y Position Vector O θ A ( ax , ay ) ay ĵ ax î a a = ax î + ay ĵ PVF: Position Vector Form c ( î ) = 〈 c1, c0 〉 , c ( ĵ ) = 〈 c 0, c 1 〉 ax ( î ) = 〈 ax 1, ax 0 〉 , ay ( ĵ ) = 〈 ay 0, ay 1 〉 Unit Vector The set of unit vectors S = { i, j, k } are the minimal spanning set (Basis) for Rn Vector Space
  • 157.
© Art Traynor 2011 Mathematics Vectors Vector ( Euclidean ) Aka: Geometric or Spatial Vector Aka: Versor ( Cartesian ) x y Position Vector O θ  Normalized Unit Vector A normalized unit vector ( NUV ) is the vector of unitary magnitude corresponding to the set of all vectors which share its direction û = a / ║a║ = ( 1 / ║a║ ) 〈 ax , ay 〉  Any vector can be specified by the scalar product of its corresponding normalized unit vector and its magnitude: a = ax î + ay ĵ = ║a║ û  The NUV of a vector is the scalar product of the vector and the reciprocal of its magnitude ĵ î û Normalized Unit Vector
  • 158.
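A minimal Python sketch of normalization (illustrative values, not from the deck): û = a / ║a║, and a is recovered as ║a║ û.

import math

a = (3.0, 4.0)                           # illustrative vector
mag = math.sqrt(sum(x * x for x in a))   # ||a|| = 5.0
u_hat = tuple(x / mag for x in a)        # (0.6, 0.8), unit magnitude

print(u_hat)
print(tuple(mag * u for u in u_hat))     # recovers a (up to rounding)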
    © Art Traynor2011 Mathematics Span of a Set Given S = { vi , vi+1 ,…vk – 1 , vk }, is a set of vectors in a vector space V with scalars c = ( ci , ci+1 ,…cn – 1 , cn ) then the span of S is the set of all Linear Combinations of the vectors in S,  span ( S ) = { ci vi + ci+1 vi+1 +…+ ck –1 vik– 1 + ck vk } Section 4.4, (Pg. 172) The span of S is denoted: span ( S ) or span { vi , vi+1 ,…vk – 1 , vk } When span ( S ) = V, it is said that: V is spanned by { vi , vi+1 ,…vk – 1 , vk } or S spans V Span P ( S ) = { , S, cn û } The set of unit vectors S = { i, j, k } are the minimal spanning set (Basis) for Rn Vector Space Vector Space
  • 159.
    © Art Traynor2011 Mathematics Span ( S ) as Subspace of V Span Given S = { vi , vi+1 ,…vk – 1 , vk }, is a set of vectors in a vector space V then the span of S span ( S ) or span { vi , vi+1 ,…vk – 1 , vk } is a Subspace of V  Section 4.4, (Pg. 172) The span of S denoted span ( S ) or span { vi , vi+1 ,…vk – 1 , vk } is the smallest Subspace of V containing Ssuch that every other Subspace of V containing S Must also contain span ( S ) It is not sufficiently obvious from this that the minimal cardinality of Spanning Set corresponds precisely to the dimension of the space The set of unit vectors S = { i, j, k } are the minimal spanning set (Basis) for Rn Vector Space P ( S ) = { , S, cn û } W = span ( S ) = { Σ ci vi | k  N , vi  S , ci  F }i = 1 k V Sk – i SiSi W Si Si F = {ci , ci+1…ck – 1 , ck } Vector Space
  • 160.
© Art Traynor 2011 Mathematics Linear Independence Linear Independence Section 4.4, (Pg. 173) Given S = { vi , vi+1 ,…vk – 1 , vk }, within vector space V over a field of scalars c = ( ci , ci+1 ,…cn – 1 , cn ) a vector equation of the form ci vi + ci+1 vi+1 +…+ ck –1 vk – 1 + ck vk = 0 expresses Linear Dependence if the solution set includes at least one non-zero solution; if the vector equation admits only the trivial solution 0 = ( ci , ci+1 ,…cn – 1 , cn ) it is said to express Linear Independence  By setting it equal to the zero vector, are we looking for solutions to a homogeneous system? Example: S = { ( 1 , 3 , 1 ) , ( 0 , 1 , 2 ) , ( 1 , 0 , – 5 ) } v1 v2 v3 0 = c1v1 + c2v2 + c3v3 0 = c1 ( 1 , 3 , 1 ) + c2 ( 0 , 1 , 2 ) + c3 ( 1 , 0 , – 5 ) ci = { 1 , – 3 , – 1 } 0 = ( 1 , 3 , 1 ) + ( 0 , – 3 , – 6 ) + ( – 1 , 0 , 5 ) 0 = ( 1 + 0 – 1 , 3 – 3 + 0 , 1 – 6 + 5 ) = ( 0 , 0 , 0 ) It is not sufficiently obvious from this that the cardinality of the maximum set of linearly independent vectors corresponds precisely to the dimension of the space
  • 161.
© Art Traynor 2011 Mathematics Linear Independence Linear Independence Section 4.4, (Pg. 173) Given S = { vi , vi+1 ,…vk – 1 , vk }, within vector space V over a field of scalars c = ( ci , ci+1 ,…cn – 1 , cn ) a vector equation of the form ci vi + ci+1 vi+1 +…+ ck –1 vk – 1 + ck vk = 0 expresses Linear Dependence if the solution set includes at least one non-zero solution; if the vector equation admits only the trivial solution it is said to express Linear Independence  By setting it equal to the zero vector, are we looking for solutions to a homogeneous system? Example: S = { ( 1 , 3 , 1 ) , ( 0 , 1 , 2 ) , ( 1 , 0 , – 5 ) } v1 v2 v3 0 = c1v1 + c2v2 + c3v3 0 = c1 ( 1 , 3 , 1 ) + c2 ( 0 , 1 , 2 ) + c3 ( 1 , 0 , – 5 ) ci = { 1 , – 3 , – 1 } 0 = ( 1 )v1 + ( – 3 )v2 + ( – 1 )v3 0 = ( 1 , 3 , 1 ) + ( 0 , – 3 , – 6 ) + ( – 1 , 0 , 5 ) 0 = ( 1 + 0 – 1 , 3 – 3 + 0 , 1 – 6 + 5 ) = ( 0 , 0 , 0 ) It is not sufficiently obvious from this that the cardinality of the maximum set of linearly independent vectors corresponds precisely to the dimension of the space
  • 162.
    © Art Traynor2011 Mathematics Linear Independence Linear Independence Section 4.4, (Pg. 173) Given S = { vi , vi+1 ,…vk – 1 , vk }, within vector space V over a field of scalars c = ( ci , ci+1 ,…cn – 1 , cn ) a vector equation of the form ci vi + ci+1 vi+1 +…+ ck –1 vik– 1 + ck vk = 0 expresses Linear Dependence if the solution set includes at least one non-zero solution and if the vector equation features only the trial solution 0 = ( ci , ci+1 ,…cn – 1 , cn ) it is said to express Linear Independence  Example: A Visitor to New York City asks directions to Carnegie Hall. He is instructed to proceed 3-blocks North then 4-blocks East. you are here 3N 4E 5NE These two directions are sufficient to allow him to reach his destination The system of vectors is Linearly Independent and corresponds to the dimension of the space (which would be elsewise if he needed to go to the 6th Floor)  Adding that the destination is 5-blocks Northeast renders the system of vectors Linearly Dependent (as one of the vectors can be expressed as a Linear Combination of the other two).  Vector Space
  • 163.
    © Art Traynor2011 Mathematics Linear Independence Linear Independence Example: A Visitor to New York City asks directions to Carnegie Hall. He is instructed to proceed 3-blocks North then 4-blocks East. you are here 3N 4E 5NE These two directions are sufficient to allow him to reach his destination The system of vectors is Linearly Independent and corresponds to the dimension of the space (which would be elsewise if he needed to go to the 6th Floor)  Adding that the destination is 5-blocks Northeast renders the system of vectors Linearly Dependent (as one of the vectors can be expressed as a Linear Combination of the other two).  Vector Space Ban the Hypotenuse! It is not necessary. Any triangle can be described completely by a & b. C is not necessary
  • 164.
© Art Traynor 2011 Mathematics Linear Independence Test Linear Independence Section 4.4, (Pg. 174) For S = { vi , vi+1 ,…vk – 1 , vk }, constituting a set of vectors within vector space V , to determine the linear independence of S:  ci vi + ci+1 vi+1 +…+ ck –1 vk – 1 + ck vk = 0  State the set as a homogeneous equation equivalent to a sum of vector products ( of a scalar coefficient and the respective constituent vector components )  Perform GJE ( Gauss-Jordan Elimination ) to determine if the system has a Unique Solution  Where only the trivial solution ci = 0 will satisfy the system, the Set S is demonstrated to be Linearly Independent; where the system is also satisfied by one or more non-trivial solutions, the Set S is demonstrated to be Linearly Dependent Vector Space
  • 165.
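A short numpy sketch of the test (an illustration, not the deck's own procedure): stack the vectors of the earlier example set S as columns and compare the matrix rank with the number of vectors; equality means only the trivial solution exists.

import numpy as np

S = [np.array([1, 3, 1]), np.array([0, 1, 2]), np.array([1, 0, -5])]
A = np.column_stack(S)

rank = np.linalg.matrix_rank(A)
print(rank)              # 2
print(rank == len(S))    # False -> linearly dependent (indeed v1 = 3*v2 + v3)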
    © Art Traynor2011 Mathematics Linear Dependence & Linear Combination Linear Independence Section 4.4, (Pg. 176) For S = { vi , vi+1 ,…vk – 1 , vk }, k ≥ 2 constituting a set of vectors within vector space V , the set can be demonstrated to be Linear Dependent if-and-only-if   At least one element vector of the set can be expressed as a linear combination of any of the other vectors in the set For S = { v , u } constituting a set of vectors within vector space V , the set can be demonstrated to be Linear Dependent if-and-only-if   One element vector of the set can be expressed as a scalar multiple of the other Vector Space
  • 166.
    © Art Traynor2011 Mathematics Basis Criteria Basis Section 4.5, (Pg. 180) For S = { vi , vi+1 ,…vk – 1 , vk }, constituting a set of vectors within vector space V , the set can be demonstrated to form a Basis for the vector set if   S spans V  S is Linear Independent Infinite Dimensional examples: Vector Space P ( all polynomials ) Vector Space C ( all continuous functions)  Finite Dimensional examples: Zero Vector { 0 }  Standard Basis for an n x n matrix features a diagonal populated with ones with all other entries occupied by zeros  Vector Space
  • 167.
    © Art Traynor2011 Mathematics Basis Representation - Uniqueness Basis Section 4.5, (Pg. 182) For S = { vi , vi+1 ,…vk – 1 , vk }, constituting a set of vectors within vector space V , and forming a basis for that vector space, then Every element vector can only be expressed as a unique Linear Combination of the constituent vectors   If there were more than one, their difference would yield the zero vector which would violate the Basis criteria requiring Linear Independence Vector Space
  • 168.
    © Art Traynor2011 Mathematics Basis Cardinality Basis Section 4.5, (Pg. 184) For S = { vi , vi+1 ,…vn – 1 , vn }, constituting a set of vectors within vector space V , and forming a basis for that vector space with precisely “ n ” vectors, then Every basis for V will include precisely “ n ” vectors  Basis Cardinality - Maximum For S = { vi , vi+1 ,…vn – 1 , vn }, constituting a set of vectors within vector space V , and forming a basis for that vector space, then any set in Rn with k vectors, where k > n will be Linear Dependent  Section 4.5, (Pg. 183) Vector Space Easier way to think of it… Refer to the “Standard Basis” for the vector space (e.g. R3) Any alternative Basis must then include precisely that many vectors
  • 169.
    © Art Traynor2011 Mathematics Vector Space Dimension Vector Space Dimension Section 4.5, (Pg. 185) For S = { vi , vi+1 ,…vk – 1 , vk }, constituting a set of vectors within vector space V , and forming a basis for that vector space with precisely “ n ” vectors, then The number “ n ” is denoted as the Dimension of V , or dim ( V )   dim ( Rn ) = n  dim ( Pn ) = n + 1  dim ( Mm,n ) = m · n To determine the Dimension of a Subspace W of vector space V  Identify a set S of Linear Independent vectors that Span subspace W  The set S thus identified is a Basis for the subset W  The dimension of subspace W is thus the count ( or cardinality ) of the vectors in the Basis
  • 170.
© Art Traynor 2011 Mathematics Vector Space Dimension Vector Space Dimension Section 4.5, (Pg. 185) To determine the Dimension of a Subspace W of vector space V  Identify a set S of Linearly Independent vectors that Span subspace W  The set S thus identified is a Basis for the subspace W  The dimension of subspace W is thus the count ( or cardinality ) of the vectors in the Basis Example: W = { ( d , c – d , c ) } Inspection reveals that this subspace has precisely two free parameters: c and d ( d , c – d , c ) = c ( 0 , 1 , 1 ) + d ( 1 , – 1 , 0 ) Wc = { c ( 0 , 1 , 1 ) } Wd = { d ( 1 , – 1 , 0 ) } W = { c ( 0 , 1 , 1 ) + d ( 1 , – 1 , 0 ) } The Dimension of the subspace is thus two, because W is spanned by the set S = { ( 0 , 1 , 1 ) , ( 1 , – 1 , 0 ) }, which can be shown to be Linearly Independent and thus a Basis for W
  • 171.
    © Art Traynor2011 Mathematics Basis Test for an n-Dimensional Space Basis Section 4.5, (Pg. 186) For a vector space V of Dimension n if S = { vi , vi+1 ,…vk – 1 , vk }, constitutes a set of Linear Independent vectors in V, then:   S is a Basis for V Vector Space  If S spans V then S is a basis for V
  • 172.
© Art Traynor 2011 Mathematics Subspace Spans of Vector Representations Vector Representations Section 4.6, (Pg. 189) Matrices For a matrix A of m-rows and n-columns Am x n Row Vectors of A: ( a11 , a12 , … a1n ) , ( a21 , a22 , … a2n ) , … , ( am1 , am2 , … amn ) Column Vectors of A: the n columns [ a1j , a2j , … , amj ] T , j = 1 … n The Row Space ( a subspace of Rn ) is spanned by the row vectors of A The Column Space ( a subspace of Rm ) is spanned by the column vectors of A
  • 173.
    © Art Traynor2011 Mathematics Row Space for Row Equivalent Matrices Matrix Sub Spaces Section 4.6, (Pg. 190) Another Row-Equivalent m-by-n matrix Bm x n will share the same Row Space with Amx n  Vector Space Row Space Basis Section 4.6, (Pg. 190) For a matrix A of m-rows and n-columns Amx n Another Row-Equivalent m-by-n matrix Bm x n expressed in Row Echelon Form ( REF ) will feature non-zero Row Vectors constituting a Basis for Amx n  For a matrix A of m-rows and n-columns Amx n Row/Column Space Dimensional Equivalence Section 4.6, (Pg. 192) Both Row Space and Column Space share the same value for their respective Dimensions  For a matrix A of m-rows and n-columns Amx n
  • 174.
    © Art Traynor2011 Mathematics Matrix Rank Matrix Rank Section 4.6, (Pg. 193) The Dimension of a Row or Column Space defines the Rank of the Matrix, denoted rank ( A )  Vector Space For a matrix A of m-rows and n-columns Amx n
  • 175.
    © Art Traynor2011 Mathematics Matrix Nullspace and Nullity Nullspace Section 4.6, (Pg. 194) The solution set for the system forms a subspace of Rn designated as the Nullspace of A and denoted as N ( A )  Vector Space For a homogenous linear system Ax = 0 where A is an matrix of m-rows and n-columns Amx n , x = [ xi , xi+1 ,…xn – 1 , xn ] T is a column vector of unknowns 0 = [ 0 , 0 , … 0 , 0 ] T is the zero vector in Rm N ( A ) = { x  Rn | Ax = 0 } The Dimension of the Nullspace of A is designated as its Nullity dim ( N ( A ) )
  • 176.
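An illustrative sympy sketch using the same matrix as the worked example that follows (Section 4.6, Example 7): nullspace(), rank(), and the length of the nullspace basis give a basis for N(A), the rank, and the nullity.

from sympy import Matrix

A = Matrix([[1, 2, -2, 1],
            [3, 6, -5, 4],
            [1, 2,  0, 3]])

print(A.nullspace())       # [Matrix([-2, 1, 0, 0]), Matrix([-3, 0, -1, 1])]
print(A.rank())            # 2  (rank)
print(len(A.nullspace()))  # 2  (nullity = dim N(A))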
    © Art Traynor2011 Mathematics 1 3 1 2 6 2 – 2 – 5 0 1 4 3 Standard Coefficient Matrix Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix A = Standard Matrix Form (SMF) is that arrangement of matrix elements in which constituent rows are populated with individual expressions (equations) constituting the linear system (of which the matrix is a representation), and in which the columns are arrayed such that each is populated by a distinct “unknown” (variable) the entries of which are populated by their individual coefficients. Section 6.3, (Pg. 314) Section 4.6, (Pg. 195), Example 7 Designate each row with an Uppercase Alpha Character…this will allow the Elementary Row Operations (EROs) to be performed to be described in a summarized algebraic fashion. ① 1 3 1 2 6 2 – 2 – 5 0 1 4 3 A1 B1 C1
  • 177.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 Utilize Permutation to manipulate the simplest (most easily reduced) rows into the primary and higher positions in the matrix 1 3 1 2 6 2 – 2 – 5 0 1 4 3 A1 B1 C1 ② 3 1 6 2 – 5 0 4 3 B1 C1 The book prefers Row Three to be in the first position, so we’ll begin by permuting Rows One and Three A ⇌ C1 1 2 – 2 1A1 Once “fixed” at a value of one (1) circle this entry as an established pivot
  • 178.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 Continued…② 3 1 6 2 – 5 0 4 3 B1 C1 1 2 – 2 1A1 A ⇌ B1 3 1 6 2 – 5 0 4 3 B1 C1 1 2 – 2 1A1 Row Three appears to present a very simple reduction (by simply scaling it by a factor of negative one), so it too should be permuted with Row Two for clarity and ease of succeeding EROs operations. A ⇌ B1
  • 179.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 From this established “Pivot” move down to the next row and render the entry there into a “zero” using EROs 1 ③ So now we perform our first real operation on the system. It can be noted “by inspection” that Row Two scaled by a factor of – 1 can make quick work of yielding zeros just where we want them – 1A1 = A1 3 6 2 – 5 0 4 3 B1 C1 1 2 – 2 1A1 – 1 – 2 2 – 1– 1A1 1 2 0 3C1 0 0 2 2A1 Adding the scaled Row Two to Row One we get a cleaned up replacement for Row Two in the next evolution of the GJE reduction C2 – 1A1 = A1
  • 180.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 Continued… 1 ③ So now we perform our first real operation on the system. It can be noted “by inspection” that Row Two scaled by a factor of – 1 can make quick work of yielding zeros just where we want them – 1A1 = A1 3 6 2 – 5 0 4 3 B1 C1 1 2 – 2 1A1 – 1 – 2 2 – 1– 1A1 1 2 0 3C1 0 0 2 2A1
  • 181.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 Continued…③ 1 3 6 2 – 5 0 4 3 B1 C1 0 0 2 2A1 Inspection suggests that Row one scaled by a factor of – 3 would allow for a handy reduction of Row Three into the desired zero element in the first column position B – 3C1 = B1 – 3 – 6 0 – 9– 3C1 0 0 – 5 – 5B1 3 6 – 5 4B1
  • 182.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 Having “zeroed-out” the first column, we proceed to the right… 1 2 0 3C1 0 0 2 2A1 0 0 – 5 – 5B1 Rows Two and Three can be dispatched with a scaling by their product 5A1 = A2 2B1 = B2 0 0 10 10A2 0 0 – 10 – 10B2 A simple summation of Rows Two and Three will thereby reduce this matrix to a very tame state A2 + B2 = B3 0 0 0 0B3 ④
  • 183.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 Continued… 1 2 0 3C1 0 0 0 0B3 0 0 10 10A2 Finally we note that Row 2 can be reduced by a common factor ( 10 ) to yield a maximally simplified row A2 = A3 1 10 1 2 0 3C1 0 0 0 0B3 0 0 1 1A3 Arrived!! ④
  • 184.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 Now we look to parameterizing the system. We look for coefficients ≠ 1 to prefer for selection so as to maximally simplify substitution back into the system. 1 2 0 3 0 0 0 0 0 0 1 1 x1 + 2x2 + 2x3 + 3x4 = 0 System of Linear Equations x1 + 2x2 + 1x3 + 1x4 = 0 ⑤ x1 + 2s2 + 2x3 + 3t4 = 0 x1 + 2x2 + 1x3 + 1t4 = 0 x1 + 2x2 + 1x3 + 1x2 = s x1 + 2x2 + 1x3 + 1x4 = t B =
  • 185.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 Continued… 1 2 0 3 0 0 0 0 0 0 1 1 System of Linear Equations⑤ x1 + 2s2 + 2x3 + 3t4 = 0 x1 + 2x2 + 1x3 3 + 1t4 = 0 x1 + 2x2 + 1x3 + 1x2 = s x1 + 2x2 + 1x3 + 1x4 = t x1 + 2s2 + 2x3 + 3t4 = – 2s – 3t x1 + 2s2 + 2x3 + x1 = – 2s – 3t x1 + 2s2 + 2x3 + x3 = – t B =
  • 186.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 Now we re-configure the system into Partitioned Matrix Form (PMF) arraying the “unknowns” (variables) and the parameterized solutions into respective column vectors 1 2 0 3 0 0 0 0 0 0 1 1 System of Linear Equations x1 + 2s2 + 2x3 + 3t4 = 0 x1 + 2x2 + 1x3 3 + 1t4 = 0 x1 + 2x2 + 1x3 + 1x2 = s x1 + 2x2 + 1x3 + 1x4 = t x1 + 2s2 + 2x3 + 3t4 = – 2s – 3t x1 + 2s2 + 2x3 + x1 = – 2s – 3t x1 + 2s2 + 2x3 + x3 = – t ⑥ – 2s 1s 0 – 3t 0t – 1t 0 1t x = – 2 1 0 – 3 0 – 1 0 1 = s + t B =
  • 187.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 1 2 0 3 0 0 0 0 0 0 1 1 – 2s 1s 0 – 3t 0t – 1t 0 1t x = – 2 1 0 – 3 0 – 1 0 1 = s + t B = N ( A ) = { ( – 2 , 1, 0, 0 ) , ( – 3 , 0, – 1, 1 ) } The Nullspace of the matrix thus forms a basis, synonymous with the Solution Space of the homogenous system Ax = 0 , ALL solutions of which are Linear Combinations of these two vectors 1 3 1 2 6 2 – 2 – 5 0 1 4 3 Standard Coefficient Matrix A = “ Row Equivalent” ( REF Basis Matrix ) v1 v2 v3 w1 w2
  • 188.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 – 2s 1s 0 – 3t 0t – 1t 0 1t x = – 2 1 0 – 3 0 – 1 0 1 = s + t dim ( N ( A ) ) = 2 ; i.e. { w1 , w2 } The Dimension of the Nullspace is equivalent to the cardinality of the non-zero “Row Equivalent” REF Basis matrix row constituents, which is otherwise known as the Nullity of the Matrix 1 3 1 2 6 2 – 2 – 5 0 1 4 3 Standard Coefficient Matrix A = v1 v2 v3 1 2 0 3 0 0 0 0 0 0 1 1B = “ Row Equivalent” ( REF Basis Matrix ) w1 w2
  • 189.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 – 2s 1s 0 – 3t 0t – 1t 0 1t x = – 2 1 0 – 3 0 – 1 0 1 = s + t The Rank and Nullity of the Solution Space “Row Equivalent” REF Basis matrix can be determined by the number of columns featuring leading “ ones ” ( i.e. two, the Rank , columns 1 & 3 ) and those columns which correspond to the free variable column vectors ( i.e. two, the Nullity , columns 2 & 4 ) . 1 3 1 2 6 2 – 2 – 5 0 1 4 3 Standard Coefficient Matrix A = v1 v2 v3 1 2 0 3 0 0 0 0 0 0 1 1B = “ Row Equivalent” ( REF Basis Matrix ) w1 w2
  • 190.
    © Art Traynor2011 Mathematics Nullspace Vector Space Matrix Nullspace, Rank, and Nullity Example: find the nullspace of the matrix Section 4.6, (Pg. 195), Example 7 – 2s 1s 0 – 3t 0t – 1t 0 1t x = – 2 1 0 – 3 0 – 1 0 1 = s + t For a matrix A of m-rows and n-columns Amx n then, the total number of columns ( n ) is the sum of the Rank and Nullity of the Solution Space “Row Equivalent” REF Basis matrix ( i.e. Rank two, columns 1 & 3 and Nullity two, columns 2 & 4 ) . 1 3 1 2 6 2 – 2 – 5 0 1 4 3 Standard Coefficient Matrix A = v1 v2 v3 1 2 0 3 0 0 0 0 0 0 1 1B = “ Row Equivalent” ( REF Basis Matrix ) w1 w2
  • 191.
© Art Traynor 2011 Mathematics Solution Space Dimension – Matrix Metrics Solution Space Section 4.6, (Pg. 196) Vector Space For a matrix A composed of m-rows and n-columns Am x n , of Rank " r " , the Dimension of the Solution Space of Ax = 0 is the difference of the matrix column cardinality " n " less the matrix Rank " r " , or n – r  The column cardinality " n " is thus equal to n = rank ( A ) + nullity ( A ) or n = rank ( A ) + dim ( N ( A ) )
  • 192.
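A one-line sympy check of the relation n = rank ( A ) + nullity ( A ), reusing the 3 x 4 example matrix above (illustrative; n = 4 columns).

from sympy import Matrix

A = Matrix([[1, 2, -2, 1],
            [3, 6, -5, 4],
            [1, 2,  0, 3]])

print(A.cols == A.rank() + len(A.nullspace()))   # True: 4 == 2 + 2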
    © Art Traynor2011 Mathematics Solution Space for Non-Homogenous LE Systems ( nHLES ) Solution Space Section 4.6, (Pg. 197) Vector Space For a matrix A composed of m-rows and n-columns Amx n , the non-homogenous system Ax = b with particular solution xp can be expressed in the form x = xp + xh where Ax = 0 represents the associated homogenous system ( HLES ) with solution xh The zero vector 0 = { 0 , 0 , … 0 , 0 } cannot be a solution to an nHLES therefore the set of all solution vectors to the nHLES cannot structure a subspace. 
  • 193.
    © Art Traynor2011 Mathematics 1 3 1 0 1 2 – 2 – 5 0 1 0 – 5 Augmented Coefficient Matrix Solution Space Vector Space Particular Solution of a Non-Homogenous LE System ( nHLES ) Example: Find the set of all solution vectors of the LE system Section 4.6, (Pg. 197), Example 9 5 8 – 9 1x1 + 0x2 – 2x3 + 1x4 = 5s 3x1 + 1x2 – 5x3 + 0x4 = 8s 1x1 + 2x2 – 0x3 – 5x4 = – 9s Non-Homogenous System of Linear Equations ( nHLES) A = v1 v2 v3  x1 x2 x3 x4 b Section 6.3, (Pg. 314) The set of all solution vectors to an nHLES of the form Ax = b entails expressing the system as a matrix A composed of m-rows and n- columns Amx n , arrayed in augmented Standard Matrix Form ( aSMF ). ① The aSMF matrix features Row Vectors { v i , v i + 1 ,…vn – 1 , v n } corresponding to the constituent equations of the nHLES. Column Vectors in the aSMF are arrayed to collect each distinct “ unknown ” ( variable ) term { xi , x i + 1 ,…xn – 1 , x n } with each entry populated by their individual coefficients and adjoined by the solution vector of constants { b }.
  • 194.
    © Art Traynor2011 Mathematics 1 3 1 0 1 2 – 2 – 5 0 1 0 – 5 Augmented Coefficient Matrix Solution Space Vector Space Particular Solution of a Non-Homogenous LE System Example: Find the set of all solution vectors of the LE system A = Section 4.6, (Pg. 197), Example 9 Designate each row with an Uppercase Alpha Character…this will allow the Elementary Row Operations (EROs) of the Gauss-Jordan Elimination (GJE) to be applied to be described in a summarized algebraic fashion. 5 8 – 9 1 3 1 0 1 2 – 2 – 5 0 1 0 – 5 5 8 – 9 A1 B1 C1 ② v1 v2 v3 x1 x2 x3 x4 b Gauss-Jordan Elimination (GJE) is an algorithmic scheme applied to a Standard Matrix Form (SMF) representation of a system of Linear Equations resulting in a “row- equivalent” reduced matrix on which the main diagonal entries are all “ones” (pivots in Row Echelon Form - REF) and all entries above and below the “pivots” are populated by “zeros” or Reduced Row Echelon Form (RREF). Section 1.2, (Pg. 19)
  • 195.
    © Art Traynor2011 Mathematics Vector Space Matrix Nullspace, Rank, and Nullity Investigate permutation as a strategy to manipulate the simplest (most easily reduced) rows into the primary and higher positions in the matrix In this case, the first row already features a value of one in the first position along the main diagonal, so all is well to proceed to the next step in the GJE reduction process to arrive at a “Row Equivalent” REF Basis matrix… 1 3 1 0 1 2 – 2 – 5 0 1 0 – 5 5 8 – 9 A1 B1 C1 1 3 1 0 1 2 – 2 – 5 0 1 0 – 5 5 8 – 9 A1 B1 C1 Inspection seems to suggest that scaling Row Three by – 3 and summing with Row Two would yield an appealing reduction of Row Two into the form where the leading entry in the row is rendered into a zero. B2 – 3C1 = B1 Section 4.6, (Pg. 197), Example 9 Solution Space From this established “Pivot” move down to the next row and render the entry there into a “zero” using EROs Example: Find the set of all solution vectors of the LE system ③ ④
  • 196.
    © Art Traynor2011 Mathematics Vector Space Matrix Nullspace, Rank, and Nullity Section 4.6, (Pg. 197), Example 9 Solution Space Example: Find the set of all solution vectors of the LE system Continued… 3 – 3 1 – 6 – 5 0 0 15 8 27 B1 – 3C1 0 – 5 – 5 15 35B1 1 1 0 2 – 2 0 1 – 5 5 – 9 A1 C1 From this established “zero” move down to the next row and render the entry there into a “zero” using EROs 0 – 5 – 5 15 35B1 ④ Inspection suggests that scaling Row Two by – 1 and summing with Row One would yield an appealing reduction of Row Three into the form where the leading entry in the row is rendered into a zero. C2 – 1A1 = C1 ⑤
  • 197.
    © Art Traynor2011 Mathematics Vector Space Matrix Nullspace, Rank, and Nullity Section 4.6, (Pg. 197), Example 9 Solution Space Example: Find the set of all solution vectors of the LE system Continued… C1 – 1A1 0 2 2 – 6 – 14C1 1 0 0 2 – 2 2 1 – 6 5 – 14 A1 C1 With the second column entry in the first row already fixed at a “zero” value, we can proceed down to the next row and render the entry there into a “zero” using EROs 0 – 5 – 5 15 35B1 Inspection suggests that scaling Row Two by a factor of negative one–fifth would reduce Row Two to the desired “Pivot” value of “one” at the next position down along the main diagonal. ⑤ 1 2 0 – 5 – 9 – 1 0 2 – 1 – 5 – B1 = B2 1 5 ⑥
  • 198.
    © Art Traynor2011 Mathematics Vector Space Matrix Nullspace, Rank, and Nullity Section 4.6, (Pg. 197), Example 9 Solution Space Example: Find the set of all solution vectors of the LE system Continued… B1 0B2 1 0 0 2 – 2 2 1 – 6 5 – 14 A1 C1 With the second “pivot” fixed along the main diagonal, we can proceed down to the next row and render the entry there into a “zero” using EROs B2 Now for the coup de grâce – we notice that Row Three is a scalar multiple of Row Two, thus scaling Row Two by a factor of – 2 and summing with Row Three will yield a new Row Three populated entirely of all zero entries 0 1 1 – 3 – 7 ⑥ – B1 1 5 0 – 5 – 5 15 35 1 1 – 3 – 7 0 1 1 – 3 – 7 ⑦ C1 – 2B2 = C2
  • 199.
    © Art Traynor2011 Mathematics Vector Space Matrix Nullspace, Rank, and Nullity Section 4.6, (Pg. 197), Example 9 Solution Space Example: Find the set of all solution vectors of the LE system Continued… C1 0C2 1 0 0 0 – 2 0 1 0 5 0 We now need to examine this reduced matrix to determine an appropriate parameterization of the Solution Vectors of the nHLES The Gauss-Jordan Elimination (GJE) is now complete as we have arrived at a “row-equivalent” matrix to A, now designated B, as the remaining “non- zero” row vectors constitute a Basis set for the system (?) …QED 0 – 2 – 2 6 14 0 0 0 0 0 1 1 – 3 – 7 0 2 2 – 6 – 14 – 2B2 ⑦ B = ⑧ The RREF matrix, thus transformed from the root SMF or TMF (conventionally designated “ A ” with entries “ ci ” ) is now restated as RREF matrix “ B ” with entries “ di ” Section 4.6, (Pg. 193)
  • 200.
    © Art Traynor2011 Mathematics Vector Space Matrix Nullspace, Rank, and Nullity Section 4.6, (Pg. 197), Example 9 Solution Space Example: Find the set of all solution vectors of the LE system X3 and X4 with coefficients ≠ 1 are the best candidates for parameterization. 1 0 0 0 – 2 0 1 0 5 0 0 1 1 – 3 – 7B = RREF ( Basis Set ) for nHLES 1x1 + 0x2 – 2x3 + 1x4 = 5s 3x1 + 1x2 + 1x3 – 3x4 = – 7s x1 + 2x2 + 1x3 + 1x3 = s x1 + 2x2 + 1x3 + 1x4 = t 1x1 + 0x2 – 2s + 1t4 = 5s 1x1 + 0x2 – 2s + 1t4 = 2s – 1t + 5s 3x1 + 1x2 + 1s3 – 3t4 = – 7s 3x1 + 1x2 + 1s3 – 3t4 = – 1ss +3t – 7s 2s – 1s 1s – 1t + 3t + 0t 0s + 1t x = + 5 – 7 + 0 x1 x2 x3 x4 = + 0 ⑨
  • 201.
    © Art Traynor2011 Mathematics Vector Space Matrix Nullspace, Rank, and Nullity Solution Space Example: Find the set of all solution vectors of the LE system Finally we state the solution vectors in terms of their correspondence with the solution for the associated homogenous system 1 0 0 0 – 2 0 1 0 5 0 0 1 1 – 3 – 7B = x = x1 x2 x3 x4 = + 5 – 7 + 0 + 0 – 1t + 3t + 0t + 1t 2s – 1s 1s 0s + + x = x1 x2 x3 x4 = s + 5 – 7 + 0 + 0 – 1t + 3t + 0t + 1t 2s – 1s 1s 0s + t + RREF ( Basis Set ) for nHLES x1 + 2x2 + 1x3 + 1x3 = s x1 + 2x2 + 1x3 + 1x4 = t 1x1 + 0x2 – 2s + 1x1 = 2s – 1t + 5s 3x1 + 1x2 + 1s3 – 3x2 = – 1ss +3t – 7s 10 xi u1 u2 xp x = su1 + tu2 + xp System Solution(s) x = xh + xp xh = su1 + tu2 Ax = 0 ( HLES ) Ax = b ( nHLES ) Section 4.6, (Pg. 197), Example 9 xh thus represents an arbitrary vector in the solution space of Ax = 0 Section 4.6, (Pg. 197)
  • 202.
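An illustrative sympy sketch of the same Example 9 system: linsolve returns the general solution with x3 and x4 free (the s and t parameters above), and nullspace() returns the basis of the homogeneous part xh = s·u1 + t·u2.

from sympy import Matrix, linsolve, symbols

A = Matrix([[1, 0, -2,  1],
            [3, 1, -5,  0],
            [1, 2,  0, -5]])
b = Matrix([5, 8, -9])

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
print(linsolve((A, b), x1, x2, x3, x4))
# roughly {(2*x3 - x4 + 5, -x3 + 3*x4 - 7, x3, x4)} with x3, x4 free
print(A.nullspace())   # [Matrix([2, -1, 1, 0]), Matrix([-1, 3, 0, 1])]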
Vector Space: Particular Solution of a Non-Homogeneous LE System (nHLES)
Section 4.6 (Pg. 197), Example 9 (continued)

Augmented Coefficient Matrix (rows v1, v2, v3; columns x1, x2, x3, x4, b):

    A = [ 1  0  -2   1 |  5 ]
        [ 3  1  -5   0 |  8 ]
        [ 1  2   0  -5 | -9 ]

"Row Equivalent" (RREF Basis) matrix:

    B = [ 1  0  -2   1 |  5 ]   ← w1
        [ 0  1   1  -3 | -7 ]   ← w2
        [ 0  0   0   0 |  0 ]

The wi vectors thus form a Basis for the Row Space of A, i.e. the subspace spanned by S = { v1, v2, v3 }. Section 4.6 (Pg. 191)
Only the columns featuring a "leading one" ("Pivot") in the RREF matrix B are linearly independent (here the first two columns); the remaining columns are linearly dependent on them. Section 4.6 (Pg. 192)

    x = [ x1, x2, x3, x4 ]^T = s [ 2, -1, 1, 0 ]^T + t [ -1, 3, 0, 1 ]^T + [ 5, -7, 0, 0 ]^T
Vector Space: Solution Space Consistency
Section 4.6 (Pg. 198)

For a matrix A composed of m rows and n columns, Am×n, for which Ax = b defines a particular solution xp of the nHLES, the nHLES is consistent if-and-only-if:
- the right-hand-side vector b lies in the Column Space of A;
- equivalently, b = [ b1, b2, …, bm ]^T is a Linear Combination of the columns of A;
- equivalently, b lies in the subspace of Rm spanned by the columns of A.

There is the additional implication that the respective Ranks of the nHLES coefficient and augmented matrices are equivalent. Section 4.6 (Pg. 198)

I think what is meant by this is that if b is adjoined to A and the result is reduced by GJE to an RREF matrix B, then so long as the adjoined column introduces no new pivot (no row of the form [ 0 … 0 | nonzero ] appears), the system is thereby demonstrated consistent.
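The rank criterion can be checked directly; a minimal numpy sketch, reusing the Example 9 coefficient matrix (the second right-hand side below is a made-up vector chosen to lie outside the column space):

    import numpy as np

    def is_consistent(A, b):
        # Ax = b is consistent iff rank(A) == rank([A | b])
        return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

    A = np.array([[1, 0, -2,  1],
                  [3, 1, -5,  0],
                  [1, 2,  0, -5]], dtype=float)
    print(is_consistent(A, np.array([5, 8, -9])))   # True:  b is in the column space of A
    print(is_consistent(A, np.array([0, 0,  1])))   # False: this b is not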
Vector Space: Square Matrix Equivalency Conditions
Section 4.6 (Pg. 198)

For a square matrix A of order n (An×n), each of the following conditions implies each of the others:
- A is invertible;
- Ax = b has a unique solution for any n×1 matrix b;
- Ax = 0 has only the trivial solution;
- A is "row equivalent" to In;
- | A | ≠ 0;
- Rank( A ) = n;
- the n row vectors of A are linearly independent;
- the n column vectors of A are linearly independent.

(In particular, such an A cannot be the empty or zero matrix.)
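A few of these equivalences can be spot-checked numerically; the 3×3 matrix below is hypothetical, chosen only for illustration:

    import numpy as np

    A = np.array([[2, 1, 0],
                  [0, 1, 3],
                  [1, 0, 1]], dtype=float)   # hypothetical example matrix
    n = A.shape[0]

    full_rank   = np.linalg.matrix_rank(A) == n              # Rank(A) = n
    det_nonzero = not np.isclose(np.linalg.det(A), 0.0)      # |A| != 0
    b = np.array([1.0, 2.0, 3.0])
    unique_sol  = np.allclose(A @ np.linalg.solve(A, b), b)  # Ax = b uniquely solvable
    print(full_rank, det_nonzero, unique_sol)                # the three answers agree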
Vector Space: Basis, Coordinate Representation Relative to a Basis
Section 4.7 (Pg. 202)

For an Ordered Basis set B = { v1, v2, …, vn } of a Vector Space V, every vector x in V can be expressed as a Linear Combination, or sum of scalar multiples, of the constituent vectors of B such that:

    x = c1 v1 + c2 v2 + … + cn-1 vn-1 + cn vn

The coordinate matrix, or coordinate vector, of x relative to B is the column matrix in Rn whose components are the coordinates of x, the scalars ci = [ c1, c2, …, cn ], otherwise referred to as the coordinates of x relative to the Basis B:

    [ x ]B = [ c1, c2, …, cn ]^T

Chump Alert: coordinate-matrix representation relative to a Basis (standard or otherwise) is directly analogous to representation in terms of (normalized) unit vectors.
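Numerically, finding [ x ]B amounts to stacking the basis vectors as columns and solving a linear system; a minimal sketch (the R2 basis and the vector below are hypothetical illustrations, not taken from the text):

    import numpy as np

    def coordinates_relative_to(basis, x):
        # Solve c1*v1 + ... + cn*vn = x for the scalars c_i, i.e. [x]_B
        V = np.column_stack(basis)        # basis vectors as columns
        return np.linalg.solve(V, x)

    B = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]   # hypothetical ordered basis of R^2
    x = np.array([3.0, 1.0])
    print(coordinates_relative_to(B, x))                # [2. 1.], since 2(1,1) + 1(1,-1) = (3,1)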
Vector Space: Basis, Coordinate Matrices and Bases
Section 4.7 (Pg. 203), Example 2
Example: Find the coordinate matrix of x relative to a non-standard basis.

1a. We begin by noting that the Standard Basis SBn, the unit-vector set U = { u1, u2, …, un } whose components form the columns of the Identity Matrix In, allows any point in the vector space to be described as a linear combination of those unit vectors. Multiplying a matrix of scalar components Svn = [ vij ] against the standard basis simply reproduces those components:

    Svn · SBn = Svn · In = SBn´

Note: I am switching the text's designation of B and B´, as it seems more intuitive to think of the "prime" set as the translated, or product, set.

[Figure: x-y axes with origin O and standard basis vectors u1, u2.]
1b. This entails that any alternative Basis is necessarily a linear combination of the standard Basis (which is equal to the Identity Matrix), U = In = { u1, u2, …, un }:

    In = [ 1  0 ]  (columns u1, u2)        B´ = [ 1  1 ]  (columns v1, v2)
         [ 0  1 ]                               [ 0  2 ]

Translation of the standard Basis by a set of vector scalars constituting an alternative Basis may thus appear to be trivial: Svn · In = SBn´.
1c. This alternative Basis is akin to a Translation of the standard coordinate origin O → O´:

    Svn · In = SBn´ = [ 1  1 ]   with columns v1 = ( 1, 0 ) and v2 = ( 1, 2 )
                      [ 0  2 ]

[Figure: x-y axes with origins O and O´; basis vectors u1, u2 and v1 = ( 1, 0 ), v2 = ( 1, 2 ).]

(In the original slides the B-Prime matrix is color-coded green to emphasize its status as a "resultant", i.e. the transformed, alternative Basis.)
② The Translated, or product, vector set can be expressed in several equivalent forms:
- Basis constituent vectors expressed in set notation: B´ = { v1, v2 } = { ( 1, 0 ), ( 1, 2 ) }
- Restated in Transition Matrix Form (TMF), with the vector components as columns:

    B´ = [ r11  r12 ] = [ 1  1 ]
         [ r21  r22 ]   [ 0  2 ]
3a. Supplying axes through the Translated origin suggests the shifted coordinate system introduced by the alternative Basis:

    "Standard" Basis element:     B  = [ 1  0 ]  (columns u1, u2)
                                       [ 0  1 ]
    "Alternative" Basis element:  B´ = [ 1  1 ]  (columns v1, v2)
                                       [ 0  2 ]

Note that the xi ("Unknown" or "Variable") row vector, populated with the multiplicative identity (all "ones"), shifts the X-axis of the alternative-Basis system of coordinates by a scalar multiple of one.

[Figure: axes x, y through O and x´ through O´, with unit tick marks 1B through 5B.]
3b. The resultant vector ( v1 + v2 ) indicates the orientation of the "y´" coordinate axis and introduces a "skew" in the alternative-Basis system by comparison with the standard Basis.

[Figure: axes x, y, x´, y´ with tick marks 1B through 5B and 1B´ through 5B´.]
3c. The alternative Basis maps input coordinates from the standard Basis, incrementing each "X" value by +1 and each "Y" value by +2. Note that the "two" in the yi ("Unknown" or "Variable") row vector also scales, or elongates, the alternative-Basis "y" coordinate. The tick marks of the two systems therefore correspond as:

    1B ⇌ ½B´,   2B ⇌ 1B´,   3B ⇌ 1½B´,   4B ⇌ 2B´
3d. Thus, when the coordinate values of the two systems are superimposed, the "X" coordinates bear the same values, while the "Y" coordinates of the two systems differ by a factor of two.
④ Any arbitrary vector x in the Vector Space V can be represented as a scalar multiple (i.e. a Linear Combination) of the Basis vectors:

    x = c1 v1 + c2 v2        (generally, x = c1 v1 + c2 v2 + … + cn-1 vn-1 + cn vn)

- Basis constituent vector components expressed in Set Notation Form (SNF): B´ = { ( 1, 0 ), ( 1, 2 ) }
- Restated in Transition Matrix Form (TMF):  B´ = [ 1  1 ; 0  2 ]  (columns v1, v2)
- The ordered pair of scalar vector coefficients of x:  cn = ( c1, c2 ) = ( 3, 2 )
- Restated as a Column Vector (the scalar coefficients of x):  [ 3, 2 ]^T
5a. The ordered-pair collection of vector scalar coefficients of the arbitrary vector x is explicitly corresponded with the Basis vectors, the products of which form a resultant Linear Combination.

The set of Basis vectors in SNF is B´ = { v1, v2 } = { 1·v1, 1·v2 }; a scalar coefficient of "one" is implicit. Juxtaposed in this way it is clear that x, whose coefficients ci ≠ 1, is a Linear Combination of the Basis Vector set:

    x = c1 v1 + c2 v2,   cn = ( c1, c2 ) = ( 3, 2 )

An illuminating change of notation is here introduced, [ x ]B´, whereby:

    cn = [ x ]B´ = [ 3, 2 ]^T    (stated as a Column Vector of the scalar coefficients of x)
5b. The novel notation for the set (ordered pair, or column-vector form) of vector scalar coefficients ci directs our attention to the specific Basis by which the resulting coordinates are generated:

    cn = [ x ]B´ = [ 3, 2 ]^T

read as "x relative to" (or "defined by", "within") the Basis B´.
Vector x can therefore be graphically represented by a Position Vector demarcated in the B-Prime coordinate system [ i.e. three increments of "x" from the alternative-Basis ( aB ) origin and two increments of "y" from the aB origin ]:

    [ x ]B´ = ( 3, 2 )
To restate the arbitrary vector in terms of the standard Basis ( sB ), the aB coordinates need simply be substituted into the vector equation for x, yielding the solution set in terms of the sB:

    x = c1 v1 + c2 v2,   B´ = { v1, v2 } = { ( 1, 0 ), ( 1, 2 ) },   cn = [ x ]B´ = ( c1, c2 ) = ( 3, 2 )

    x = 3 ( 1, 0 ) + 2 ( 1, 2 ) = 3 ( v1 ) + 2 ( v2 )
    x = ( 3·1 + 2·1 , 3·0 + 2·2 )
    x = ( 3 + 2 , 0 + 4 )
    x = ( 5 , 4 )
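The same arithmetic in numpy terms, with the B´ vectors stacked as the columns of a matrix:

    import numpy as np

    P = np.array([[1.0, 1.0],        # columns are the non-standard basis vectors (1,0) and (1,2)
                  [0.0, 2.0]])

    x_Bprime = np.array([3.0, 2.0])  # coordinates of x relative to B'
    x_std = P @ x_Bprime             # coordinates of x relative to the standard basis
    print(x_std)                     # [5. 4.]

    # Going the other way: solve P c = x_std for the B'-coordinates
    print(np.linalg.solve(P, x_std)) # [3. 2.]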
Note finally that, as the root Basis here is the Identity Matrix ( IM ), the B´ matrix is itself the Transition Matrix from the non-standard to the standard Basis:

    P = B´ = [ 1  1 ]
             [ 0  2 ]

Its inverse, the Transition Matrix for the Change of Basis in the opposite direction (Standard → Non-Standard), conventionally noted P⁻¹, is found by adjoining and reducing:

    [ B´ | In ]  -- Adjoin, GJE ( EROs ), RREF -->  [ In | P⁻¹ ],   P⁻¹ = ( B´ )⁻¹        Section 4.7 (Pg. 208)
Vector Space: Basis Transformation, Change of Basis (aka Transformation)
Section 4.7 (Pg. 203), Example 3
Example: Find the coordinate matrix of x relative to a non-standard basis.

① We first restate the given alternative Basis aB = B´ (expressed in Set Notation Form, SNF) in Transformation Matrix Form (TMF):

    B´ = { u1, u2, u3 } = { ( 1, 0, 1 ), ( 0, -1, 2 ), ( 2, 3, -5 ) }

    aB Coefficient Matrix (columns u1, u2, u3; unknowns c1, c2, c3):
    B´ = [ 1   0   2 ]
         [ 0  -1   3 ]
         [ 1   2  -5 ]

Given the form of the problem, it is implicit that the arbitrary vector is proffered relative to the standard Basis.
With this restatement, we next introduce the arbitrary vector coordinates (for which we are tasked to find the coordinate matrix corresponding to the aB), rendering them into column-vector form:

    x = ( 1, 2, -1 )   as the column vector   [ 1, 2, -1 ]^T

These given coordinates are introduced relative to the Standard Basis. Keep in mind that the "given" arbitrary vector, stated relative to the standard Basis, thus represents the solution set (the right-hand side) from which the set of scalar multiples applicable to the alternative Basis will be derived.
We next introduce a column vector of unknown scalars, the solution to which will represent the coordinate matrix of the arbitrary vector with respect to the alternative Basis ( aB ):

    cn = [ x ]B´ = ( c1, c2, c3 )
The final "set-up" step entails restating the composed system, rendering it into a non-Homogeneous Linear Equation System ( nHLES ) and its corresponding Augmented Coefficient Matrix ( ACM ):

    Non-Homogeneous Linear Equation System ( nHLES ):
    1c1 + 0c2 + 2c3 =  1
    0c1 - 1c2 + 3c3 =  2
    1c1 + 2c2 - 5c3 = -1

    Augmented Coefficient Matrix (columns u1, u2, u3, b; unknowns c1, c2, c3):
    A = [ 1   0   2 |  1 ]
        [ 0  -1   3 |  2 ]
        [ 1   2  -5 | -1 ]

The ACM is thus expressed in Standard Matrix Form, with the column vectors corresponding to the coefficients of the unknown aB scalars; the solution (by means of Gauss-Jordan Elimination, GJE) will supply the coordinate matrix relative to the aB that the problem poses for resolution.
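Before working through the row reduction, the result can be previewed by solving B´c = x directly; a minimal numpy sketch using the matrices of this example:

    import numpy as np

    Bprime = np.array([[1,  0,  2],      # columns are the non-standard basis vectors u1, u2, u3
                       [0, -1,  3],
                       [1,  2, -5]], dtype=float)
    x_std = np.array([1, 2, -1], dtype=float)   # x given relative to the standard basis

    c = np.linalg.solve(Bprime, x_std)   # the coordinate matrix [x]_B'
    print(c)                             # [ 5. -8. -2.]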
② Designate each row with an uppercase alpha character; this will allow the Elementary Row Operations (EROs) of the Gauss-Jordan Elimination (GJE) to be described in a summarized algebraic fashion:

    A1: [ 1   0   2 |  1 ]
    B1: [ 0  -1   3 |  2 ]
    C1: [ 1   2  -5 | -1 ]

Gauss-Jordan Elimination (GJE) is an algorithmic scheme, applied to a Standard Matrix Form (SMF) representation of a system of Linear Equations, resulting in a "row-equivalent" reduced matrix in which the main diagonal entries are all "ones" (the pivots of Row Echelon Form, REF) and all entries above and below the pivots are "zeros" (Reduced Row Echelon Form, RREF). Section 1.2 (Pg. 19)
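The GJE procedure just described can be captured in a short routine; the following is a minimal sketch of Gauss-Jordan reduction (with partial pivoting added for numerical stability, so it is not the text's hand procedure verbatim), applied to the ACM of this example:

    import numpy as np

    def rref(M, tol=1e-12):
        # Reduce M to Reduced Row Echelon Form using elementary row operations
        A = M.astype(float).copy()
        rows, cols = A.shape
        pivot_row = 0
        for j in range(cols):
            if pivot_row >= rows:
                break
            p = pivot_row + np.argmax(np.abs(A[pivot_row:, j]))   # choose a pivot in column j
            if abs(A[p, j]) < tol:
                continue                                          # no pivot in this column
            A[[pivot_row, p]] = A[[p, pivot_row]]                 # ERO: interchange rows
            A[pivot_row] /= A[pivot_row, j]                       # ERO: scale to a leading one
            for i in range(rows):
                if i != pivot_row:
                    A[i] -= A[i, j] * A[pivot_row]                # ERO: clear the column
            pivot_row += 1
        return A

    ACM = np.array([[1,  0,  2,  1],
                    [0, -1,  3,  2],
                    [1,  2, -5, -1]], dtype=float)
    print(rref(ACM))
    # [[ 1.  0.  0.  5.]
    #  [ 0.  1.  0. -8.]
    #  [ 0.  0.  1. -2.]]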
③ Investigate permutation as a strategy to manipulate the simplest (most easily reduced) rows into the primary and higher positions in the matrix. In this case the first row already features a value of one in the first position along the main diagonal, as well as a zero directly below it in the next row down, so all is well to proceed to the next step in the GJE reduction process toward a "Row Equivalent" RREF Basis matrix.

④ From this established "Pivot", move down to the next row and render the entry there into a "zero" using EROs. Inspection suggests that scaling Row One by -1 and summing with Row Three will yield an appealing reduction of Row Three, with the leading entry in that row rendered into a zero:

    C2 = C1 - 1A1

    A1: [ 1   0   2 |  1 ]
    B1: [ 0  -1   3 |  2 ]
    C1: [ 1   2  -5 | -1 ]
④ (continued)   C2 = C1 - 1A1

A note on convention: the resultant should always be stated first (indexed to its succeeding value) in the reduction evolution; the Augend/Minuend term should always be the same row as the resultant (at its prevailing index); the Subtrahend/Summand should thus be the only term subject to any scaling.
⑤ We replace the transformed Row, then proceed to the next non-one entry (along the main diagonal) or non-zero entry (outside the main diagonal), working from top to bottom and left to right:

    C1:            [  1   2  -5 | -1 ]
    -1A1:          [ -1   0  -2 | -1 ]
    C2 = C1 - 1A1: [  0   2  -7 | -2 ]

With the entry in Row One, Column Two already at zero, we move to reduce the next main-diagonal entry to a value of one, which is easily accomplished by scaling Row Two by -1:

    B2 = -1B1

    A1: [ 1   0   2 |  1 ]
    B1: [ 0  -1   3 |  2 ]
    C2: [ 0   2  -7 | -2 ]
⑥ With the second "pivot" fixed along the main diagonal, we can proceed down to the next row and render the entry there into a "zero" using EROs. Inspection suggests that Row Three be summed with Row Two scaled by a factor of -2:

    C3 = C2 - 2B2

    A1: [ 1   0   2 |  1 ]
    B2: [ 0   1  -3 | -2 ]
    C2: [ 0   2  -7 | -2 ]
Carrying out the operation:

    C2:            [ 0   2  -7 | -2 ]
    -2B2:          [ 0  -2   6 |  4 ]
    C3 = C2 - 2B2: [ 0   0  -1 |  2 ]

With the main-diagonal entry in Column Two fixed at the desired pivot value of "one", and all other entries in the column reduced to values of "zero", we proceed next to the top of Column Three to continue our row reduction with further applied EROs.

⑦ Inspection discloses that the lead entry in Column Three ( j = 3 ) can be reduced to "zero" by adding to Row One the new Row Three scaled by a factor of two:

    A2 = A1 + 2C3
Carrying out the operation:

    A1:            [ 1   0   2 |  1 ]
    +2C3:          [ 0   0  -2 |  4 ]
    A2 = A1 + 2C3: [ 1   0   0 |  5 ]

⑧ Proceeding with EROs down Column Three, we next address the reduction of the entry in Row Two. Inspection suggests that the entry in Column Three, Row Two ( ij = 23 ) can be rendered into a zero value by the summation of Row Two with Row Three scaled by a factor of -3:

    B3 = B2 - 3C3

    A2: [ 1   0   0 |  5 ]
    B2: [ 0   1  -3 | -2 ]
    C3: [ 0   0  -1 |  2 ]
Carrying out the operation:

    B2:            [ 0   1  -3 | -2 ]
    -3C3:          [ 0   0   3 | -6 ]
    B3 = B2 - 3C3: [ 0   1   0 | -8 ]

⑨ Inspection reveals that a simple scaling of Row Three by a factor of -1 will render the system into its final RREF expression:

    C4 = -1C3

    A2: [ 1   0   0 |  5 ]
    B3: [ 0   1   0 | -8 ]
    C3: [ 0   0  -1 |  2 ]
⑩ Having arrived at the RREF expression of the nHLES, we can explicitly state the values of the coordinate matrix of the arbitrary vector relative to the alternative Basis ( aB ):

    A2: [ 1   0   0 |  5 ]        1d1 + 0d2 + 0d3 =  5
    B3: [ 0   1   0 | -8 ]        0d1 + 1d2 + 0d3 = -8
    C4: [ 0   0   1 | -2 ]        0d1 + 0d2 + 1d3 = -2

No parameterization is necessary, as the RREF form gives an explicit value for each unknown. This differs from the result in Example 9 (Pg. 197) because here the coefficient matrix is square with Rank equal to the number of unknowns, leaving no free variables, whereas in Example 9 the Rank fell short of the number of unknowns by two, yielding the two parameters s and t.

The RREF matrix, thus transformed from the root SMF or TMF (conventionally designated "A" with entries ci), is now restated as RREF matrix "B" with entries di. Section 4.6 (Pg. 193)
Restating the result:

    B = [ 1   0   0 |  5 ]     RREF ( Basis Set ) for the nHLES
        [ 0   1   0 | -8 ]
        [ 0   0   1 | -2 ]

    dn = [ x ]B´ = [ 5, -8, -2 ]^T     ( "x relative to", "defined by", "within" the Basis B´ )

By analogy with Example 9, the system solution can be written x = xh + xp, with xh drawn from the solution space of the homogeneous system Ax = 0 ( HLES ) and xp a particular solution of Ax = b ( nHLES ); here the homogeneous system admits only the trivial solution, so the coordinate matrix [ x ]B´ is unique.
⑪ (continued)

    Augmented Coefficient Matrix (columns u1, u2, u3, b; unknowns c1, c2, c3):
    A = B´ = [ 1   0   2 |  1 ]
             [ 0  -1   3 |  2 ]
             [ 1   2  -5 | -1 ]

    "Row Equivalent" ( RREF Basis ) matrix:
    B = [ 1   0   0 |  5 ]   ← w1
        [ 0   1   0 | -8 ]   ← w2
        [ 0   0   1 | -2 ]   ← w3

Only columns featuring a "leading one" ( "Pivot" ) in the RREF matrix B are linearly independent. Section 4.6 (Pg. 192)
Perhaps trivially so, it is worth noting that the wi vectors thus form a Basis for the Row Space of A = B´, i.e. the subspace spanned by { u1, u2, …, un }. Section 4.6 (Pg. 191); Section 4.5 (Pg. 184)
Also recall that the subspace spanned by A = B´ will thus feature precisely three Basis vectors (by the definition of Basis cardinality), exactly corresponding to the number of linearly independent vectors in the space (any greater number would entail Linear Dependence of one of the vectors). This cardinality will always coincide with that of the "Standard Basis" for the vector space (e.g. R3).
⑫ (continued)

    x = c1 u1 + c2 u2 + … + cn-1 un-1 + cn un        Arbitrary vector, generally
    x = c1 u1 + c2 u2 + c3 u3                        Arbitrary vector, particular
    ( 1, 2, -1 ) = c1 ( 1, 0, 1 ) + c2 ( 0, -1, 2 ) + c3 ( 2, 3, -5 )
                                                     Arbitrary vector, as an nHLES linear combination relative to the aB
⑬ (continued)

    dn = [ x ]B´ = [ 5, -8, -2 ]^T     ( "x relative to", "defined by", "within" the Basis B´ )

    x = c1 u1 + c2 u2 + … + cn-1 un-1 + cn un        Arbitrary vector, generally
    x = d1 u1 + d2 u2 + d3 u3                        Arbitrary vector, particular
    x = 5 ( 1, 0, 1 ) + ( -8 ) ( 0, -1, 2 ) + ( -2 ) ( 2, 3, -5 )
                                                     Arbitrary vector, as an nHLES linear combination relative to the aB
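A quick numerical check that this linear combination reproduces the given vector:

    import numpy as np

    u1 = np.array([1,  0,  1], dtype=float)
    u2 = np.array([0, -1,  2], dtype=float)
    u3 = np.array([2,  3, -5], dtype=float)

    # [x]_B' = (5, -8, -2); the combination should return x = (1, 2, -1)
    print(5 * u1 + (-8) * u2 + (-2) * u3)    # [ 1.  2. -1.]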
Vector Space: Basis Transformation, Change of Basis (via Transition Matrix)
Section 4.7 (Pg. 204)

For an arbitrary vector x in vector space V described by coordinates relative to a Standard Basis B, an ancillary description, in coordinate terms relative to an Alternate Basis B´ ( B-Prime ), can be determined by operation of a Transition Matrix P ( the columns of which are populated by the components of the Alternate-Basis vectors ).

- The matrix product of P and the column vector [ x ]B´ of scalars ci = [ c1, c2, …, cn ], which forms the coordinate matrix of x relative to the alternate basis B´ ( B-Prime ), yields the column vector [ x ]B, the coordinate matrix relative to the "root" basis:  P [ x ]B´ = [ x ]B.
- The "Change of Basis" is thus the solution for the unknown column vector [ x ]B´ of scalars ci = [ c1, c2, …, cn ].
Restating the product in Transition Matrix Form ( TMF ), in which the columns are populated by the individual Basis-vector components and the rows collect like "unknown" terms:

    P = B´ = { u1, u2, u3 } = { ( 1, 0, 1 ), ( 0, -1, 2 ), ( 2, 3, -5 ) }

    P = B´ = [ p11  p12  p13 ]
             [ p21  p22  p23 ]
             [ p31  p32  p33 ]

    P · [ d1, d2, d3 ]^T = [ x ]B,   where [ d1, d2, d3 ]^T = [ x ]B´ ( Alternate Basis ) and [ x ]B is the Root-Basis coordinate matrix

    x = c1 u1 + c2 u2 + … + cn-1 un-1 + cn un

Note the convention wherein the elements of the B-Basis set have their component vectors assigned to "u", while those of the B-Prime ( B´ ) Basis set are assigned to "v" (rendered in blue and red respectively in the original slides).
With the numbers of Example 3:

    P = B´ = [ 1   0   2 ]     ( Coefficient Matrix; columns u1, u2, u3 )
             [ 0  -1   3 ]
             [ 1   2  -5 ]

    P · [ d1, d2, d3 ]^T = [ x ]B = [ 1, 2, -1 ]^T,   where [ d1, d2, d3 ]^T = [ x ]B´ ( Alternate Basis )
Whereas the product of the Transition Matrix and the Alternate-Basis ( aB ) coordinate matrix yields the Root-Basis coordinate matrix, it is equivalent to state that the product of the Transition Matrix Inverse and the Root-Basis coordinate matrix will yield the Alternate-Basis coordinate matrix:

    P [ x ]B´ = [ x ]B                 Change of Basis  B´ → B
    P⁻¹ P [ x ]B´ = P⁻¹ [ x ]B
    [ x ]B´ = P⁻¹ [ x ]B               Change of Basis  B → B´

We recall that to find an inverse matrix we adjoin the Identity Matrix In ( on the RHS ) to P, forming the matrix [ P | In ], and perform EROs by GJE to arrive at an RREF, which results in the matrix [ In | P⁻¹ ] with the inverse occupying the RHS of the resultant matrix.
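Both directions of the change of basis can be confirmed numerically with the matrices of Example 3; a minimal sketch:

    import numpy as np

    P = np.array([[1,  0,  2],          # transition matrix: columns are the B' vectors
                  [0, -1,  3],
                  [1,  2, -5]], dtype=float)

    x_B      = np.array([1,  2, -1], dtype=float)    # [x]_B  (root / standard basis)
    x_Bprime = np.array([5, -8, -2], dtype=float)    # [x]_B' (alternate basis)

    print(np.allclose(P @ x_Bprime, x_B))                  # True:  P [x]_B' = [x]_B
    print(np.allclose(np.linalg.inv(P) @ x_B, x_Bprime))   # True:  P^-1 [x]_B = [x]_B'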
Side by side:

    Transition Matrix          P   = [ pij ]    ( columns u1, u2, u3 ):    P [ x ]B´ = [ x ]B     ( Alternate → Root )
    Transition Matrix Inverse  P⁻¹ = [ p⁻¹ij ]  ( columns v1, v2, v3 ):    P⁻¹ [ x ]B = [ x ]B´   ( Root → Alternate )
Vector Space: Basis Transformation, Change of Basis (via Transition Matrix)
Section 4.7 (Pg. 206)

The Transition Matrix ( TM ) from a Root Basis ( RB = B ) to an Alternate Basis ( aB = B´, B-Prime ) is found, as is the case for matrix inverses generally, by adjoining the Root-Basis matrix ( on the RHS ) to the aB matrix ( on the LHS ) and applying EROs via GJE to arrive at an RREF-reduced matrix; the result forms an adjoined matrix composed of the Identity Matrix ( IM, on the LHS ) and the P⁻¹ Transition Matrix ( from B to B´, on the RHS ):

    [ B´ | B ]  -- Adjoin, GJE ( EROs ), RREF -->  [ In | P⁻¹ ]
Vector Space: Basis Transformation, Transition Matrix
Section 4.7 (Pg. 208)

When the Root Basis ( RB = B ) is equivalent to the Identity Matrix ( IM ), the process of finding a basis-changing Transition Matrix ( TM ) simplifies to a symmetric operation: the Root-Identity Matrix is adjoined ( on the RHS ) to the Alternate Basis Matrix ( ABM = B´, B-Prime, on the LHS ), and EROs are applied via GJE to arrive at an RREF-reduced matrix; the result is an adjoined matrix composed of the Root-Identity Matrix ( RIM, on the LHS ) and the P⁻¹ Transition Matrix ( from B to B´, on the RHS ), so that P⁻¹ = ( B´ )⁻¹:

    [ B´ | In ]  -- Adjoin, GJE ( EROs ), RREF -->  [ In | P⁻¹ ],   P⁻¹ = ( B´ )⁻¹        Change of Basis: Standard → Non-Standard
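A closing numerical sketch of both recipes: when the root basis is the identity, P⁻¹ is simply the matrix inverse of B´; for a general root basis B, reducing [ B´ | B ] to [ In | P⁻¹ ] amounts to solving B´X = B (the 3×3 root basis below is hypothetical, chosen only for illustration):

    import numpy as np

    Bprime = np.array([[1,  0,  2],      # alternate basis B' (Example 3) as columns
                       [0, -1,  3],
                       [1,  2, -5]], dtype=float)

    # Root basis = identity: P^-1 = (B')^-1
    print(np.round(np.linalg.inv(Bprime), 3))

    # General (hypothetical) root basis B: reducing [B' | B] to [I | P^-1] is equivalent to solving B' X = B
    B = np.array([[1, 0, 0],
                  [1, 1, 0],
                  [0, 0, 2]], dtype=float)
    print(np.round(np.linalg.solve(Bprime, B), 3))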