
Elements of Mathematics: an embarrassingly simple (but practical) introduction to algebra

Jordi Villà i Freixa (jordi.villa@upf.edu), Pau Rué

November 23, 2011

MAT: 2011-31035-T1, MSc Bioinformatics for Health Sciences

Contents

1 Introduction
2 Sets
3 Groups and fields
4 Matrices
  4.1 Basic operations
    4.1.1 Sum
    4.1.2 Transposition
    4.1.3 Product
    4.1.4 Determinant of a matrix
    4.1.5 Rank of a matrix
  4.2 Orthogonal/orthonormal matrices
5 Systems of linear equations
  5.1 Elementary matrices and inverse
6 Vector spaces
  6.1 Basis change
  6.2 The vector space L(V, W)
  6.3 Rangespace and nullspace (Kernel)
  6.4 Composition and inverse
  6.5 Linear transforms and matrices
  6.6 Composition and matrix product
7 Projection
  7.1 Orthogonal Projection Into a Line
  7.2 Gram-Schmidt Orthogonalization
  7.3 Projection Into a Subspace
8 Diagonalization
  8.1 Diagonalizability
  8.2 Eigenvalues and Eigenvectors
9 Singular value decomposition (SVD) and principal component analysis (PCA)
  9.1 Spectral decomposition of a square matrix
  9.2 Singular Value Decomposition
  9.3 Properties of a data matrix: first and second moments
  9.4 Principal component analysis (PCA)
  9.5 PCA by SVD
  9.6 PCA by SVD in Octave
  9.7 More samples than variables
  9.8 Number of Principal Directions
  9.9 Similar Methods for Dimensionality Reduction
Summary. Playing around with matrices and their properties. Some examples of resolution of systems of linear equations.

1 Introduction

This is a non-exhaustive review of matrices and their properties. The practical part can be performed with the help of Octave (http://www.octave.org). There are versions of the program for Cygwin and Linux. Some additional on-line sources of information can be found at:

http://joshua.smcvt.edu/linalg.html
http://www.math.unl.edu/~tshores/linalgtext.html
http://archives.math.utk.edu/tutorials.html
http://www.cs.unr.edu/~bebis/MathMethods/
http://en.wikibooks.org/wiki/Linear_Algebra
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT-KANPUR/mathematics-2/book.html

For help on Octave and linear algebra:

http://math.iu-bremen.de/oliver/teaching/iub/resources/octave/octave-intro/octave-intro.html
http://www2.math.uic.edu/~hanson/Octave/OctaveLinearAlgebra.html

Check also [1, 2, 3].

2 Sets

Any collection of objects, for example the points of a given segment, the collection of all integer numbers between 0 and 10, the students in a classroom, etc., is called a set. The objects inside the set (the points, the numbers and the students) are called elements of the set. In algebra it is common to represent sets using uppercase letters and elements using lowercase letters. The elements of a set are specified between curly brackets. For example, A = {a, b, c, d} represents a set formed of 4 elements.

A set can be specified either in an extensive way, as in the case of A = {a, b, c, d}, or in an intensive way, where there is no need to specify all the elements belonging to it but only the properties they satisfy. As an example, the set A of all integer numbers between 0 and 10 can be specified as A = {x | x ∈ Z, 0 ≤ x ≤ 10}¹.

There is a huge amount of literature describing formally what a set is. We will just stick to the idea that a set is a collection of elements, none of them equal to another. The following is a list of basic properties and definitions concerning sets:

¹ x ∈ A is the mathematical way of representing that the element x belongs to the set A.
• A set A is said to be included in a set B (or A is a subset of B, or B contains A) if and only if all the elements of A belong to B. In this case we write A ⊂ B. In strictly mathematical notation:

    A ⊂ B if and only if ∀x ∈ A ⇒ x ∈ B ²

• Hence, two sets A and B are said to be equal, A = B, if and only if both conditions A ⊂ B and B ⊂ A are fulfilled.

  Example The usual numeric sets N = {1, 2, 3, · · · } (the natural numbers), Z = {0, 1, −1, 2, −2, · · · } (the integer numbers), Q (the rational numbers), R (the real numbers) and C (the complex numbers) are related in the following way: N ⊂ Z ⊂ Q ⊂ R ⊂ C.

• There is only one set that contains no elements. It is called the empty set and it is denoted by ∅.

• In addition, the universe of a given problem is the reference set, U, that contains all the sets used in that particular problem.

• Given a universe U and a subset A, the complement of A in U, written Ā, is the set of all elements in U that do not belong to A. Formally, Ā = {x ∈ U | x ∉ A}.

• Given a universe U and two sets A and B, we can define the following operations:

  1. The union of A and B is the set having all the elements from both sets A and B and no other element:

       A ∪ B = {x ∈ U | x ∈ A or x ∈ B}.

     Notice that this operation is commutative, A ∪ B = B ∪ A, and associative, A ∪ (B ∪ C) = (A ∪ B) ∪ C.

     Example Let A = {1, 2, 3, 4, 5} and B = {2, 3, 7, 8}; then A ∪ B = {1, 2, 3, 4, 5, 7, 8}.

  2. The intersection of A and B is the set having all the elements common to A and B and no other element:

       A ∩ B = {x ∈ U | x ∈ A and x ∈ B}.

     This operation is also commutative, A ∩ B = B ∩ A, and associative, A ∩ (B ∩ C) = (A ∩ B) ∩ C.

² ∀ is a mathematical symbol meaning "for all", as in ∀x ∈ A (for all elements x in the set A).
     Example Let A = {1, 2, a, b} and B = {2, 3, a, c}; then A ∩ B = {2, a}.

  3. The set difference of A and B is the set having all the elements in A that are not found in B and no other element:

       A \ B = {x ∈ U | x ∈ A and x ∉ B}.

     This operation is neither commutative nor associative. Notice also that we can write A \ B = A ∩ B̄.

  4. Given two elements a and b, we call an ordered pair the collection of these two elements in a given order. We denote the ordered pair with a being the first coordinate and b the second coordinate as (a, b). Notice that with this definition order matters (i.e. (a, b) ≠ (b, a)). The cartesian product of A and B, A × B, is then defined as the set of all ordered pairs of elements where the element in the first coordinate belongs to A and the element in the second coordinate belongs to B.

     Example Let A = {1, 2, 3} and B = {a, b}; then A × B = {(1, a), (2, a), (3, a), (1, b), (2, b), (3, b)} and B² = B × B = {(a, a), (a, b), (b, a), (b, b)}.

     In the same manner, starting from the set of real numbers R, also known as the real line, we can generate the set known as the real plane, R² = R × R = {(x, y) | x, y ∈ R}, and the real space, R³ = R × R × R = {(x, y, z) | x, y, z ∈ R}.

• A binary operation on a set is a calculation involving two operands (elements of the set) whose result is an element of the set. Let ⋆ be an operation on a set A. We write

    ⋆ : A × A → A
        (a, b) ↦ c = a ⋆ b

  which means that given two elements a, b ∈ A, the result of operating a and b is an element c = a ⋆ b which also belongs to A.

  – The property that the result of operating two elements of the set A is also an element of the set A is called the closure property.
    Example

    ∗ The normal sum (+) of natural numbers (N) and real numbers (R) is an operation that fulfills the closure property.
    ∗ The normal subtraction (−) of natural numbers is an operation that does not fulfill the closure property (2 − 4 = −2 ∉ N), while in the case of real numbers it is fulfilled.
    ∗ The product of rational numbers (Q) is an operation that fulfills the closure property.

  – Another property an operation can have is associativity. An operation ⋆ on a set A is said to be associative if for all elements a, b and c in A it holds that

      a ⋆ (b ⋆ c) = (a ⋆ b) ⋆ c.

    Example The usual sum and product in the natural, integer, rational and real numbers are associative operations. On the other hand, subtraction and division are not. Take as examples these cases: 3 − (2 − 1) ≠ (3 − 2) − 1 and 3/(5/2) ≠ (3/5)/2.

  – Notice that, as the definition of operation is based on the cartesian product (A × A), the order of the operands does matter in principle. An operation where the order of the operands does not matter is said to be commutative. Formally, an operation ⋆ is commutative if for all a, b ∈ A it holds that a ⋆ b = b ⋆ a.

    Example Again, normal addition and multiplication in the natural, integer, rational and real numbers are commutative operations while division and subtraction are not. Another operation that is non-commutative is the product of matrices. For instance, if

      M = (1 2; 3 4) and N = (3 4; 0 2), then M · N = (3 8; 9 20) while N · M = (15 22; 6 8).

  – We say that an operation ⋆ on A has an identity (also called neutral) element e if there exists an element e ∈ A such that for all elements a ∈ A it holds that

      a ⋆ e = e ⋆ a = a.

    Example The normal addition and multiplication in the integer, rational and real numbers have identity elements 0 and 1 respectively. Notice that 0 ∉ N.

  – Given an operation ⋆ on A and a ∈ A, let e be the identity element of ⋆ in A. An element b ∈ A is said to be an inverse of a if a ⋆ b = b ⋆ a = e. It can be easily shown that if the inverse exists it is unique (no other element can be the inverse of a). We will write the inverse of a as −a or a⁻¹.
    Example In the normal addition in Z, Q and R all numbers have an inverse. This is not the case for the natural numbers. For the multiplication in Q and R, all numbers but 0 have an inverse.

  – Given two operations ⋆ and ◦ on A, we say that ⋆ is distributive over ◦ if

      a ⋆ (b ◦ c) = (a ⋆ b) ◦ (a ⋆ c) and (b ◦ c) ⋆ a = (b ⋆ a) ◦ (c ⋆ a)

    for all a, b, c ∈ A.

    Example Normal multiplication in the natural, integer, rational and real numbers is distributive over normal addition.
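The set operations above map directly onto a programming language's set type. The following is a minimal sketch in Python (used here alongside the document's Octave sessions purely for illustration); the choice of universe U, the integers 0 to 10 from the earlier example, and the variable names are our own:

```python
from itertools import product

U = set(range(11))       # universe: the integers 0..10
A = {1, 2, 3, 4, 5}
B = {2, 3, 7, 8}

assert A | B == {1, 2, 3, 4, 5, 7, 8}   # union, as in the example above
assert A & B == {2, 3}                  # intersection
assert A - B == {1, 4, 5}               # set difference A \ B

# A \ B equals the intersection of A with the complement of B in U:
assert A - B == A & (U - B)

# union and intersection are commutative; set difference is not:
assert A | B == B | A and A & B == B & A
assert A - B != B - A

# the cartesian product A x B as a set of ordered pairs (order matters):
AxB = set(product(A, B))
assert (1, 2) in AxB and (2, 1) not in AxB
```

Python's `set` also enforces the defining property of a set, that no element is repeated: `{1, 1, 2}` collapses to `{1, 2}`.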
3 Groups and fields

In this section we will introduce the notions of group and field. Both concepts are fundamental in all areas of mathematics. A group is nothing other than a set of elements together with an operation that combines any two of its elements to form a third element, plus a few requirements on the operation's behavior which naturally lead to the concept of subtraction. Many basic mathematical structures are groups (say Z, Q and R with the usual addition, for instance). On the other hand, a field is a set with two operations, designated as addition and multiplication, with some properties that lead naturally to the operations of subtraction and division.

Definition 1. A group is a set G together with an operation

    ⋆ : G × G → G
        (a, b) ↦ c = a ⋆ b

which satisfies the following axioms:

 1. a ⋆ b ∈ G ∀a, b ∈ G (closure)
 2. a ⋆ (b ⋆ c) = (a ⋆ b) ⋆ c ∀a, b, c ∈ G (associativity)
 3. ∃e ∈ G such that a ⋆ e = e ⋆ a = a ∀a ∈ G (identity element)
 4. ∀a ∈ G ∃b ∈ G such that a ⋆ b = e and b ⋆ a = e (inverse element).

We denote the group as (G, ⋆).

As a remark, the associativity property is the one that allows us to get rid of the parentheses when summing or multiplying several numbers. That is, we usually write a · b · c · d instead of a · (b · (c · d)) or (a · b) · (c · d) or ((a · b) · c) · d, even though multiplication is defined as a binary operation. It is correct to write it without parentheses because multiplication is associative.

Example

 • (Z, +) is a group.
 • (N, +) is not a group, as there is no identity element for the sum.
 • (Z, −) is not a group, as the associativity property is not fulfilled.
 • (Z, ·) is not a group, as there are no inverse elements for the elements 2, 3, 4, etc. (i.e. there is no integer x such that 2 · x = 1).
 • (Q, +) and (R, +) are also groups.
 • (Q, ·) and (R, ·) are not groups. The only property that is violated is the inverse element: the element 0 does not have an inverse (i.e. there is no number x such that x · 0 = 1). If we remove 0 from these sets it is easy to see that (Q \ {0}, ·) and (R \ {0}, ·) are groups.
 • The set of polynomials of degree n with coefficients in Z, {a_n x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0 | a_n, · · · , a_0 ∈ Z}, is a group with the addition of polynomials. The identity element is the constant polynomial 0. Given a polynomial p(x) = a_n x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0, its inverse is q(x) = −a_n x^n + (−a_{n−1}) x^{n−1} + · · · + (−a_1) x + (−a_0).

Notice that the commutativity property is not required in the definition of a group. There are groups that are not commutative.

Definition 2. A field is a set F with two operations, + and ·, such that

 1. a + b ∈ F ∀a, b ∈ F (closure for +).
 2. a + (b + c) = (a + b) + c (associativity for +).
 3. ∃e₊ ∈ F such that a + e₊ = e₊ + a = a ∀a ∈ F (neutral element for +). We will denote this element as 0.
 4. ∀a ∈ F ∃b such that a + b = b + a = 0 (inverse element for +). We will denote this element as −a.
 5. a + b = b + a ∀a, b ∈ F (commutativity for +).
 6. a · b ∈ F ∀a, b ∈ F (closure for ·).
 7. a · (b · c) = (a · b) · c (associativity for ·).
 8. ∃e. ∈ F such that a · e. = e. · a = a ∀a ∈ F (neutral element for ·). We will denote this element as 1.
 9. ∀a ∈ F \ {0} ∃b such that a · b = b · a = 1 (inverse element for · for all elements but 0). We will denote this element as a⁻¹.
 10. a · b = b · a ∀a, b ∈ F (commutativity for ·).
 11. a · (b + c) = a · b + a · c ∀a, b, c ∈ F (distributive property of · with respect to +).
 12. 1 ≠ 0 (nontriviality: the neutral element for + and the neutral element for · must be different).

We will write (F, +, ·). From this definition one can easily see that (F, +) and (F \ {0}, ·) are commutative groups. Recall that Q and R satisfy this. In fact, (Q, +, ·) and (R, +, ·) are fields, but not (N, +, ·) and (Z, +, ·).

The complex numbers C = {a + ib | a, b ∈ R and i² = −1} with the complex addition and multiplication

 • (a + ib) + (c + id) = (a + c) + i(b + d)
 • (a + ib) · (c + id) = (ac − bd) + i(bc + ad)

are a field.

Not all fields have infinitely many elements. This might seem counterintuitive if one takes into account that the addition and multiplication operations are closed. The idea behind finite fields is that these operations are adapted in such a way that the closure property is fulfilled together with all the other required properties. For instance, let's consider the set Z₂ = {0, 1} and let's define the operations

    + | 0 1        · | 0 1
    --+-----       --+-----
    0 | 0 1        0 | 0 0
    1 | 1 0        1 | 0 1

Then (Z₂, +, ·) is a field.
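The two tables above can be checked mechanically. The following Python sketch is our own brute-force verification of the field axioms for Z₂ (arithmetic modulo 2 reproduces exactly the + and · tables):

```python
F = (0, 1)                       # the elements of Z2
add = lambda a, b: (a + b) % 2   # the + table (note 1 + 1 = 0)
mul = lambda a, b: (a * b) % 2   # the . table

for a in F:
    for b in F:
        # closure and commutativity for both operations
        assert add(a, b) in F and mul(a, b) in F
        assert add(a, b) == add(b, a) and mul(a, b) == mul(b, a)
        for c in F:
            # associativity and distributivity
            assert add(a, add(b, c)) == add(add(a, b), c)
            assert mul(a, mul(b, c)) == mul(mul(a, b), c)
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))

# neutral elements 0 and 1, and existence of inverses
assert all(add(a, 0) == a and mul(a, 1) == a for a in F)
assert all(any(add(a, b) == 0 for b in F) for a in F)             # additive
assert all(any(mul(a, b) == 1 for b in F) for a in F if a != 0)   # multiplicative
assert 1 != 0                                                     # nontriviality
```

The same check with `% p` for any prime p verifies that (Z_p, +, ·) is a field; for a composite modulus the multiplicative-inverse assertion fails.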
4 Matrices

Definition 3. Let F be a field (e.g. Q, R or C). A matrix A with coefficients in F of order n × m (n, m ∈ N) is a collection of n · m ordered elements of F:

    A = ( a11  a12  · · ·  a1m
          a21  a22  · · ·  a2m
           .    .           .
          an1  an2  · · ·  anm )  = {aij}, 1 ≤ i ≤ n, 1 ≤ j ≤ m

The first index refers to the row number and the second to the column number. A 1 × n matrix is called a row vector whereas an m × 1 matrix is called a column vector. In the case of n = m the matrix is said to be square of order n.

Example

 • (3 4 2) is a row vector while (3; 5; 1) (with semicolons separating rows, as in Octave) is a column vector.
 • The matrix (1.3 4; 6 5; 1 0) is a 3 × 2 matrix with coefficients in Q, and the matrix (π e² 7; 2 3π 1) is a 2 × 3 matrix with coefficients in R.

We refer to the set of all n × m matrices with coefficients in F as Mn×m(F).

Definition 4.

 • The main diagonal of a matrix A = (aij) is the set of coefficients aij such that i = j.
 • A zero matrix, 0, is a matrix all of whose elements are equal to zero.
 • The unit matrix or identity matrix of order n, In, is the square matrix of order n in which all the coefficients in the main diagonal are equal to one and all other elements are 0:

    In = ( 1 0 · · · 0
           0 1 · · · 0
           .  .      .
           0 0 · · · 1 )

Definition 5. For any square matrix A, the trace is evaluated by

    tr(A) = a11 + a22 + · · · + ann

with properties:

    tr(Aᵀ) = tr(A)
    tr(A ± B) = tr(A) ± tr(B)
    tr(cA) = c · tr(A)
    tr(AB) = tr(BA)

4.1 Basic operations

4.1.1 Sum

Definition 6. Let A, B ∈ Mn×m(F). The sum of the matrices A and B is defined as the matrix C ∈ Mn×m(F) such that

    cij = aij + bij  ∀ 1 ≤ i ≤ n, 1 ≤ j ≤ m.

We then write C = A + B. The sum of matrices is a commutative operation; hence for any two matrices A and B we have A + B = B + A.

Example

    2 1 5     2 5 0     4 6 5
    4 2 1  +  1 1 3  =  5 3 4
    0 3 2     2 0 0     2 3 2

4.1.2 Transposition

Definition 7. Let A = (aij) ∈ Mn×m(F). The transpose of A, Aᵀ ∈ Mm×n(F), is the matrix of order m × n, Aᵀ = (bij), 1 ≤ i ≤ m, 1 ≤ j ≤ n, where bij = aji.

Example

    ( 4 2 1 )ᵀ   ( 4 0 )
    ( 0 3 2 )  = ( 2 3 )
                 ( 1 2 )

Definition 8. Let A ∈ Mn(F) be a square matrix. A is said to be symmetric if A = Aᵀ and antisymmetric if A = −Aᵀ. Antisymmetric matrices have a zero diagonal (aii = −aii ⇒ aii = 0 ∀i).
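With matrices represented as nested lists, the operations defined so far take only a line or two each. This is a sketch in pure Python (the helper names are ours, chosen for illustration); it reproduces the sum example above and checks two of the trace properties:

```python
def mat_sum(A, B):
    """Entry-wise sum of two matrices of the same order: c_ij = a_ij + b_ij."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    """Swap rows and columns: (A^T)_ij = A_ji."""
    return [list(col) for col in zip(*A)]

def trace(A):
    """Sum of the main-diagonal entries of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

A = [[2, 1, 5], [4, 2, 1], [0, 3, 2]]
B = [[2, 5, 0], [1, 1, 3], [2, 0, 0]]

assert mat_sum(A, B) == [[4, 6, 5], [5, 3, 4], [2, 3, 2]]   # the sum example above
assert transpose([[4, 2, 1], [0, 3, 2]]) == [[4, 0], [2, 3], [1, 2]]
assert trace(transpose(A)) == trace(A)                      # tr(A^T) = tr(A)
assert trace(mat_sum(A, B)) == trace(A) + trace(B)          # tr(A+B) = tr(A) + tr(B)
```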
4.1.3 Product

Definition 9. Let A = (aij) ∈ Mn×p(F) and B = (bij) ∈ Mp×m(F). The product of A and B is the matrix C = A · B ∈ Mn×m(F), where

    cij = Σ (k = 1, …, p) aik bkj.

Notice that the number of columns of the first (left) factor must be the same as the number of rows of the second (right) factor. The resulting matrix has the same number of rows as the left factor and the same number of columns as the right factor. Notice also that, even though square matrices of the same order can be multiplied in either order (swapping the order of the factors), the product is not commutative.

Example

Let A = (4 2 1; 0 3 2) and B = (7 3; 0 5). Then

 • A · B is not defined, as the number of columns of A and the number of rows of B are not the same.
 • On the other hand, B · A = (28 23 13; 0 15 10).
 • B · B = (49 36; 0 25) and Aᵀ · B = (28 12; 14 21; 7 13).
 • Aᵀ · A = (16 8 4; 8 13 8; 4 8 5) and A · Aᵀ = (21 8; 8 13).

octave:1> A=[1,2;4,0;3,-2;5,1]
A =

   1   2
   4   0
   3  -2
   5   1

octave:2> B=[1,2,0;5,-1,3]
B =

   1   2   0
   5  -1   3

octave:3> C=A*B
C =
   11   0   6
    4   8   0
   -7   8  -6
   10   9   3

Definition 10. Let A ∈ Mn(F) be a square matrix. A square matrix B is the inverse of A if A · B = In and B · A = In. A matrix A is called invertible if it has an inverse. If the inverse of a matrix exists, it is unique; hence the inverse of a matrix A can be written as A⁻¹.

Example

    (1 2; 3 4)⁻¹ = (−2 1; 3/2 −1/2)

If A is not square, then we define the pseudo-inverse A⁺ as

    A⁺ = (Aᵀ A)⁻¹ Aᵀ

(provided Aᵀ A is invertible, i.e. the columns of A are linearly independent), and it can easily be shown that A⁺ A = I.

4.1.4 Determinant of a matrix

Definition 11. The determinant is an operation from the set of all square matrices of order n with coefficients in a field F to the field F. That is, for any matrix A, det(A) is an element of F. The determinant can be defined in many ways. The definition which leads to the simplest way of computing determinants is the Laplace expansion (cofactor expansion). We write det(A) = |A|.

Invertible matrices are precisely those matrices with a nonzero determinant.

In the case of square matrices of order two, the determinant can be computed in the following way:

    det (a b; c d) = | a b; c d | = ad − bc

And in the case of order 3 matrices:

    det (a11 a12 a13; a21 a22 a23; a31 a32 a33) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 − a12 a21 a33 − a11 a23 a32.

Let A ∈ Mn(F). The Laplace expansion of det(A) is a way to express it as a sum of n determinants of (n−1) × (n−1) sub-matrices of A. Let's define the i, j–minor of A, Aij ∈ Mn−1(F), as the matrix
obtained by removing the ith row and jth column of A. Fix any index j; then the determinant of A can be defined recursively as

    det(A) = Σ (i = 1, …, n) (−1)^(i+j) aij det(Aij).

The same result is also obtained by fixing a row and summing over the columns. Notice that although there are 2n different possible expansions, the result does not depend on the chosen column/row.

The following is a list of the most important properties of determinants:

 • If a matrix has two equal (or proportional) columns or rows, then the determinant is zero.
 • If A is a matrix such that det(A) = 0, then there is a linear combination of rows (columns) of A equal to zero.
 • If one column/row of a matrix is a linear combination of other columns/rows, then its determinant is zero.
 • det(Aᵀ) = det(A).
 • det(A · B) = det(A) · det(B).

Proposition 1. Let A ∈ Mn. A is invertible if and only if det(A) ≠ 0.

There are different ways of computing the inverse of a matrix.

4.1.5 Rank of a matrix

Definition 12. Let A ∈ Mn×m. A minor of order r of A is a square submatrix of A of order r, that is, a submatrix of A obtained by removing n − r rows and m − r columns.

Definition 13. The rank of a matrix A ∈ Mn×m(F) is r if every minor of order greater than r has determinant zero and there exists a minor of order r with a non-zero determinant.

Example

Let A = (3 1 5; 2 1 4; 5 0 5). Then, as det(A) = 0 but det (3 1; 2 1) = 1 ≠ 0, the rank of A is rk(A) = 2.

octave:4> D=[1,2,2,3;4,0,8,-1;3,-2,6,0;5,1,10,-8]
D =

    1    2    2    3
    4    0    8   -1
    3   -2    6    0
    5    1   10   -8
octave:5> det(D)
ans = 0
octave:9> E=[1,2,3;4,0,-1;3,-2,0]
E =

   1   2   3
   4   0  -1
   3  -2   0

octave:10> det(E)
ans = -32

In the above example, matrix D has rank 3: E is an order-3 minor of D (obtained by removing the fourth row and third column) with a nonzero determinant, while det(D) = 0. The rank is also the maximum number of linearly independent columns or rows of D.

There are several properties of matrices based on the rank:

 • rank(Am×n) ≤ min(m, n).
 • rank(An×n) = n if and only if A is nonsingular (invertible).
 • rank(An×n) = n if and only if det(A) ≠ 0.
 • rank(An×n) < n if and only if A is singular.

4.2 Orthogonal/orthonormal matrices

Consider the matrix A:

    A = ( a11  a12  …  a1m
          a21  a22  …  a2m
           .    .       .
          an1  an2  …  anm )

Let's take the vectors formed by the rows (or columns) of the matrix A:

    u1ᵀ = (a11, a12, …, a1m)
    u2ᵀ = (a21, a22, …, a2m)
    …
    unᵀ = (an1, an2, …, anm)

Let us consider the properties:

 1. uk · uk = 1, i.e. ‖uk‖ = 1, for every k
 2. uj · uk = 0, for every j ≠ k
A is orthonormal if both conditions are satisfied. A is orthogonal if only condition 2 is satisfied. If A is orthonormal, then

    A Aᵀ = Aᵀ A = I

or, what is the same, A⁻¹ = Aᵀ, and for any vector v,

    ‖Av‖ = ‖v‖
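Both conditions can be verified by direct multiplication. Here is a self-contained Python sketch; the 2 × 2 matrix with rows (3/5, 4/5) and (−4/5, 3/5) is our own example, written with exact fractions so that the checks involve no floating-point round-off:

```python
from fractions import Fraction as Fr

def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    # c_ij = sum_k a_ik * b_kj
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[Fr(3, 5), Fr(4, 5)],
     [Fr(-4, 5), Fr(3, 5)]]

# each row has unit norm and distinct rows are orthogonal, so A is orthonormal:
assert mat_mul(A, transpose(A)) == [[1, 0], [0, 1]]
assert mat_mul(transpose(A), A) == [[1, 0], [0, 1]]

# an orthonormal matrix preserves the (squared) norm of any vector:
v = [Fr(3), Fr(4)]
Av = [sum(a * x for a, x in zip(row, v)) for row in A]
assert sum(x * x for x in Av) == sum(x * x for x in v)  # both equal 25
```

Dropping the unit-norm condition (e.g. scaling A by 2) keeps condition 2 but breaks A Aᵀ = I, which is why the identity holds only in the orthonormal case.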
5 Systems of linear equations

Example

Let's consider the reaction of glucose (C6H12O6) oxidation, in which carbon dioxide and water are obtained. Suppose we don't know the stoichiometric coefficients of the reaction, which we will designate by the unknowns x, y, z and t as shown in:

    x C6H12O6 + y O2 → z CO2 + t H2O

The number of atoms of each element must be the same on each side of the reaction, hence we can establish the following relations:

    6x = z              (carbon)
    12x = 2t            (hydrogen)
    6x + 2y = 2z + t    (oxygen)

We will see that this system is compatible indeterminate, meaning that it admits infinitely many solutions. Setting x = 1 we get only one solution, which is (x, y, z, t) = (1, 6, 6, 6).

Definition 14. A system of linear equations is a collection of linear equations

    a11 x1 + a12 x2 + · · · + a1m xm = b1
    a21 x1 + a22 x2 + · · · + a2m xm = b2
    ...
    an1 x1 + an2 x2 + · · · + anm xm = bn

where the numbers aij ∈ R are the coefficients and the bi are the independent (or constant) terms. This system can be represented in the matrix form

    Ax = b,

where A = (aij), 1 ≤ i ≤ n, 1 ≤ j ≤ m, is the matrix of the system, x = (x1, · · · , xm)ᵀ is the variable vector and b = (b1, · · · , bn)ᵀ is the vector of independent terms. A column vector s = (s1, · · · , sm)ᵀ ∈ Rm is a solution of the system if substituting x by s gives a true statement

    As = b.

That is, s = (s1, · · · , sm)ᵀ is a solution of all the equations in the system.

 • Not all systems have a unique solution (e.g. 2x + 4y = 0 admits infinitely many solutions).
 • There are systems with no solutions (e.g. −2x + 4y = 1, x − 2y = 3 has no solutions).

Therefore, we need a criterion to decide whether a given system of linear equations has a solution or not. One of the most used criteria is the Rouché–Frobenius criterion, which is based on the rank of the system's matrix A and the augmented matrix A|b (which is A with the column vector b appended). It says:
 • If rk(A) = rk(A|b), the system is said to be compatible and it admits solutions.
   – If rk(A) = m (the number of variables), the system is said to be determinate and it has one unique solution.
   – Otherwise, the system is said to be indeterminate and it has an infinite number of solutions.
 • If rk(A) ≠ rk(A|b), the system is said to be incompatible and there are no solutions to it.

Once we know that a system has solutions, we have to solve it. Although there are many methods for solving systems of linear equations using computers, the standard method of resolution by hand is the method of Gauss, or Gaussian elimination. It is based on replacing equations in the system by linear combinations of other equations in such a way that the obtained system is equivalent to the original one, in the sense that they share the same solutions, but the new system is upper triangular (and hence can be trivially solved). The method is based on the following result:

Theorem 1. If a system of linear equations is changed to another by one of these transformations:

 1. an equation is swapped with another equation
 2. an equation has both sides multiplied by a nonzero constant
 3. an equation is replaced by the sum of itself and a multiple of another

then the two systems have the same set of solutions.

Example

Given the system

    ( −3  2 −6 )   ( x )   ( 6 )
    (  5  7 −5 ) · ( y ) = ( 6 )
    (  1  4 −2 )   ( z )   ( 8 )

we know that the system is compatible determinate and hence it has one unique solution. Row-reducing the augmented matrix:

    A|b = ( −3  2 −6 |  6 )      (  1  4 −2 |  8 )
          (  5  7 −5 |  6 )  →   (  5  7 −5 |  6 )   swap rows 1 and 3
          (  1  4 −2 |  8 )      ( −3  2 −6 |  6 )

       →  (  1   4  −2 |   8 )
          (  0 −13   5 | −34 )   row2 ← row2 − 5·row1
          (  0  14 −12 |  30 )   row3 ← row3 + 3·row1

       →  (  1   4  −2 |   8 )
          (  0   7  −6 |  15 )   row3 ← row3/2, then swap rows 2 and 3
          (  0 −13   5 | −34 )

       →  (  1   4  −2 |   8 )
          (  0   7  −6 |  15 )
          (  0   0 −43 | −43 )   row3 ← 7·row3 + 13·row2

Hence z = 1, 7y − 6z = 15 ⇒ y = 3, and x + 4y − 2z = x + 12 − 2 = 8 ⇒ x = −2.

Summarizing:
If A is invertible, Ax = b has exactly one solution: x = A⁻¹b.

The following statements are equivalent:

 1. A is invertible
 2. Ax = 0 has only the trivial solution
 3. det(A) ≠ 0
 4. b is in the column space of A:

      ( a11 )      ( a12 )              ( a1n )   ( b1 )
      ( a21 )      ( a22 )              ( a2n )   ( b2 )
      ( ... ) x1 + ( ... ) x2 + · · · + ( ... ) xn = ( ... )
      ( am1 )      ( am2 )              ( amn )   ( bm )

 5. rank(A|b) = rank(A) and rank(A) = n
 6. The column/row vectors of A are linearly independent
 7. The column/row vectors of A span Rⁿ

The system has no solution if rank(A|b) > rank(A). The system has infinitely many solutions if rank(A|b) = rank(A) < n.

5.1 Elementary matrices and inverse

The same Gauss-Jordan method can be applied to obtain the inverse matrix. Let us first define the above transformation steps in a more precise way. Indeed, there are just three types of transformations, and each can be associated with multiplication by a so-called elementary matrix:

 1. Switching two rows of the matrix. For example, switching rows 2 and 3 in a given 3 × m matrix A is equivalent to computing

      ( 1 0 0 )
      ( 0 0 1 ) A = E23 A
      ( 0 1 0 )

 2. Multiplying a row by a given value. For example,

      ( c 0 0 )
      ( 0 1 0 ) A = E1(c) A
      ( 0 0 1 )

 3. Adding to one row the product of another row by a number. This is:

      ( 1 0 0 )
      ( 0 1 c ) A = E23(c) A
      ( 0 0 1 )
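The three elementary matrices can be checked by direct multiplication. A short Python sketch (the matrix A below and the helper `mat_mul` are our own illustrative choices, with c = 7):

```python
def mat_mul(A, B):
    # c_ij = sum_k a_ik * b_kj
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 0],
     [5, -1, 3],
     [0, 4, 2]]

E23 = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]    # type 1: swap rows 2 and 3
E1c = [[7, 0, 0], [0, 1, 0], [0, 0, 1]]    # type 2: multiply row 1 by c = 7
E23c = [[1, 0, 0], [0, 1, 7], [0, 0, 1]]   # type 3: row 2 <- row 2 + 7 * row 3

assert mat_mul(E23, A) == [A[0], A[2], A[1]]                  # rows 2 and 3 swapped
assert mat_mul(E1c, A) == [[7 * x for x in A[0]], A[1], A[2]] # row 1 scaled by 7
assert mat_mul(E23c, A)[1] == [a + 7 * b for a, b in zip(A[1], A[2])]
```

Note that each elementary matrix is exactly the identity matrix with the corresponding row operation applied to it, which is what the next paragraph exploits to build inverses.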
It is easy to see that we can build an inverse matrix making use of elementary transformations. Let A be an invertible n × n matrix, and suppose that a sequence of elementary row operations reduces A to the identity matrix. Then the same sequence of elementary row operations, when applied to the identity matrix, yields A⁻¹. To see why this is the case, let E1, E2, . . . , Ek be a sequence of elementary row operations such that E1 E2 · · · Ek A = In. Then E1 E2 · · · Ek In = A⁻¹, which, in turn, implies A⁻¹ = E1 E2 · · · Ek.
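The procedure just described, reducing the augmented matrix [A | I] to [I | A⁻¹] by elementary row operations, can be sketched in a few lines of Python. The function name is ours; exact rational arithmetic via `fractions` avoids round-off, and the sketch assumes A is invertible:

```python
from fractions import Fraction

def inverse(A):
    """Gauss-Jordan: row-reduce [A | I] to [I | A^-1] (A assumed invertible)."""
    n = len(A)
    # build the augmented matrix [A | I] with exact rational entries
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # find a row with a nonzero pivot and swap it up (type-1 operation)
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # scale the pivot row so the pivot becomes 1 (type-2 operation)
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # subtract multiples of the pivot row from all other rows (type-3)
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

# reproduces the inverse example from section 4.1.3:
assert inverse([[1, 2], [3, 4]]) == [[-2, 1], [Fraction(3, 2), Fraction(-1, 2)]]
```

Each pass of the loop applies one operation of each type, so the whole run is a product of elementary matrices acting on [A | I], exactly as in the argument above.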
6 Vector spaces

Vector spaces are the mathematical structures most often found in Bioinformatics. The real numbers R, the real plane R² and the real space R³ are the most common vector spaces. The idea behind a vector space is that its elements, the vectors, can be added together and also scaled by real numbers.

Definition 15. A vector space over R consists of a set V along with two operations + and · such that:

 1. If v, w ∈ V then their vector sum v + w ∈ V and
    • v + w = w + v (commutativity)
    • v + (w + u) = (v + w) + u for u ∈ V (associativity)
    • there is a zero vector 0 ∈ V such that 0 + v = v
    • ∀v ∈ V ∃w such that w + v = 0 (additive inverse)
 2. If r, s ∈ R (scalars) and v, w ∈ V, then rv ∈ V and
    • (r + s)v = rv + sv
    • r(v + w) = rv + rw
    • (rs)v = r(sv)
    • 1v = v

Observe that we are using two kinds of addition: in (r + s)v = rv + sv, the + on the left-hand side is the real-number addition, while the + on the right-hand side is the vector addition in V.

Example

 • The set R² is a vector space if the operations + and · have their usual meaning:

      (x1, y1)ᵀ + (x2, y2)ᵀ = (x1 + x2, y1 + y2)ᵀ
      r · (x1, y1)ᵀ = (r · x1, r · y1)ᵀ

   The zero vector of this vector space is 0 = (0, 0)ᵀ. In fact Rⁿ is a vector space for any n > 0.

 • P = {(x, y, z)ᵀ ∈ R³ | x + y + z = 0} is a vector space: if v = (x, y, z)ᵀ ∈ P, then for any r ∈ R, r · v = (rx, ry, rz)ᵀ and rx + ry + rz = r · (x + y + z) = 0, hence r · v ∈ P.
- 23. MAT: 2011-31035-T1 MSc Bioinformatics for Health Sciences

• The set with only one element, the zero vector, is a vector space called the trivial vector space: {0}.

• The set of polynomials of degree 3 or less with real coefficients, P3(R) = {a0 + a1 x + a2 x² + a3 x³ | a0, a1, a2, a3 ∈ R}, is a vector space with the usual polynomial sum and product by a constant. In fact, Pn(R) is a vector space for any n > 0.

• The set of solutions of a homogeneous system of linear equations, S = {v ∈ Rᵐ | Av = 0}, A ∈ Mn×m(R), is also a vector space:

v, w ∈ S ⇒ A(v + w) = Av + Aw = 0
v ∈ S, r ∈ R ⇒ A(rv) = rAv = 0

Definition 16. For any vector space V, any subset that is itself a vector space is a subspace of V.

The linear combination of n vectors of the vector space E over K, with n ∈ N and coefficients αi ∈ K (i = 1, ..., n), is defined as

α1 v1 + · · · + αn vn = Σᵢ₌₁ⁿ αi vi

and we will say that v ∈ E is a linear combination of v1, ..., vn ∈ E if there exists a set of coefficients αi ∈ K (i = 1, ..., n) such that v = α1 v1 + · · · + αn vn.

Given n vectors, the subspace formed by all their possible linear combinations is called the subspace "generated" or "spanned" by them, written <v1, ..., vn>. The set of vectors {v1, ..., vn} is called a "spanning set" of <v1, ..., vn>.

Let us imagine that we want to obtain the zero vector as a linear combination of vectors of the set {v1, ..., vn}. If this can only be done with the so-called "trivial solution", that is, with all αi equal to zero, then we will say that {v1, ..., vn} is a set of "linearly independent" vectors, or a "free set". If there exists some way to obtain 0 without all the coefficients being 0, then we will say that {v1, ..., vn} is a set of linearly dependent vectors.

A "basis" is a set of vectors that spans the subspace and at the same time is linearly independent. That is, B = {v1, ..., vn} is a basis of the subspace V if:

• each vector of V is a linear combination of v1, ..., vn, and
• the vectors v1, ..., vn are linearly independent.

If so, there will exist an ordered list of scalars such that v = α1 v1 + · · · + αn vn. Thus, once we know the vectors of the basis we know the whole subspace.

Algebra 23
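As a sanity check of the two basis conditions, here is a minimal pure-Python sketch (the helper names `rank` and `is_basis` are our own, not from the notes; exact arithmetic via `fractions.Fraction` avoids floating-point issues): a set of n vectors is a basis of an n-dimensional space exactly when the matrix they form has full rank.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (given as a list of rows) by Gaussian elimination,
    using exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # index of the next pivot row
    for col in range(len(m[0]) if m else 0):
        # find a row at or below r with a nonzero entry in this column
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def is_basis(vectors, dim):
    """n vectors form a basis of R^dim iff n == dim and they are
    linearly independent (i.e. the rank equals dim)."""
    return len(vectors) == dim and rank(vectors) == dim
```

For instance, `{(1,1), (-1,1)}` is a basis of R² while `{(1,1), (2,2)}` is not, since the second set is linearly dependent.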
Example
Show that in R⁴ the set of vectors whose components satisfy x1 + x2 + x3 + x4 = 0 forms a vector subspace of dimension 3. Find a basis.

We can answer both questions at once; we only need to solve the system of equations describing the vector subspace. Thus, the solutions of x1 + x2 + x3 + x4 = 0 have the form:

x1 = −x2 − x3 − x4 = −a − b − c
x2 = a
x3 = b
x4 = c

or, equivalently:

(x1, x2, x3, x4)ᵀ = a (−1, 1, 0, 0)ᵀ + b (−1, 0, 1, 0)ᵀ + c (−1, 0, 0, 1)ᵀ

This is to say, the vector subspace is spanned by these three vectors; since they are linearly independent, they form a basis and the dimension is 3.

Exercise 1. Find out whether, in the vector space P2[x] of the polynomials of degree less than or equal to 2 over R, the following vectors form a basis: u1 = 1 + 2x, u2 = −1 − 2x², u3 = −2x + 2x².

Exercise 2. Let P3[x] be the vector space of the polynomials of degree 3 or less, with real coefficients and real variable, over the field R. Let G = {(x² + x + 2), (x³ + 3x)} be a set of vectors of P3[x]. Find a basis of P3[x] by completing the set G.

Lemma 1. For any nonempty subset W of a vector space V, under the inherited operations the following statements are equivalent:
1. W is a subspace of V.
2. W is closed under linear combinations of pairs of vectors: ∀v1, v2 ∈ W and r1, r2 ∈ R, r1 v1 + r2 v2 ∈ W.
3. W is closed under linear combinations of any number of vectors: ∀v1, ..., vn ∈ W and r1, ..., rn ∈ R, r1 v1 + · · · + rn vn ∈ W.
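The worked example above can be verified mechanically. A small sketch (the names `basis`, `in_subspace` and `combo` are illustrative, not from the notes): each candidate basis vector must satisfy the defining equation, and because coordinates 2 to 4 of a combination reproduce its coefficients, only the trivial combination gives the zero vector.

```python
# Candidate basis of {x in R^4 : x1 + x2 + x3 + x4 = 0} from the worked example.
basis = [(-1, 1, 0, 0), (-1, 0, 1, 0), (-1, 0, 0, 1)]

def in_subspace(v):
    """Membership test: the components must sum to zero."""
    return sum(v) == 0

def combo(a, b, c):
    """The general element a*v1 + b*v2 + c*v3 of the span."""
    return tuple(a * x + b * y + c * z for x, y, z in zip(*basis))
```

Note that `combo(a, b, c)[1:] == (a, b, c)`, so the zero vector forces a = b = c = 0: the three vectors are linearly independent.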
This last result tells us that to assess whether a subset of a known vector space is also a vector space (a subspace), we do not have to check everything, just that it is closed under linear combinations.

Definition 17. The span (or linear closure) of a nonempty subset W of a vector space V is the set of all linear combinations of vectors from W: [W] = {c1 w1 + · · · + cn wn | w1, ..., wn ∈ W, c1, ..., cn ∈ R}.

Lemma 2. In a vector space, the span of any subset is a subspace (i.e. the span is closed under linear combinations). The converse also holds: any subspace is the span of some set.

Example
• The span of one vector v ∈ V is [{v}] = {r·v | r ∈ R}.
• Any two linearly independent vectors span R². For instance, <(1, 1)ᵀ, (−1, 1)ᵀ> = R²: any vector (x, y)ᵀ can be written as ((x+y)/2)·(1, 1)ᵀ + ((y−x)/2)·(−1, 1)ᵀ.

Exercise 3. Check that the set of vectors F = {(x, y, z) ∈ R³ | x + y + z = 9} is not a vector subspace of R³.

Definition 18. If in a vector space there exists a basis formed by n elements and m > n, then we can assure that any set of m vectors is linearly dependent. In any finite-dimensional vector space, all of the bases have the same number of elements. The "dimension" of a vector space is the number of vectors in any of its bases.

As a consequence, for a vector space E of dimension n:
1. n linearly independent vectors form a basis.
2. n spanning vectors form a basis.
3. If V is a subspace of E (V ≠ 0), then V has a basis, dim V ≤ n, and the equality holds if and only if V = E.
4. If r < n and v1, ..., vr are linearly independent vectors, then there exist n − r vectors vr+1, ..., vn such that {v1, ..., vr, vr+1, ..., vn} is a basis of E.

Example
We consider, in the vector space R³ over R, two subspaces E1 = <(1, 1, 1), (1, −1, 1)> and E2 = <(1, 2, 0), (3, 1, 3)>.
• Find the set of vectors that belong to E1 ∩ E2.
• Check if it is a subspace of R³.
• What is the dimension of the subspace E1 ∩ E2?

The solution is immediate if we consider a geometrical view. In R³ a vector subspace of dimension 2 is a plane through the origin, and two such planes can intersect in a line or be coincident; in both cases the intersection is again a vector subspace. Both planes can be found in an easy way, yielding E1 = {(x, y, z) ∈ R³ | x − z = 0} and E2 = {(x, y, z) ∈ R³ | 2x − y − (5/3)z = 0}. Joining these two equations we see that they are linearly independent, and in this way we have three unknown variables for two equations: one degree of freedom, and thus we are describing a line in R³ (the intersection has dimension 1).

Example
Let us consider the subspace of R⁴ defined as:

F = {(x1, x2, x3, x4) ∈ R⁴ | x3 = 2x1 + 3x2; x4 = 2x2 − 3x1}    (1)

Find a basis for the subspace and complete it until obtaining a basis for R⁴.

The equations defining the subspace can also be written as:

x1 = x1
x2 = x2
x3 = 2x1 + 3x2
x4 = 2x2 − 3x1

or, equivalently:

(x1, x2, x3, x4)ᵀ = a (1, 0, 2, −3)ᵀ + b (0, 1, 3, 2)ᵀ

Thus, these two vectors form a basis of the subspace, which has dimension 2. To complete the set of vectors until having a basis of R⁴, we only need to choose two vectors that, along with the vectors we already have, form a linearly independent set. We could try, for example, the vectors (1, 0, 0, 0)ᵀ and (0, 1, 0, 0)ᵀ: Gaussian elimination on the matrix whose rows are these four vectors leaves four nonzero pivots (equivalently, its determinant is ±13 ≠ 0). Thus the 4 chosen vectors are linearly independent and therefore form a basis of R⁴.
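The completion argument can be checked by computing the determinant of the matrix formed by the four vectors. A hedged sketch (the recursive `det` helper is our own, adequate for a 4×4 matrix):

```python
def det(m):
    """Determinant by Laplace expansion along the first row
    (fine for small matrices such as this 4x4 case)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, a in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * a * det(minor)
    return total

vectors = [[1, 0, 2, -3],   # basis of F
           [0, 1, 3, 2],
           [1, 0, 0, 0],    # completion candidates
           [0, 1, 0, 0]]
```

A nonzero determinant certifies that the four rows are linearly independent, hence a basis of R⁴.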
6.1 Basis change
The representation of a vector as a column of components depends, obviously, on the basis: for each basis the representation will be different. How do we relate these representations?

Given two different bases of a vector space, B = {v1, ..., vn} and B′ = {w1, ..., wn}, suppose we know the representation of a vector v according to the first of the bases, RepB(v) = X = (xi)1≤i≤n ∈ Kⁿ, this is:

v = x1 v1 + x2 v2 + · · · + xn vn    (2)

To obtain the representation of v according to the basis B′ it is enough to know the representations of the vectors vi in B′:

v1 = a11 w1 + a21 w2 + · · · + an1 wn
v2 = a12 w1 + a22 w2 + · · · + an2 wn
...
vn = a1n w1 + a2n w2 + · · · + ann wn

By replacing these representations in Eq. 2 and rearranging we get:

v = (a11 x1 + a12 x2 + · · · + a1n xn) w1
  + (a21 x1 + a22 x2 + · · · + a2n xn) w2
  ...
  + (an1 x1 + an2 x2 + · · · + ann xn) wn

Writing this in matrix form, and calling P the matrix whose columns represent the vectors of the basis B in the basis B′, the representation Y = RepB′(v), this is, v = y1 w1 + y2 w2 + · · · + yn wn, is given by:

Y = P X    (3)

or, what is the same, RepB′(v) = P · RepB(v). The matrix P = (RepB′(v1), RepB′(v2), ..., RepB′(vn)) is called the "matrix of basis change". This matrix can be inverted, and P⁻¹ is the matrix for changing from basis B′ into B.

Example
Let B = {u1, u2, u3} be a basis of R³ and B′ = {v1, v2, v3} another basis of the same space, defined as:

v1 = u1 − u3
v2 = u1 + 2u2 + u3
v3 = u2 + 2u3
If w is a vector of R³ with coordinates (2, 1, −1) with respect to the basis B′, calculate the coordinates of w with respect to the basis B.

The above equations directly yield the transformation matrix, whose columns are the coordinates of v1, v2, v3 in B:

RepB(w) = [1 1 0; 0 2 1; −1 1 2] · RepB′(w) = [1 1 0; 0 2 1; −1 1 2] · (2, 1, −1)ᵀ = (3, 1, −3)ᵀ

Example
Given the vector (1, 2)ᵀ expressed in the basis B = {(0, 1)ᵀ, (1, 1)ᵀ}, what are its coordinates in the basis B′ = {(−1, 0)ᵀ, (0, 2)ᵀ}?

We only need to find the matrix for the basis transformation. This is built with the representations of the vectors of the old basis with respect to the vectors of the new basis. Thus:

(0, 1)ᵀ = α1 (−1, 0)ᵀ + β1 (0, 2)ᵀ, from which we obtain (α1, β1) = (0, 1/2),

and

(1, 1)ᵀ = α2 (−1, 0)ᵀ + β2 (0, 2)ᵀ, from which we obtain (α2, β2) = (−1, 1/2).

Finally,

vB′ = [0 −1; 1/2 1/2] · vB = [0 −1; 1/2 1/2] · (1, 2)ᵀ = (−2, 3/2)ᵀ
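Both basis-change examples reduce to a matrix-vector product Y = PX. A small pure-Python check (the helper name `matvec` is ours; `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction

def matvec(P, x):
    """Multiply a matrix P (list of rows) by a column vector x."""
    return [sum(Fraction(a) * Fraction(b) for a, b in zip(row, x)) for row in P]

# First example: columns of P are the coordinates of v1, v2, v3 in the basis B.
P = [[1, 1, 0],
     [0, 2, 1],
     [-1, 1, 2]]
w_Bprime = [2, 1, -1]
w_B = matvec(P, w_Bprime)          # coordinates of w in B

# Second example: old basis {(0,1),(1,1)} expressed in the new basis {(-1,0),(0,2)}.
Q = [[0, -1],
     [Fraction(1, 2), Fraction(1, 2)]]
v_B = [1, 2]
v_Bprime = matvec(Q, v_B)          # coordinates in the new basis
```

Running this reproduces the two results above: (3, 1, −3) for the first example and (−2, 3/2) for the second.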
6.2 The vector space L(V, W)
If V and W are two vector spaces over the same field K, a linear map F: V → W is a map that respects the linear operations:

∀v, w ∈ V, F(v + w) = F(v) + F(w)
∀α ∈ K, ∀v ∈ V, F(αv) = αF(v)

or, equivalently:

∀α1, α2 ∈ K, ∀v1, v2 ∈ V, F(α1 v1 + α2 v2) = α1 F(v1) + α2 F(v2)

Let us consider, for example, the matrices with m rows and n columns, A ∈ Mm×n(K). These matrices can be used to represent a linear map of Kⁿ into Kᵐ:

FA: Kⁿ → Kᵐ, FA(X) = AX, ∀X ∈ Kⁿ

This map is linear because it follows the above conditions. If we define a second linear map GA, analogous to FA, it is simple to prove that the space formed by all possible linear transforms, L(V, W), has the structure of a vector space.

Exercise 1
Discuss whether these transforms are linear or not:

F: R³ → R; F(X) = 2x − 3y + 4z
G: R² → R³; G(X) = (x + 1, 2y, x + y)

6.3 Rangespace and nullspace (kernel)
The rangespace of a linear transformation is the set of images of all the vectors of V, F(V):

Im F = {w ∈ W | ∃v ∈ V with F(v) = w}

The dimension of the rangespace is the map's rank, rg F = dim Im F. In any linear transformation, the image of any subspace of the starting space is also a subspace of the arrival space; in particular, the whole rangespace of the linear transform is a vector subspace. The nullspace or kernel of a linear map is the inverse image of the zero vector of the arrival space:

Nuc F = Ker F = {v ∈ V | F(v) = 0}

Both the rangespace and the kernel are vector subspaces.

A linear transformation is injective if and only if Nuc F = {0}. If F: V → W is linear with Nuc F = {0} and v1, ..., vn are linearly independent vectors of V, then F(v1), ..., F(vn) are also linearly independent. Thus, for an injective linear transform, the image of a basis of V is a basis of Im F. Some definitions:
• homomorphism is equivalent to linear transform.
• epimorphism is a linear transform that is exhaustive (surjective) onto W.
• isomorphism is a one-to-one linear transform: both injective and exhaustive, i.e. bijective.
• endomorphism is a linear transform of a vector space into itself (also sometimes called an operator).
• automorphisms are both endomorphisms and isomorphisms.

Rephrasing the previous statements: if F is injective, it is an isomorphism between V and Im F. If V is a vector space of finite dimension and F: V → W is linear, then

dim V = dim Nuc F + dim Im F

dim Nuc F is sometimes called the nullity of F and dim Im F its rank.

Exercise 2
Let F: R⁵ → R³ be the linear transform defined as F(X) = (x + 2y + z − 3s + 4t, 2x + 5y + 4z − 5s + 5t, x + 4y + 5z − s − 2t). Find a basis and the dimension of the rangespace of F.

Exercise 3
Let F: R³ → R³ be the linear transform that has the associated matrix, in the canonical bases,

F = [1 2 5; 3 5 13; −2 −1 −4]

Find a basis and the dimension of both the rangespace and the kernel.

Exercise 4
Find the kernel of the homomorphism H: M2×2 → P3 defined by:

(a b; c d) → (a + b + 2d) + 0x + cx² + cx³

Example
The matrix in the example of Sec. 6.1 changes a vector from its representation in the basis B to its representation in the basis B′. Show that it is an automorphism. What would be the transformation matrix from B′ to B? It can be shown in a way analogous to what we did in Sec. 6.3. In this case the kernel is trivial: obviously, the only vector that gets transformed into 0 when changing basis is 0 itself. A basis transformation is represented by a square matrix. If the kernel is {0} and the matrix is
square, this means that the rank of the associated matrix is 3 in the present case (check it). Or, equivalently, that the determinant of the matrix is different from zero (check it). Applying

dim V = dim Nuc F + dim Im F

we see that the dimension of the origin space is the same as the dimension of the image, which is at the same time the dimension of the whole final space: 3. Thus, we talk of an automorphism.

Using simple matrix algebra:

RepB′(w) = A · RepB(w)
A⁻¹ · RepB′(w) = (A⁻¹ · A) · RepB(w) = I · RepB(w)
A⁻¹ · RepB′(w) = RepB(w)

Thus, the matrix we are looking for is the inverse. This will always exist, as a basis transformation is always an automorphism.

Example
Given the linear transform (homomorphism) T: R³ → R² defined by:

T(x1, x2, x3) = (x1 + x2, x2 + x3)

1. find the associated matrix
2. find the kernel of the transformation
3. is it an isomorphism? is it an epimorphism?

1. The associated matrix A is given, in general, by the images of the vectors of the canonical basis of the starting space:

T(1, 0, 0) = (1, 0)
T(0, 1, 0) = (1, 1)
T(0, 0, 1) = (0, 1)

Thus:

A = [1 1 0; 0 1 1]

and if v ∈ R³ and w ∈ R², we can represent the transformation by w = A · v.

2. The kernel of the transformation is the vector subspace of the origin space whose image is the null vector:

0 = A · v, i.e. (0, 0)ᵀ = [1 1 0; 0 1 1] · (x, y, z)ᵀ
We solve the system:

0 = x + y
0 = y + z

getting:

x = −y = −a
y = a
z = −y = −a

or, equivalently:

(x, y, z)ᵀ = a (−1, 1, −1)ᵀ

Thus the dimension of the kernel is 1 and the vector (−1, 1, −1) forms a basis of it.

3. It is not injective, because the dimension of the kernel is not zero, so it is not an isomorphism. Also, if V is a vector space with finite dimension and F: V → W is linear, then

dim V = dim Nuc F + dim Im F

In our case 3 = 1 + dim Im T, thus dim Im T = 2 = dim W, because W is R². Thus the transformation is exhaustive: an epimorphism.

Example
Is the map F: V → W, to which the following matrix is associated,

A = [1 −2 −4; −2 0 4; 1 3 1]

bijective?

The matrix determinant is 0. Thus the three columns, corresponding to the images of the vectors that form the canonical basis of V, are not linearly independent. The first two columns are linearly independent, for example, and thus dim(Im F) = 2. As dim(V) = 3, then dim(Ker F) = 1. The transform is not injective, as the dimension of the kernel is not zero. The transform is not exhaustive, as the dimension of the image is different from the dimension of W. The transform is not bijective for either of these two reasons.

6.4 Composition and inverse
If F: V → W and G: W → U are two linear transforms, then G ∘ F: V → U is linear. This composition is associative, but not commutative.
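Returning to the first example above, the claimed kernel basis vector of T(x1, x2, x3) = (x1 + x2, x2 + x3) is easy to verify numerically (a minimal sketch, with illustrative names of our own):

```python
def T(v):
    """The homomorphism T(x1, x2, x3) = (x1 + x2, x2 + x3) from the example."""
    x1, x2, x3 = v
    return (x1 + x2, x2 + x3)

# Candidate basis vector of Ker T found by solving the system above;
# every scalar multiple of it must also map to zero.
kernel_dir = (-1, 1, -1)
```

Since dim Ker T = 1, rank-nullity gives dim Im T = 3 − 1 = 2, consistent with T being an epimorphism onto R².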
A linear transform F: V → W is invertible if there exists a linear map G: W → V such that G ∘ F = idV and F ∘ G = idW; we call G the "inverse" of F. Automorphisms are one-to-one linear transforms, and in this case F⁻¹ is also an automorphism.

6.5 Linear transforms and matrices
We can express linear transforms as matrices. This is to say, once we have set the bases for the starting and arrival spaces, we can establish a one-to-one correspondence between linear transforms and matrices, which has advantages because we know how to do matrix operations. This correspondence is an isomorphism, and the matrix corresponding to a linear transform is formed by the images of the basis of V. For example:

Let us consider a linear transform h: R² → R³, and let the bases of V and W be, respectively:

B = <(2, 0)ᵀ, (1, 4)ᵀ>
D = <(1, 0, 0)ᵀ, (0, −2, 0)ᵀ, (1, 0, 1)ᵀ>

The linear transform is defined by its action on the basis vectors of V:

b1 = (2, 0)ᵀ → h(b1) = (1, 1, 1)ᵀ
b2 = (1, 4)ᵀ → h(b2) = (1, 2, 0)ᵀ

In order to evaluate how this linear transform affects any vector of its domain, first we need to express h(b1) and h(b2) in the basis of the arrival space:

(1, 1, 1)ᵀ = 0·(1, 0, 0)ᵀ − (1/2)·(0, −2, 0)ᵀ + 1·(1, 0, 1)ᵀ, so RepD(h(b1)) = (0, −1/2, 1)D

(1, 2, 0)ᵀ = 1·(1, 0, 0)ᵀ − 1·(0, −2, 0)ᵀ + 0·(1, 0, 1)ᵀ, so RepD(h(b2)) = (1, −1, 0)D

Now, for each member of the starting space, we can express its image under h in terms of the images of the basis vectors of B:

h(v) = h(c1 · b1 + c2 · b2)
     = c1 · h(b1) + c2 · h(b2)
     = (0c1 + 1c2)·(1, 0, 0)ᵀ + (−(1/2)c1 − 1c2)·(0, −2, 0)ᵀ + (1c1 + 0c2)·(1, 0, 1)ᵀ
Thus, with RepB(v) = (c1, c2)B, we have RepD(h(v)) = (0c1 + 1c2, −(1/2)c1 − 1c2, 1c1 + 0c2)D.

For example, with RepB((4, 8)ᵀ) = (1, 2)B, we get RepD(h((4, 8)ᵀ)) = (2, −5/2, 1)D.

We can express these calculations in matrix form:

[0 1; −1/2 −1; 1 0]B,D · (c1, c2)ᵀB = (0c1 + 1c2, −(1/2)c1 − 1c2, 1c1 + 0c2)ᵀD

The interesting part of this expression is that the matrix representing a linear transform is generated, simply, by putting in columns the images of the vectors of the domain basis as a function of the vectors of the basis of the image. In a more formal way: let us suppose that V and W are vector spaces of dimensions n and m with bases B and D, and that h: V → W is a linear transform connecting them. If

RepD(h(b1)) = (h11, h21, ..., hm1)ᵀD, ..., RepD(h(bn)) = (h1n, h2n, ..., hmn)ᵀD

then

RepB,D(h) = [h11 h12 ... h1n; h21 h22 ... h2n; ...; hm1 hm2 ... hmn]B,D

is the matrix representation of the transformation.

Exercise 5
Find the matrix of the linear transform tθ: R² → R² which transforms the vectors by rotating them clockwise through any given angle θ.

Every matrix represents a linear transform.
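The h: R² → R³ example above can be replayed numerically: applying the matrix RepB,D(h) to RepB(v) and then expanding the result in the basis D should reproduce h(v) in standard coordinates. A hedged sketch (helper names are ours):

```python
from fractions import Fraction

# Columns are Rep_D(h(b1)) and Rep_D(h(b2)) from the worked example.
H = [[0, 1],
     [Fraction(-1, 2), -1],
     [1, 0]]

def rep_D_of_h(c):
    """Apply the matrix representation to Rep_B(v) = (c1, c2)."""
    return [sum(a * b for a, b in zip(row, c)) for row in H]

def from_D(y):
    """Recover standard coordinates from a representation in the basis D."""
    D = [(1, 0, 0), (0, -2, 0), (1, 0, 1)]
    return tuple(sum(yi * di[k] for yi, di in zip(y, D)) for k in range(3))
```

For RepB(v) = (1, 2) this yields (2, −5/2, 1)D, which expands to (3, 5, 1) = 1·h(b1) + 2·h(b2) in standard coordinates, and RepB(v) = (1, 0) recovers h(b1) = (1, 1, 1).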
6.6 Composition and matrix product
We already know how to change bases and we know how to represent linear transforms by means of matrices. Now we want to realize the following scheme:

V_B --h--> W_D, with matrix H
 (id)        (id)
V_B̂ --h--> W_D̂, with matrix Ĥ

Or, what is identical in a matrix representation:

Ĥ = RepD,D̂(id) · H · RepB̂,B(id)    (*)

For example, the matrix

T = [cos(π/6) −sin(π/6); sin(π/6) cos(π/6)] = [√3/2 −1/2; 1/2 √3/2]

represents, with respect to E2, E2 (the standard basis of R² in both domain and codomain), the linear transformation t: R² → R² that rotates the vectors π/6 radians anticlockwise. We can transform this representation with respect to E2, E2 to another one with respect to

B̂ = <(1, 1)ᵀ, (0, 2)ᵀ>    D̂ = <(−1, 0)ᵀ, (2, 3)ᵀ>

using what we just learnt:

T̂ = RepE2,D̂(id) · T · RepB̂,E2(id)

RepE2,D̂(id) can be written as the inverse of RepD̂,E2(id):

RepB̂,D̂(t) = [−1 2; 0 3]⁻¹ · [√3/2 −1/2; 1/2 √3/2] · [1 0; 1 2]
           = [(5 − √3)/6  (3 + 2√3)/3; (1 + √3)/6  √3/3]

Exercise 6
Check that the effect of the new matrix is the same as that of the original matrix, in the new bases.
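As one way to approach Exercise 6 numerically, we can build T̂ = D̂⁻¹ · T · B̂ with floating-point arithmetic and compare it to the closed-form entries given above (a sketch with our own helper names, not a prescribed solution):

```python
import math

c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
T = [[c, -s], [s, c]]          # rotation by pi/6 in the standard basis
Bhat = [[1, 0], [1, 2]]        # columns are the vectors of B-hat
Dhat = [[-1, 2], [0, 3]]       # columns are the vectors of D-hat

def matmul(A, B):
    """Product of two small matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (cc, d) = M
    det = a * d - b * cc
    return [[d / det, -b / det], [-cc / det, a / det]]

That = matmul(inv2(Dhat), matmul(T, Bhat))
```

Up to floating-point rounding, `That` matches [(5 − √3)/6, (3 + 2√3)/3; (1 + √3)/6, √3/3].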
7 Projection
7.1 Orthogonal Projection Into a Line
We first consider orthogonal projection into a line. To orthogonally project a vector v into a line ℓ, darken the point on the line that someone standing on that line and looking straight up or down (from that person's point of view) sees at v. The picture shows someone who has walked out on the line until the tip of v is straight overhead.

That is, where the line is described as the span of some nonzero vector s, ℓ = {c·s | c ∈ R}, the person has walked out to find the coefficient cp with the property that v − cp·s is orthogonal to cp·s. We can solve for this coefficient by noting that because v − cp·s is orthogonal to a scalar multiple of s it must be orthogonal to s itself, and then the consequent fact that the dot product (v − cp·s)·s is zero gives cp = (v·s)/(s·s).

The orthogonal projection of v into the line spanned by a nonzero s is this vector:

proj[s](v) = ((v·s)/(s·s)) · s    (4)

The wording of that definition says 'spanned by s' instead of the more formal 'the span of the set {s}'. This casual first phrase is common.

Exercise 7
In R³, the orthogonal projection of a general vector

(x, y, z)ᵀ    (5)
into the y-axis is

(((x, y, z)·(0, 1, 0)) / ((0, 1, 0)·(0, 1, 0))) · (0, 1, 0)ᵀ = (0, y, 0)ᵀ    (6)

which matches our intuitive expectation.

The picture above with the stick figure walking out on the line until v's tip is overhead is one way to think of the orthogonal projection of a vector into a line. We finish this subsection with two other ways.

Another way to think of the picture that precedes the definition is that it shows v as decomposed into two parts: the part along the line (in the illustration, the part along the railroad tracks, p) and the part that is orthogonal to the line (shown there lying on the north-south axis). These two are "not interacting" or "independent", in the sense that the east-west car is not at all affected by the north-south part of the wind. So the orthogonal projection of v into the line spanned by s can be thought of as the part of v that lies in the direction of s.

This subsection has developed a natural projection map: orthogonal projection into a line. As suggested by the examples, it is often called for in applications. The next subsection shows how the definition of orthogonal projection into a line gives us a way to calculate especially convenient bases for vector spaces, again something that is common in applications. The final subsection completely generalizes projection, orthogonal or not, into any subspace at all.

7.2 Gram-Schmidt Orthogonalization
The prior subsection suggests that projecting into the line spanned by s decomposes a vector v into two parts,

v = proj[s](v) + (v − proj[s](v)),

that are orthogonal and so are "not interacting". We will now develop that suggestion.

Vectors v1, ..., vk ∈ Rⁿ are mutually orthogonal when any two are orthogonal: if i ≠ j then the dot product vi·vj is zero.

If the vectors in a set {v1, ..., vk} ⊂ Rⁿ are mutually orthogonal and nonzero, then that set is linearly independent.
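Equation (4) translates directly into code. A minimal sketch (function names are ours; `Fraction` keeps the projection coefficient exact):

```python
from fractions import Fraction

def dot(u, v):
    """Dot product of two same-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def proj(v, s):
    """Orthogonal projection of v onto the line spanned by nonzero s (Eq. 4)."""
    cp = Fraction(dot(v, s), dot(s, s))
    return tuple(cp * x for x in s)
```

For instance, projecting (1, 2, 3) onto the y-axis direction (0, 1, 0) gives (0, 2, 0), and the residual v − proj(v, s) is orthogonal to s, as the derivation requires.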
Exercise 8
The members β1 = (4, 2)ᵀ and β2 = (1, 3)ᵀ of this basis for R²,

B = <(4, 2)ᵀ, (1, 3)ᵀ>,

are not orthogonal. However, we can derive from B a new basis for the same space that does have mutually orthogonal members. For the first member of the new basis we simply use β1:

κ1 = (4, 2)ᵀ    (7)

For the second member of the new basis, we take away from β2 its part in the direction of κ1:

κ2 = (1, 3)ᵀ − proj[κ1]((1, 3)ᵀ) = (1, 3)ᵀ − (2, 1)ᵀ = (−1, 2)ᵀ

which leaves the part of β2, κ2 pictured above, that is orthogonal to κ1 (it is orthogonal by the definition of the projection into the span of κ1). Note that, by the corollary, {κ1, κ2} is a basis for R².

An orthogonal basis for a vector space is a basis of mutually orthogonal vectors.

Exercise 9
To turn this basis for R³,

<(1, 1, 1)ᵀ, (0, 2, 1)ᵀ, (1, 0, 3)ᵀ>,    (8)

into an orthogonal basis, we take the first vector as it is given:

κ1 = (1, 1, 1)ᵀ    (9)
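The two orthogonalization examples follow the same recipe: subtract from each new vector its projections onto the previously built κ's. A sketch of that loop (plain Gram-Schmidt without normalization; helper names are ours):

```python
from fractions import Fraction

def dot(u, v):
    """Dot product of two same-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Orthogonalize a list of linearly independent vectors.
    Each vector has its projections onto the earlier kappas removed."""
    out = []
    for b in basis:
        v = [Fraction(x) for x in b]
        for k in out:
            c = Fraction(dot(v, k), dot(k, k))
            v = [vi - c * ki for vi, ki in zip(v, k)]
        out.append(v)
    return out
```

On the R² basis of Exercise 8 this reproduces κ1 = (4, 2) and κ2 = (−1, 2); on the R³ basis of Exercise 9 the three resulting vectors are pairwise orthogonal.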
