
Term project


This math project, which served as my final exam grade in the class, was broken up into three parts completed throughout the semester. The project required proofs, MATLAB, and LaTeX to produce a professional document presenting our work. In the project, I proved key properties of numerical methods for approximation, worked through singular value decomposition problems, and performed reduction of large-scale systems of ordinary differential equations. Much of the project required MATLAB programming and computation, and the final report was typeset in LaTeX.


Math 2984H Term Project
Implicit Solvers and SVD with an Application in Model Reduction

Mark Brandao and Susanna Mostaghim

May 10, 2013

Abstract

The goal of this project is to employ our Linear Algebra, Differential Equations, and Matlab skills for a specific application in the area of Model Reduction. The main goal at the end is to reduce a large set of differential equations to a much smaller one in such a way that the smaller number of ODEs gives almost the same information as the original large set. The main tools in achieving this goal are the numerical solution of ODEs and an important matrix decomposition called the Singular Value Decomposition (SVD).

1 Part I: Implicit Solvers for Differential Equations

1.1 Problem 1.1

There are numerous numerical solution techniques, including implicit solvers, for systems of differential equations of the form $y' = Ay + bu(t)$, $y(t_0) = y_0$, but one of the most fundamental methods is Euler's method, a direct method for solving differential equations. Applying Euler's method to an equation of this form yields
$$y_{k+1} = y_k + h\,(Ay_k + bu_k), \quad \text{for } k = 0, 1, 2, \ldots$$
When Euler's method is applied to a scalar equation of the following form (let us assume $\lambda < 0$),
$$y' = \lambda y, \quad y(0) = y_0, \qquad (1.1)$$
it leads to a closed-form expression for each iterate $y_k$ in terms of $y_0$ alone:
$$y_1 = y_0 + h\lambda y_0 = y_0(1 + h\lambda)$$
$$\begin{aligned}
y_2 &= y_1 + h\lambda y_1 = (y_0 + h\lambda y_0) + h\lambda(y_0 + h\lambda y_0) \\
    &= y_0\left(1 + 2h\lambda + (h\lambda)^2\right) = y_0(1 + h\lambda)^2 \\
y_3 &= y_2 + h\lambda y_2 = y_0(1 + h\lambda)^2 + h\lambda\, y_0(1 + h\lambda)^2 \\
    &= y_0(1 + h\lambda)^2(1 + h\lambda) = y_0(1 + h\lambda)^3
\end{aligned}$$
So for each $y_k$ in the iterative Euler method, the result appears to be
$$y_k = y_0(1 + h\lambda)^k.$$
Indeed, it can be shown that this holds. For the base case,
$$y_1 = y_0(1 + h\lambda)^1 = y_0 + y_0 h\lambda,$$
which is exactly how we defined the original point $y_1$ (the first-step approximation is the zeroth step plus the step size $h$ times the differential equation evaluated at the zeroth step), so the formula holds for the base case. Assume that the equation holds for $y_n$:
$$y_n = y_0(1 + h\lambda)^n.$$
We will show that, assuming this is true, the equation holds for $y_{n+1}$ as well:
$$y_{n+1} = y_n + y_n h\lambda = y_0(1 + h\lambda)^n + y_0(1 + h\lambda)^n h\lambda = y_0(1 + h\lambda)^n(1 + h\lambda) = y_0(1 + h\lambda)^{n+1}.$$
Because this holds as well, by induction the equation $y_k = y_0(1 + h\lambda)^k$ holds for all $k \geq 1$.
Because the actual solution of $y' = \lambda y$ is $y(t) = y_0 e^{\lambda t}$ and $\lambda < 0$, we know that the solution decays to zero. So if Euler's method is to yield an accurate approximation, it is essential that the iterate $y_k$ decay to zero ($y_k \to 0$ as $k \to \infty$). Examining the ratio of successive iterates,
$$\lim_{k\to\infty}\left|\frac{y_0(1 + h\lambda)^{k+1}}{y_0(1 + h\lambda)^{k}}\right| = \lim_{k\to\infty}\left|\frac{(1 + h\lambda)^{k}(1 + h\lambda)}{(1 + h\lambda)^{k}}\right| = |1 + h\lambda|.$$
Thus, if $y_k = y_0(1 + h\lambda)^k \to 0$, then
$$|1 + h\lambda| < 1 \;\Rightarrow\; -1 < 1 + h\lambda < 1 \;\Rightarrow\; -2 < h\lambda < 0.$$
Since $\lambda < 0$, we can infer that, for the sequence to converge,
$$0 < h|\lambda| < 2.$$
Conversely, if $0 < h|\lambda| < 2$, then $-2 < h\lambda < 0$, so $-1 < 1 + h\lambda < 1$ and $|1 + h\lambda| < 1$, which is precisely the limiting ratio above; hence the iterate $y_0(1 + h\lambda)^k \to 0$. With this condition in place, Euler's method yields a usable approximation of the original function. However, if $h|\lambda| > 2$, the sequence diverges, and the Euler's method approximation is defective.
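As a quick numerical illustration of this bound (a minimal sketch written for this discussion, not part of the report's appendices; the values of $\lambda$, $h$, and the step count are arbitrary choices):

% Sketch: Euler iterates y_k = y0*(1 + h*lambda)^k for y' = lambda*y.
% h*|lambda| = 1.5 stays inside the stability bound; 2.5 violates it.
lambda = -100; y0 = 1; N = 50;
for h = [0.015, 0.025]
    y = y0;
    for k = 1:N
        y = y + h*lambda*y;      % one Euler step
    end
    fprintf('h*|lambda| = %.1f : y_%d = %g\n', h*abs(lambda), N, y);
end

Running this prints an iterate near zero for the stable step size and an enormous one for the unstable step size, exactly as the bound predicts.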
1.2 Problem 1.2

Let us consider a system of the form
$$y'(t) = Ay, \quad y(0) = y_0, \quad \text{where } A = \begin{bmatrix} -8003 & 1999 \\ 23988 & -6004 \end{bmatrix}, \quad y(0) = \begin{bmatrix} 1 \\ 4 \end{bmatrix}.$$
The solution of this equation is of the form
$$y(t) = c_1 e^{\lambda_1 t}v_1 + c_2 e^{\lambda_2 t}v_2.$$
So the first step in solving this equation is finding the eigenvalues and associated eigenvectors of the matrix A. For the matrix A,
$$\lambda_1 = -14000, \; v_1 = \begin{bmatrix} 1 \\ -3 \end{bmatrix}; \qquad \lambda_2 = -7, \; v_2 = \begin{bmatrix} 1 \\ 4 \end{bmatrix}.$$
So the general solution is
$$y(t) = c_1 e^{-14000t}\begin{bmatrix} 1 \\ -3 \end{bmatrix} + c_2 e^{-7t}\begin{bmatrix} 1 \\ 4 \end{bmatrix}.$$
And since $y(0) = \begin{bmatrix} 1 \\ 4 \end{bmatrix}$,
$$y(0) = c_1 e^{0}\begin{bmatrix} 1 \\ -3 \end{bmatrix} + c_2 e^{0}\begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} c_1 + c_2 \\ -3c_1 + 4c_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 4 \end{bmatrix}.$$
Breaking each row up into an individual equation, the result is
$$c_1 + c_2 = 1 \;\Rightarrow\; c_1 = 1 - c_2, \qquad -3c_1 + 4c_2 = 4 \;\Rightarrow\; -3(1 - c_2) + 4c_2 = 4.$$
The solution of these two equations is $c_1 = 0$, $c_2 = 1$, which yields the solution
$$y(t) = (0)e^{-14000t}\begin{bmatrix} 1 \\ -3 \end{bmatrix} + (1)e^{-7t}\begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} e^{-7t} \\ 4e^{-7t} \end{bmatrix}.$$
Because both components of the vector solution have negative exponents, both decay to zero as $t \to \infty$. Since we want to model this equation, we will use Euler's method on the interval $t \in [0, 0.02]$; see Appendix A for the Matlab code. The true solution, $\begin{bmatrix} e^{-7t} \\ 4e^{-7t}\end{bmatrix}$, is as follows for $t \in [0, 0.02]$:

[Figure: the true solution components $e^{-7t}$ and $4e^{-7t}$ on $t \in [0, 0.02]$.]
Using the Euler's method approximation with a step size of $h = 0.001$, the result is as follows:

[Figure: Euler's method approximation with h = 0.001, which diverges.]

This is not a particularly good approximation, since here $h|\lambda| > 2$ (one eigenvalue is $\lambda = -14000$, so $h|\lambda| = 14$). Toward the end of the interval, the computed values of y head toward $\begin{bmatrix} -300 \\ 1000 \end{bmatrix}$ rather than decaying to $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$. If we use a step size small enough that $h|\lambda| < 2$, for example $h = 1.3571\times10^{-4}$, we should get a better approximation. When we use $1.3571\times10^{-4}$ for h, the result is as follows:
[Figure: Euler's method approximation with h = 1.3571 × 10⁻⁴, which decays correctly.]

This is a better approximation, since this graph decays to $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$. When $h = 1.3571\times10^{-4}$, $h|\lambda_1| \approx 1.9 < 2$ and $h|\lambda_2| \approx 9.5\times10^{-4} < 2$, so this approximation converges and is not defective. Indeed, this approximation seems to model the original function very accurately.

1.3 Problem 1.3

As seen in the previous problem, there are difficulties when dealing with a stiff equation (the eigenvalue of $-14000$ made the prospect of getting an accurate approximation somewhat difficult). An equation is said to be stiff if all the eigenvalues of the matrix A have negative real parts and the ratio of the largest absolute value of the real parts to the smallest one is large. In the case of the last problem, the ratio was 2000, which is certainly large. For such problems, Euler's method, like other direct methods for solving differential equations, can run into trouble. These issues can be addressed by implicit methods for solving differential equations.

One such method is the Trapezoidal Rule. The Trapezoidal Method is
$$y_{k+1} = y_k + \frac{h}{2}\bigl(f(y_k, t_k) + f(y_{k+1}, t_{k+1})\bigr), \quad \text{for } k = 0, 1, 2, \ldots$$
This method relies on the k-th step as well as the (k+1)-th step (their average), whereas Euler's method relies only on the k-th step; as such, this method gives a better approximation of the function. However, in this form $y_{k+1}$ appears on both sides of the equation, so it is not an easy task to solve for it; indeed, because of this, $y_{k+1}$ is said to be implicitly defined.

In the case where $f(y, t) = Ay + bu(t)$, the Trapezoidal Method yields
$$y_{k+1} = y_k + \frac{h}{2}(Ay_k + bu_k) + \frac{h}{2}(Ay_{k+1} + bu_{k+1}),$$
which can be rearranged so that all instances of $y_{k+1}$ are on the left side:
$$\left(I - \frac{h}{2}A\right)y_{k+1} = y_k + \frac{h}{2}\left(Ay_k + bu_k + bu_{k+1}\right).$$
With this equation in place, one can develop an algorithm to solve the equation for the Trapezoidal Method. The algorithm is as follows:

Algorithm 1.1 Trapezoidal Method to solve $y' = Ay + bu(t)$, $y(0) = y_0$
1. Choose h.
2. Set $M = I - \frac{h}{2}A$.
3. For k = 0, 1, 2, 3, ...
   (a) Set $z_k = y_k + \frac{h}{2}(Ay_k + bu_k + bu_{k+1})$.
   (b) Solve the linear system $My_{k+1} = z_k$ for $y_{k+1}$.

One disadvantage of this algorithm, however, is the final step: performing Gaussian elimination to solve the linear system over and over (for every time step) is expensive. One way to make the system easier is to find the LU decomposition of M once and then solve two simpler systems of equations at each step.

For the linear system $My_{k+1} = z_k$, let $M = LU$ be the LU decomposition of M. The system can then be solved in two steps:
$$(LU)y_{k+1} = z_k \;\Longleftrightarrow\; L(Uy_{k+1}) = z_k.$$
Setting $v_{k+1} = Uy_{k+1}$, we first solve $Lv_{k+1} = z_k$ for $v_{k+1}$, and then solve $Uy_{k+1} = v_{k+1}$ for $y_{k+1}$. Thus, there are only two systems to solve, $Lv_{k+1} = z_k$ and $Uy_{k+1} = v_{k+1}$, and both are very easy because L and U are lower- and upper-triangular matrices: each solve is a single forward or back substitution.

This leads to the following, more efficient algorithm.

Algorithm 1.2 Trapezoidal Method with LU decomposition
1. Choose h.
2. Set $M = I - \frac{h}{2}A$.
3. Compute the LU decomposition of M.
4. For k = 0, 1, 2, 3, ...
   (a) Set $z_k = y_k + \frac{h}{2}(Ay_k + bu_k + bu_{k+1})$.
   (b) Solve the linear system $Lv_{k+1} = z_k$ for $v_{k+1}$.
   (c) Solve the linear system $Uy_{k+1} = v_{k+1}$ for $y_{k+1}$.
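To make the gain from step 3 concrete, here is a minimal Matlab sketch of Algorithm 1.2 applied to the stiff system of Problem 1.2 (an illustration written for this discussion, with an arbitrary step count and zero input; the report's full implementation is in Appendix B):

% Sketch of Algorithm 1.2: factor M once, reuse L and U at every step.
A = [-8003 1999; 23988 -6004];
b = [0; 0]; u = @(t) 0;              % homogeneous case y' = Ay
h = 0.001; N = 20; y = [1; 4]; t = 0;
M = eye(size(A,1)) - (h/2)*A;
[L, U] = lu(M);                      % one factorization, outside the loop
for k = 1:N
    z = y + (h/2)*(A*y + b*u(t) + b*u(t+h));
    v = L \ z;                       % forward substitution: L*v = z
    y = U \ v;                       % back substitution:    U*y = v
    t = t + h;
end
disp(y')                             % approximates y(N*h), decaying toward zero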
1.4 Problem 1.4

We now use this algorithm on the same system that was earlier approximated by Euler's method,
$$y'(t) = Ay, \quad y(0) = y_0, \quad \text{where } A = \begin{bmatrix} -8003 & 1999 \\ 23988 & -6004 \end{bmatrix}, \quad y(0) = \begin{bmatrix} 1 \\ 4 \end{bmatrix}.$$
Using a Matlab code (see Appendix B), one can approximate this system with the Trapezoidal Method. Below is the approximation for $h = 0.001$ and $t \in [0, 0.02]$.

[Figure: Trapezoidal Method approximation with h = 0.001 on t ∈ [0, 0.02].]

This approximation is very close to the original function; indeed, it converges and gives a better approximation for this step size than Euler's method, which diverged for $h = 0.001$. To test the extent of this implicit method's accuracy, we shall investigate its approximation for $h = 0.1$, $t \in [0, 2]$.

[Figure: Trapezoidal Method approximation with h = 0.1 on t ∈ [0, 2].]

So the Trapezoidal Method still converges with a larger step size, and it is accurate on a much larger interval as well. It is easy to see that this implicit solver is superior to Euler's direct method of approximating functions.

1.5 Problem 1.5

This comparison between the Trapezoidal Method and Euler's method shows that the Trapezoidal Method converges even for larger step sizes. So the question becomes: for what step sizes will the solution $y_k$ converge? To answer that question, we must first apply the Trapezoidal Method symbolically to determine the form of the iterate we will analyze. Let us consider the scalar equation
$$y' = \lambda y, \quad y(0) = y_0, \quad \text{where } \lambda < 0.$$
The Trapezoidal Method, once more, is
$$y_{k+1} = y_k + \frac{h}{2}\bigl(f(y_k, t_k) + f(y_{k+1}, t_{k+1})\bigr), \quad \text{for } k = 0, 1, 2, \ldots$$
Applying the Trapezoidal Method to the scalar equation $y' = \lambda y$ yields:
$$y_1 = y_0 + \frac{h}{2}(\lambda y_0 + \lambda y_1) \;\Rightarrow\; y_1 - \frac{h}{2}\lambda y_1 = y_0 + \frac{h}{2}\lambda y_0 \;\Rightarrow\; \left(1 - \frac{h}{2}\lambda\right)y_1 = \left(1 + \frac{h}{2}\lambda\right)y_0,$$
so
$$y_1 = \frac{1 + \frac{h}{2}\lambda}{1 - \frac{h}{2}\lambda}\,y_0 = \frac{2 + h\lambda}{2 - h\lambda}\,y_0.$$
Likewise,
$$y_2 = y_1 + \frac{h}{2}(\lambda y_1 + \lambda y_2) \;\Rightarrow\; \left(1 - \frac{h}{2}\lambda\right)y_2 = \left(1 + \frac{h}{2}\lambda\right)\frac{2 + h\lambda}{2 - h\lambda}\,y_0 \;\Rightarrow\; y_2 = \left(\frac{2 + h\lambda}{2 - h\lambda}\right)^2 y_0.$$
From this pattern, one can infer that
$$y_k = \left(\frac{2 + h\lambda}{2 - h\lambda}\right)^k y_0, \quad \text{for } k = 1, 2, \ldots$$
To prove that this is the case, let us examine the base case $y_1$. Cross-multiplying $y_1 = \frac{2 + h\lambda}{2 - h\lambda}\,y_0$ gives
$$(2 - h\lambda)y_1 = (2 + h\lambda)y_0 \;\Rightarrow\; 2y_1 = 2y_0 + h\lambda y_0 + h\lambda y_1 \;\Rightarrow\; y_1 = y_0 + \frac{h}{2}(\lambda y_0 + \lambda y_1),$$
which is exactly the Trapezoidal step.
Indeed, the equation holds for our base case. Now let us assume that it holds for $y_n$:
$$y_n = \left(\frac{2 + h\lambda}{2 - h\lambda}\right)^n y_0.$$
We will now show that the equation holds for the $y_{n+1}$ case as well:
$$y_{n+1} = y_n + \frac{h}{2}(\lambda y_n + \lambda y_{n+1}) \;\Rightarrow\; \left(1 - \frac{h\lambda}{2}\right)y_{n+1} = \left(1 + \frac{h\lambda}{2}\right)\left(\frac{2 + h\lambda}{2 - h\lambda}\right)^n y_0,$$
so
$$y_{n+1} = \frac{1 + \frac{h\lambda}{2}}{1 - \frac{h\lambda}{2}}\left(\frac{2 + h\lambda}{2 - h\lambda}\right)^n y_0 = \left(\frac{2 + h\lambda}{2 - h\lambda}\right)^{n+1} y_0.$$
Since the formula holds for the (n+1)-th case whenever it is assumed true for the n-th, and it has been shown to hold for the first case, $y_k = \left(\frac{2 + h\lambda}{2 - h\lambda}\right)^k y_0$ is true for all $k \geq 1$.

Since $\lambda < 0$ and $h > 0$, one can see that $\left|\frac{2 + h\lambda}{2 - h\lambda}\right| < 1$: writing $h\lambda = -h|\lambda|$, the fraction becomes $\frac{2 - h|\lambda|}{2 + h|\lambda|}$, whose numerator decreases from 2 while the denominator increases from 2, so the fraction is less than one in absolute value. Since for any fraction $|f| < 1$ we have $\lim_{n\to\infty} f^n = 0$,
$$\lim_{k\to\infty} y_k = \lim_{k\to\infty}\left(\frac{2 + h\lambda}{2 - h\lambda}\right)^k y_0 = 0$$
for any $h > 0$, $\lambda < 0$. This is a much more favorable constraint than that for Euler's Method ($h|\lambda| < 2$), and it indeed demonstrates the Trapezoidal Method's superiority.
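A quick numerical check of this unconditional stability (again a sketch written for this discussion, with arbitrary values): even when $h|\lambda|$ is far beyond Euler's limit of 2, the amplification factor stays below one in absolute value and the iterate decays.

% Sketch: trapezoidal amplification factor g = (2 + h*lambda)/(2 - h*lambda)
% satisfies |g| < 1 for every h > 0 when lambda < 0.
lambda = -14000; y0 = 1; N = 100;
for h = [0.001, 0.1, 1]              % h*|lambda| = 14, 1400, 14000
    g = (2 + h*lambda)/(2 - h*lambda);
    fprintf('h = %-5g  |g| = %.4f  y_%d = %g\n', h, abs(g), N, g^N*y0);
end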
2 Part II: The Singular Value Decomposition

The Singular Value Decomposition of a matrix is a method used to break down any original matrix, A, into three component matrices, U, S, and V. Using the SVD of a matrix, it is possible to create a lower-rank approximation of the original matrix (an approximation with less complexity), and we want the error between A and its approximation to be as small as possible. Because this is the case, we will analyze the singular value decomposition and its properties, as well as the properties of lower-rank approximations built from it.

2.1 Problem 2.1

Given a matrix $A \in \mathbb{R}^{n \times m}$, let $A = USV^T$ be the SVD of A, where
$$S = \mathrm{diag}(\sigma_1, \ldots, \sigma_m), \quad U = [u_1, \ldots, u_m], \quad V = [v_1, \ldots, v_m], \quad \text{and } U^TU = V^TV = I_m.$$
Expanding the product column by column,
$$A = [u_1\; u_2\; \cdots\; u_m]\,\mathrm{diag}(\sigma_1, \ldots, \sigma_m)\begin{bmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_m^T \end{bmatrix} = [u_1\; u_2\; \cdots\; u_m]\begin{bmatrix} \sigma_1 v_1^T \\ \sigma_2 v_2^T \\ \vdots \\ \sigma_m v_m^T \end{bmatrix} = \sum_{i=1}^{m}\sigma_i u_i v_i^T.$$
Since we are looking for a rank-k approximation of A, however, we will say that
$$A_k = U_k S_k V_k^T, \quad \text{or, equivalently,} \quad A_k = \sum_{i=1}^{k}\sigma_i u_i v_i^T.$$
Thus, to estimate the error, we will evaluate the difference between the original matrix and its approximation:
$$\|A - A_k\|_2 = \left\|\sum_{i=1}^{m}\sigma_i u_i v_i^T - \sum_{i=1}^{k}\sigma_i u_i v_i^T\right\|_2 = \left\|\sum_{i=k+1}^{m}\sigma_i u_i v_i^T\right\|_2.$$
Since the 2-norm of a matrix is its largest singular value, and the singular values of a matrix are the square roots of the eigenvalues of its transpose times itself, we examine $E^TE$ for the tail matrix $E = \sum_{i=k+1}^{m}\sigma_i u_i v_i^T$:
$$E^TE = \left(\sum_{i=k+1}^{m}\sigma_i v_i u_i^T\right)\left(\sum_{i=k+1}^{m}\sigma_i u_i v_i^T\right) = \sigma_{k+1}^2 v_{k+1}u_{k+1}^Tu_{k+1}v_{k+1}^T + \cdots + \sigma_m^2 v_m u_m^T u_m v_m^T,$$
where the cross terms vanish because $U^TU = I$ gives $u_j^Tu_i = 0$ for $j \neq i$. Since also $u_i^Tu_i = 1$,
$$E^TE = \sigma_{k+1}^2 v_{k+1}v_{k+1}^T + \cdots + \sigma_m^2 v_m v_m^T,$$
whose nonzero eigenvalues are $\sigma_{k+1}^2, \sigma_{k+2}^2, \ldots, \sigma_m^2$; hence the singular values of E are $\sigma_{k+1}, \sigma_{k+2}, \ldots, \sigma_m$. Since $\sigma_{k+1} \geq \sigma_{k+2} \geq \cdots \geq \sigma_m$, the largest of these is $\sigma_{k+1}$. Hence
$$\|A - A_k\|_2 = \sigma_{k+1},$$
the first (largest) singular value not included in the rank-k approximation of A.
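This result is easy to confirm numerically. The following sketch (an illustration for this discussion, using a random test matrix) checks that the rank-k truncation error equals $\sigma_{k+1}$:

% Sketch: for a random A, ||A - Ak||_2 matches sigma_{k+1}.
rng(0);                                   % reproducible example
A = rand(8, 6);
[U, S, V] = svd(A);
k  = 3;
Ak = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';   % rank-k truncation of the SVD
fprintf('||A - Ak||_2 = %.6f, sigma_{k+1} = %.6f\n', norm(A - Ak, 2), S(k+1,k+1));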
2.2 Problem 2.2

Now that we know how a singular value decomposition works and what the error between the rank-k approximation and the original matrix is, we want to find out how to build the SVD of a matrix: how do we find the singular values, as well as the right and left singular vectors? To understand this, we will examine $A^TA$ and $AA^T$. Given $A \in \mathbb{R}^{n \times n}$, let $A = USV^T$ be the SVD of A, where
$$S = \mathrm{diag}(\sigma_1, \ldots, \sigma_n), \quad U = [u_1, \ldots, u_n], \quad V = [v_1, \ldots, v_n], \quad \text{and } U^TU = V^TV = I_n.$$
Then
$$AA^T = USV^T\left(USV^T\right)^T = USV^TVS^TU^T = USS^TU^T = US^2U^T = US^2U^{-1},$$
where $SS^T = S^2$ because S is a diagonal matrix, and $U^T = U^{-1}$ because $U^TU = I$. This matrix is of the form $P\Lambda P^{-1} = Q$, a matrix diagonalization ($\Lambda$ is a diagonal matrix), in which the column vectors of P are the eigenvectors of the matrix Q and the values along the diagonal of $\Lambda$ are the corresponding eigenvalues. It then follows that the column vectors of U (the $u_i$) are the eigenvectors of $AA^T$, and the values $\sigma_i^2$ along the diagonal of $S^2$ are the corresponding eigenvalues.

Similarly,
$$A^TA = \left(USV^T\right)^TUSV^T = VS^TU^TUSV^T = VS^TSV^T = VS^2V^T = VS^2V^{-1},$$
so by the same diagonalization argument the column vectors of V (the $v_i$) are the eigenvectors of $A^TA$, with the same eigenvalues $\sigma_i^2$.

If we want to find the left singular vectors without computing $AA^T$ and all of its eigenvectors, the question arises whether there is a shortcut. To see that there is,
we will consider the product $Av_i$, where $v_i$ is a right singular vector. Since $V^TV = I$, the vector $V^Tv_i$ has k-th entry 0 when $k \neq i$ and 1 when $k = i$; that is, $V^Tv_i = e_i$, where $e_i$ denotes the i-th column of the identity matrix. Therefore
$$Av_i = USV^Tv_i = USe_i = U(\sigma_i e_i) = \sigma_i u_i.$$
Thus, it is possible to find the left singular vectors of the matrix if one has the right singular vectors and the singular values.
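The identity $Av_i = \sigma_i u_i$ can likewise be checked numerically (an illustrative sketch with a random matrix):

% Sketch: A times the i-th right singular vector equals sigma_i times
% the i-th left singular vector, up to rounding error.
rng(1);
A = rand(5);
[U, S, V] = svd(A);
i = 2;                                    % any column index
fprintf('||A*v_i - sigma_i*u_i|| = %.2e\n', norm(A*V(:,i) - S(i,i)*U(:,i)));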
2.3 Problem 2.3

Using what we now know about the components of the SVD of a matrix (the left and right singular vectors as well as the singular values), we will use this information to compute the SVD of the matrix
$$A = \begin{bmatrix} 2 & 3 \\ 0 & 2 \end{bmatrix}.$$
First, we will find the right singular vectors (and the squares of the singular values) by looking at $A^TA$, since we know it is of the form $VS^2V^{-1}$:
$$A^TA = \begin{bmatrix} 2 & 0 \\ 3 & 2 \end{bmatrix}\begin{bmatrix} 2 & 3 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 4 & 6 \\ 6 & 13 \end{bmatrix}.$$
To find the eigenvalues of this matrix, we will examine $\det\left(A^TA - \lambda I\right)$:
$$\det\left(A^TA - \lambda I\right) = \begin{vmatrix} 4-\lambda & 6 \\ 6 & 13-\lambda \end{vmatrix} = (4-\lambda)(13-\lambda) - 6^2 = \lambda^2 - 17\lambda + 16 = (\lambda - 16)(\lambda - 1).$$
Thus, the eigenvalues of $A^TA$ are $\lambda = 16$ and $\lambda = 1$. Using this information to compute the eigenvectors of the matrix, we must solve two linear systems with the matrix $A^TA - \lambda I$ and the two different values of $\lambda$:
$$\begin{bmatrix} -12 & 6 \\ 6 & -3 \end{bmatrix}v = 0, \qquad \begin{bmatrix} 3 & 6 \\ 6 & 12 \end{bmatrix}u = 0.$$
First, we will solve the first system using an augmented matrix (the row operation $R_2 \leftarrow R_2 + \tfrac{1}{2}R_1$ zeroes out the second row):
$$-12v_1 + 6v_2 = 0 \;\Rightarrow\; 6v_2 = 12v_1 \;\Rightarrow\; v_2 = 2v_1 \;\Rightarrow\; v = \begin{bmatrix} v_1 \\ 2v_1 \end{bmatrix} = v_1\begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$
Thus, $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ is the eigenvector corresponding to $\lambda = 16$. Now, we will find the eigenvector corresponding to $\lambda = 1$ (here $R_2 \leftarrow R_2 - 2R_1$ zeroes out the second row):
$$3u_1 + 6u_2 = 0 \;\Rightarrow\; u_1 = -2u_2 \;\Rightarrow\; u = \begin{bmatrix} -2u_2 \\ u_2 \end{bmatrix} = u_2\begin{bmatrix} -2 \\ 1 \end{bmatrix}.$$
So a matrix diagonalization (of the form $V\Lambda V^{-1}$) of $A^TA$ is
$$A^TA = \begin{bmatrix} 1 & -2 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} 16 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & -2 \\ 2 & 1 \end{bmatrix}^{-1}.$$
Now it is known to us that the singular values of A are the square roots of the eigenvalues along the diagonal: the first singular value of A is 4, and the second singular value is 1. This diagonalization is not completely sufficient, however. One property that is required of the matrix V is that $V^{-1} = V^T$. Since $V \in \mathbb{R}^{2\times2}$,
$$\begin{bmatrix} v_{11} & v_{12} \\ v_{21} & v_{22} \end{bmatrix}\begin{bmatrix} v_{11} & v_{21} \\ v_{12} & v_{22} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \text{i.e.,} \quad v_1\cdot v_1 = 1 = v_2\cdot v_2, \quad v_1\cdot v_2 = 0 = v_2\cdot v_1.$$
For the vectors $v_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $v_2 = \begin{bmatrix} -2 \\ 1 \end{bmatrix}$, it is the case that $v_1\cdot v_2 = 0 = v_2\cdot v_1$, but $v_1\cdot v_1 = 5 \neq 1$ and $v_2\cdot v_2 = 5 \neq 1$, so the eigenvectors must be scaled so that this is the case. To make this so, the vectors must be normalized:
$$\tilde v_1 = \frac{v_1}{\|v_1\|} = \frac{v_1}{\sqrt5} = \begin{bmatrix} 1/\sqrt5 \\ 2/\sqrt5 \end{bmatrix}, \qquad \tilde v_2 = \frac{v_2}{\|v_2\|} = \frac{v_2}{\sqrt5} = \begin{bmatrix} -2/\sqrt5 \\ 1/\sqrt5 \end{bmatrix}.$$
Thus, the matrix diagonalization with the new V matrix is
$$A^TA = \begin{bmatrix} 1/\sqrt5 & -2/\sqrt5 \\ 2/\sqrt5 & 1/\sqrt5 \end{bmatrix}\begin{bmatrix} 16 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1/\sqrt5 & -2/\sqrt5 \\ 2/\sqrt5 & 1/\sqrt5 \end{bmatrix}^T.$$
Since it is known to us that $Av_i = \sigma_i u_i$, we can use these new right singular vectors to determine the left singular vectors:
$$Av_1 = \begin{bmatrix} 2 & 3 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} 1/\sqrt5 \\ 2/\sqrt5 \end{bmatrix} = \begin{bmatrix} 8/\sqrt5 \\ 4/\sqrt5 \end{bmatrix} = 4\begin{bmatrix} 2/\sqrt5 \\ 1/\sqrt5 \end{bmatrix} = \sigma_1 u_1.$$
Thus, the left singular vector associated with the first singular value $\sigma_1 = 4$ is $u_1 = \begin{bmatrix} 2/\sqrt5 \\ 1/\sqrt5 \end{bmatrix}$. Similarly,
$$Av_2 = \begin{bmatrix} 2 & 3 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} -2/\sqrt5 \\ 1/\sqrt5 \end{bmatrix} = \begin{bmatrix} -1/\sqrt5 \\ 2/\sqrt5 \end{bmatrix} = 1\begin{bmatrix} -1/\sqrt5 \\ 2/\sqrt5 \end{bmatrix} = \sigma_2 u_2,$$
so the left singular vector associated with the second singular value $\sigma_2 = 1$ is $u_2 = \begin{bmatrix} -1/\sqrt5 \\ 2/\sqrt5 \end{bmatrix}$. Hence, the singular value decomposition of A is
$$A = \begin{bmatrix} 2/\sqrt5 & -1/\sqrt5 \\ 1/\sqrt5 & 2/\sqrt5 \end{bmatrix}\begin{bmatrix} 4 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1/\sqrt5 & 2/\sqrt5 \\ -2/\sqrt5 & 1/\sqrt5 \end{bmatrix},$$
which is equivalent to $A = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T$. Since we are looking for a rank-1 approximation of A, a rank-2 matrix, we take only the first component of the SVD of A as the rank-1 approximation $A_1$:
$$A_k = A_1 = \sigma_1 u_1 v_1^T = 4\begin{bmatrix} 2/\sqrt5 \\ 1/\sqrt5 \end{bmatrix}\begin{bmatrix} 1/\sqrt5 & 2/\sqrt5 \end{bmatrix} = 4\begin{bmatrix} 2/5 & 4/5 \\ 1/5 & 2/5 \end{bmatrix} = \begin{bmatrix} 8/5 & 16/5 \\ 4/5 & 8/5 \end{bmatrix}.$$
The error of this approximation, $\|A - A_1\|_2$, is equal to 1, the singular value excluded from the approximation of A. No rank-1 approximation of A can get closer than this; the best one can do is an error of 1.
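Matlab's built-in svd confirms the hand computation (an illustrative check; note that svd may flip the signs of $u_i$ and $v_i$ together, which leaves $A_1$ unchanged):

% Sketch: verify the hand-computed SVD and rank-1 approximation of A.
A = [2 3; 0 2];
[U, S, V] = svd(A);
disp(diag(S)')                       % expected singular values: 4 1
A1 = S(1,1) * U(:,1) * V(:,1)';      % rank-1 truncation
disp(A1)                             % expected: [8/5 16/5; 4/5 8/5]
disp(norm(A - A1, 2))                % expected error: 1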
2.4 Problem 2.4

Now that we have a firmer grasp on the singular value decomposition of matrices and on computing optimal rank-k approximations of them, we will apply our knowledge to an important use of matrices: image approximation. Images can take up a large amount of data, and it is possible to have a nearly identical image that takes up much less. If one uses a matrix to represent an image, one can compute the SVD of that matrix and make lower-rank approximations, which have less complexity and therefore use less data. We want optimal rank-k matrices with small error, however. For example, consider the Lena image. The matrix that holds the data for this image is a rank-508 matrix. One can build a Matlab code (see Appendix C) to find the value of k for which the optimal rank-k approximation of the Lena matrix has a relative error $\left(\frac{\|A - A_k\|_2}{\|A\|_2}\right)$ less than a specified value. First, we compute the SVD of the Lena matrix A in Matlab and plot the normalized singular values.

[Figure: the normalized singular values $\sigma_i/\sigma_1$ of the Lena matrix.]

From this plot, we can see that the first hundred or so singular values are within 2 orders of magnitude of each other; similarly, almost every singular value is within 4 orders of magnitude of the first singular value. With this in mind we can estimate the optimal rank-k approximations for which the relative error is less than $10^{-1}$, $10^{-2}$, and $5\times10^{-3}$. Running the Matlab code for different values of k and computing the error, however, gives a more accurate result.
Displaying the Lena image, matrix rank, and relative error for the three error cases yields:

[Figure: the original image and three rank-k approximations.]
Original image, rank k = 508
Rank k = 4, relative error = 0.0909
Rank k = 53, relative error = 0.0100
Rank k = 99, relative error = 0.0050

Hence, it is possible to have an approximation of the original image that is nearly flawless but has a much lower rank; hence, it has lower complexity (takes up less space).
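Since the relative error of the optimal rank-k approximation is $\sigma_{k+1}/\sigma_1$, the smallest admissible k for a given tolerance can be read directly off the singular values. A sketch (with a random placeholder matrix standing in for the Lena data, which Appendix C loads from file):

% Sketch: smallest k whose optimal rank-k approximation meets a
% relative-error tolerance, using ||A - Ak||_2 / ||A||_2 = sigma_{k+1}/sigma_1.
A = rand(256);                       % placeholder for an image matrix
s = svd(A);                          % singular values in descending order
tol = 1e-2;
k = find(s/s(1) < tol, 1) - 1;       % first index with sigma_{k+1}/sigma_1 < tol
fprintf('smallest k with relative error < %g: k = %d\n', tol, k);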
2.5 Problem 2.5

We will now utilize our knowledge of how to apply SVDs to images to work with a large group of images: faces from the Yale database. Our goal is to make a compilation matrix that holds the information for every face in the database. With this data, we will be able to reconstruct and recognize similar images (other faces). Since the SVD reveals the optimal information for approximation, it is the main tool in this case as well. We will be utilizing a Matlab code (see Appendix D) for the remainder of this discussion. Given a file that contains all the faces in the database, we will create a single database matrix for this file in the following way:

1. Vectorize every matrix in the database.
2. Find the average face:
   (a) Add all the matrices in the database and divide by the total number of matrices.
   (b) Vectorize the resultant "average" matrix.
3. Subtract the average face from every (vectorized) image.
4. Concatenate the subtracted quantities next to each other in a single matrix.
5. Scale the resultant matrix.

After this process is done, we will be left with a single matrix, which is our new database. We can then plot the average face:

[Figure: the average face.]
Now we will examine the SVD of the database matrix. Let the database matrix, D, be equal to $USV^T$. The leading vectors of the U matrix, the left singular vectors, contain the dominant information in the database. The leading four left singular vectors plot the following images, or "eigenfaces":

[Figure: the first four eigenfaces.]

These eigenfaces carry the most dominant information about the faces in the database; indeed, they resemble faces, and each has facial features that one would see on many faces in the database. These facial features "build" the faces in the database, in a way.

So now we have a database of faces that we can access, and we have the relevant (dominant) information used to construct these images. Using this information, we can take advantage of these matrices to reconstruct other images (faces) that are not in our database. Applying these methods yields the following results for reconstructing images not in our database; the original images are on the left, the reconstructed images on the right.

[Figure: original images (left) and their reconstructions (right).]

While the reconstructions are not perfect representations of the original images, they are fair representations. The images were constructed from the database of images, so they are not meant to be free from error. However, the resemblance is apparent.
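The heart of the reconstruction step in Appendix D is an orthogonal projection onto the span of the left singular vectors. Here is a minimal sketch of that single step, with random placeholders standing in for the face database and the new face:

% Sketch (placeholder data): reconstruct a vectorized face f by projecting
% its centered version onto the column space of U, then adding back the mean.
rng(2);
D = rand(100, 12);                   % placeholder database: 12 faces, 100 pixels
mave = mean(D, 2);                   % average face
Dc = D - mave*ones(1, size(D,2));    % centered database
[U, S, V] = svd(Dc, 0);              % thin SVD
f = rand(100, 1);                    % placeholder new face (vectorized)
p = U*(U'*(f - mave)) + mave;        % best approximation from the database span
fprintf('relative reconstruction error: %.3f\n', norm(f - p)/norm(f));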
3 Part III: Model Reduction of Large-Scale ODEs

We will now revisit approximating ordinary differential equations using the Trapezoidal Method. However, instead of the simple case we investigated in Part I ($y' = Ay$), we will be examining the full equation $x' = Ax + bu(t)$, where the sizes of A and b are much greater than before. Using implicit solvers on such systems has the potential to be very costly, so we want to reduce the models where possible, while keeping our reduced models accurate. Thus, we want a reduced dynamical system of the form $x_r' = A_r x_r + b_r u(t)$, where the system has been reduced from order n to order r.

We will use the Trapezoidal Method to obtain the snapshot matrix $X = [x_0, x_1, \ldots, x_{N-1}] \in \mathbb{R}^{n\times N}$. Using this matrix X, we will compute the SVD, $X = USV^T$, and create our reduced model using $U_r$, the first r columns of U:
$$x_r' = A_r x_r + b_r u(t), \quad \text{where } A_r = U_r^T A U_r \text{ and } b_r = U_r^T b.$$
Our reduced solution $x_r$, however, has length r, where our original solution has length n. To rectify this, we make our approximation to the original solution as follows: $x(t) \approx \hat x(t) = U_r x_r(t)$.
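In code, this reduction amounts to a few lines. The following sketch (with small placeholder data; the report's actual script is in Appendix E) shows the projection that produces $A_r$ and $b_r$:

% Sketch of the POD reduction step (placeholder system and snapshots).
rng(3);
n = 50;
A = -eye(n) + 0.1*randn(n);          % placeholder system matrix
b = randn(n, 1);                     % placeholder input vector
X = randn(n, 200);                   % placeholder snapshot matrix from a solver
[U, S, V] = svd(X, 0);
r  = 5;                              % reduced order
Ur = U(:, 1:r);                      % leading r left singular vectors
Ar = Ur' * A * Ur;                   % reduced r-by-r dynamics
br = Ur' * b;                        % reduced input vector
% A reduced trajectory xr(t) is lifted back to full size via xhat = Ur*xr.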
3.1 Problem 3.1

We will now apply this methodology for solving large-scale ODEs (called POD, Proper Orthogonal Decomposition) to a specific case in which the order is n = 1412, i.e., $A \in \mathbb{R}^{1412\times1412}$, $b \in \mathbb{R}^{1412}$. Using POD, we will approximate the behavior of the ISS 12A module using the A and b quantities describing its differential equations. We will choose a constant input ($u(t) = 1$) and a zero initial state ($x(0) = 0$), and we will use the Trapezoidal Method to approximate the function on the interval $t \in [0, 50]$ with time step $h = 0.1$.

In order to begin, however, we must decide how accurate we want our approximation to be and, ultimately, what r-value to choose. To do this, we will again look at the relative error we examined in Part II, using our result about matrix norms from Problem 2.1: $\frac{\|X - X_k\|_2}{\|X\|_2} = \frac{\sigma_{k+1}}{\sigma_1}$. Thus, if we generate an approximation $X_r$ to X, it will have a relative error of $\frac{\sigma_{r+1}}{\sigma_1}$. We will choose three values of relative error to target, $10^{-1}$, $10^{-2}$, and $5\times10^{-3}$; the r-values for these respective errors are r = 12, r = 36, and r = 43.

To approximate these functions, we will use a Matlab code (see Appendix E) that uses the Trapezoidal code as well (Appendix B). Graphing these approximations against the original function yields the following:

[Figures: reduced-model approximations vs. the original response for r = 12, r = 36, and r = 43.]

So, as we expected, the more leading columns of U that we keep (the higher r is), the more accurate our approximation of the original function.
Indeed, when r = 43, the approximation is nearly indistinguishable from the original function. Thus, this reduced model is a good approximation of the original.

Now, we built our reduced model with the input u(t) = 1. Let us use this same model, keeping r at 43, to approximate the way the system behaves when $u(t) = \sin(4t)$:

[Figure: reduced model (r = 43) vs. the full system for u(t) = sin(4t).]

Indeed, even though we are approximating the system for an input different from the one used to construct the reduced model, the approximation is still incredibly accurate. This is the power of POD.

3.2 Problem 3.2

We will now apply the same POD process to a slightly different system: a complex mass-spring-damper system of order n = 20000, i.e., $A \in \mathbb{R}^{20000\times20000}$, $b \in \mathbb{R}^{20000}$. For this system, we will choose $u(t) = \sin(5t)$, $h = 0.01$, and we will select an r such that the relative error is less than $5\times10^{-3}$. Plotting the second component of the time evolution of the original function in Matlab (see Appendix F) and of the approximation yields:
[Figure: full system vs. reduced model, second component, u(t) = sin(5t).]

Again, this result is incredibly accurate, and our reduced system models the full system very well. As in the previous problem, we will use the reduced model built from the input $u(t) = \sin(5t)$ to approximate the system for a different input, in this case $u(t) = \mathrm{square}(10t)$.

[Figure: full system vs. reduced model for u(t) = square(10t).]

Once more, the approximation and the original function are indistinguishable, and the POD approximation provides an accurate representation of the behavior of the system.
Appendices

Appendix A

What follows is the Matlab code for approximating the scalar function using Euler's Method.

TermProjectEuler1.m :

f = @(t,y) [-8003 1999; 23988 -6004]*[y(1);y(2)]; % y' = Ay.

h = 1.3571*10^-4;
t0 = 0;
tf = .02;
y0 = [1;4];
[t,y] = euler(t0,tf,h,f,y0); % This uses a separate file; see below.
subplot(2,1,1)
plot(t,y(1,:));
grid on
subplot(2,1,2)
plot(t,y(2,:));
grid on

euler.m :

function [t,y] = euler(t0,tf,h,f,y0,k)
%input t0 initial time
%input tf final time
%input h  size of time steps
%input f  function so that y' = f(t,y)
%input y0 starting value for y
%input k  if given, the y(k,:) to plot against t;
%         if not given, nothing is plotted.

% Created by Christian Zinck

t = t0:h:tf;
y = zeros(length(y0),length(t));
y(:,1) = y0;

for i = 1:length(t)-1
    y(:,i+1) = y(:,i) + h*f(t(i),y(:,i)); % one Euler step
end

if nargin > 5
    plot(t,y(k,:));
end
end
Appendix B

What follows is the Matlab code for approximating functions using the Trapezoidal Method. For the scalar case, u = 0 and b = 0.

Trapezoidal.m :

function [t,y] = Trapezoidal(t0,tf,h,A,x,b,k)
%This function computes the Trapezoidal Method approximation
%for a function of the form y' = Ay + bu(t).

if nargin == 5 %For the case y' = Ay.
    b = 0;
    u = @(t) 0;
elseif isa(k,'function_handle') %Checks to see if k is a function.
    u = @(t) k(t);
else %Constant input.
    u = @(t) k;
end
[m,n] = size(x);
if m == 1 && n == 1 && x == 0
    y0 = zeros(length(A),1);
elseif m ~= 1 && n == 1
    y0 = x;
elseif m == 1 && n ~= 1
    y0 = transpose(x);
else
    error('Initial vector value needed.')
end

M = speye(length(A)) - h/2*(A);
[L,U] = lu(M);
t = t0:h:tf;
y = zeros(length(y0),length(t));
y(:,1) = y0;

for i = 1:length(t)-1
    z(:,i) = y(:,i) + h/2.*(A*y(:,i)+b.*u(t(i))+b.*u(t(i+1)));
    v(:,i+1) = L\z(:,i);   %Forward substitution.
    y(:,i+1) = U\v(:,i+1); %Back substitution.
end
if length(y0) <= 5
    for i = 1:length(y0)
        subplot(length(y0),1,i)
        plot(t,y(i,:))
        grid on
    end
end
end
Appendix C

What follows is the Matlab code for computing a rank-k approximation of the Lena matrix and plotting the resulting image.

LenaSVD.m :

load LenaImage
[U,S,V] = svd(A,0);
n = 409;
k = rank(A) - n; %k = 508 - 409 = 99 for this choice of n.
Ak = zeros(size(A));
for i=1:k
    Ak = Ak + S(i,i)*U(:,i)*transpose(V(:,i)); %Rank-k approximation as a sum of outer products.
end
error = norm(A - Ak,2) / norm(A,2); %Relative error of the approximation.
k
error
imshow(Ak)
image(Ak)
axis off
Appendix D

What follows is the Matlab code for creating a database of faces and reconstructing faces.

YaleFace.m :

load yale_faces
N = length(Y);
Mave = zeros(size(Y{1})); %Makes a blank matrix that is the size of an individual matrix in yale_faces.mat.
[n,m] = size(Y{1});

for i=1:N
    Mave = Mave + Y{i}; %Adds together all the matrices in yale_faces.mat.
end

Mave = (Mave)/N; %Average matrix in yale_faces.mat.
mave = reshape(Mave,n*m,1); %Vectorizes the average matrix.

for i = 1:N
    D(:,i) = reshape(Y{i},n*m,1) - mave; %Subtracts the vector of the average matrix from each vectorized matrix in yale_faces.mat.
end

D = D/sqrt(N); %"Database" of Yale faces.
[U,S,V] = svd(D,0);
%for i = 1:4
%    subplot(2,2,i)
%    imagesc(reshape(U(:,i),n,m));colormap(gray) %This plots the eigenfaces.
%end
load images_to_reconstruct
F{1} = F1; F{2} = F2; F{3} = F3; %Store the three images in a cell block so they can be accessed by index.

for i=1:length(F)
    [m(i),n(i)] = size(F{i}); %Acquire the dimensions of the matrices to be reconstructed.

    f{i} = reshape(F{i},m(i)*n(i),1) - mave; %Vectorize the matrices and subtract the average face.

    p{i} = U*U'*f{i}+mave; %Construct the best approximation using our database.

    P{i} = reshape(p{i},m(i),n(i)); %Unvectorize the vectors back into matrices.

    subplot(3,2,2*i-1) %These lines plot the original images on the left
    imagesc(F{i});colormap(gray) %and the database approximations on the right.
    subplot(3,2,2*i)
    imagesc(P{i});colormap(gray)
end
Appendix E

What follows is the Matlab code for applying POD to a system of order n = 1412.

POD_1.m :

load iss12a
t0 = 0;
tf = 50;
h = 0.1;
k = @(t) 1;
[t,X] = Trapezoidal(t0,tf,h,A,0,b,k);
[U,S,V] = svd(X,0);

i = 1;
error = 5*10^-3;
while S(i,i)/S(1,1) > error %This loop determines r for the given error when u(t) = 1.
    r = i;                  %If a different r-value is desired (say, r = 43 when
    i = i+1;                %u(t) = sin(4*t)), it must be entered manually,
end                         %and these lines must be deleted.

Ur = U(:,1:r);
Ar = Ur'*A*Ur;
br = Ur'*b;

[t,Xhat] = Trapezoidal(t0,tf,h,Ar,0,br,k);
Xhat = Ur*Xhat;

plot(t,X(1,:),'b')
hold on
plot(t,Xhat(1,:),'r')
grid on
Appendix F

What follows is the Matlab code for applying POD to a system of order n = 20000.

POD_2.m :

load msd20000
t0 = 0;
tf = 5;
h = 0.01;
x0 = 0;
u1 = @(t) sin(5*t);
[t,X] = Trapezoidal(t0,tf,h,A,x0,b,u1);
[U,S,V] = svd(X,0);

i = 1;
error = 5*10^-3;
while S(i,i)/S(1,1) > error
    r = i;
    i = i+1;
end

Ur = U(:,1:r);
Ar = Ur'*A*Ur;
br = Ur'*b;

u2 = @(t) square(10*t);

[t,X1] = Trapezoidal(t0,tf,h,A,x0,b,u2);
[t,X2] = Trapezoidal(t0,tf,h,Ar,0,br,u2);
Xhat = Ur*X2;

figure
plot(t,X1(2,:),'b')
hold on
plot(t,Xhat(2,:),'r')
grid on
