1. Dynamic Programming (Lecture 16)
  2. 2. Dynamic Programming Dynamic programming is typically applied to optimization problems. In such problems there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value.
  3. 3. Dynamic Programming Like divide and conquer, Dynamic Programming solves problems by combining solutions to sub problems. Unlike divide and conquer, sub problems are not independent.  Subproblems may share subsubproblems,  However, solution to one subproblem may not affect the solutions to other subproblems of the same problem. (More on this later.) Dynamic Programming reduces computation by  Solving subproblems in a bottom-up fashion.  Storing solution to a subproblem the first time it is solved.  Looking up the solution when subproblem is encountered again
4. Dynamic Programming. The development of a dynamic-programming algorithm can be broken into a sequence of four steps.
    1. Characterize the structure of an optimal solution.
    2. Recursively define the value of an optimal solution.
    3. Compute the value of an optimal solution in a bottom-up fashion.
    4. Construct an optimal solution from computed information.
  5. 5. Matrix-chain Multiplication Suppose we have a sequence or chain A1, A2, …, An of n matrices to be multiplied  That is, we want to compute the product A1A2…An There are many possible ways (parenthesizations) to compute the product
  6. 6. Matrix-chain Multiplication Example: consider the chain A1, A2, A3, A4 of 4 matrices  Let us compute the product A1A2A3A4 There are 5 possible ways: 1. (A1(A2(A3A4))) 2. (A1((A2A3)A4)) 3. ((A1A2)(A3A4)) 4. ((A1(A2A3))A4) 5. (((A1A2)A3)A4)
7. Algorithm to Multiply 2 Matrices.
    Input: matrices A and B with dimensions p×q and q×r.
    Result: the p×r matrix C resulting from the product A·B.
    MATRIX-MULTIPLY(A, B)
        for i ← 1 to p
            do for j ← 1 to r
                do C[i, j] ← 0
                   for k ← 1 to q
                       do C[i, j] ← C[i, j] + A[i, k] · B[k, j]
        return C
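A direct Python translation of this pseudocode might look as follows (a minimal sketch; the name matrix_multiply and the list-of-lists representation are illustrative, not part of the original slides):

    def matrix_multiply(A, B):
        """Multiply a p x q matrix A by a q x r matrix B, given as lists of lists."""
        p, q = len(A), len(A[0])
        if len(B) != q:
            raise ValueError("incompatible dimensions")
        r = len(B[0])
        C = [[0] * r for _ in range(p)]   # p x r result, initialized to zero
        for i in range(p):
            for j in range(r):
                for k in range(q):
                    C[i][j] += A[i][k] * B[k][j]
        return C

The triple loop performs exactly p·q·r scalar multiplications, which is the cost measure used in the rest of the lecture.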
  8. 8. Matrix-chain Multiplication Example: Consider three matrices A10×100, B100×5, and C5×50 There are 2 ways to parenthesize  ((AB)C) = D10×5 · C5×50  AB ⇒ 10·100·5=5,000 scalar multiplications  DC ⇒ 10·5·50 =2,500 scalar multiplications  (A(BC)) = A10×100 · E100×50  BC ⇒ 100·5·50=25,000 scalar multiplications  AE ⇒ 10·100·50 =50,000 scalar multiplications
9. Matrix-chain Multiplication Problem. Given a chain A1, A2, …, An of n matrices, where for i = 1, 2, …, n, matrix Ai has dimension pi-1×pi, parenthesize the product A1A2…An so that the total number of scalar multiplications is minimized. The brute-force method of exhaustive search takes time exponential in n.
10. Step 1: The structure of an optimal parenthesization. The optimal substructure of this problem is as follows. Suppose that the optimal parenthesization of AiAi+1…Aj splits the product between Ak and Ak+1. Then the parenthesization of the subchain AiAi+1…Ak within this optimal parenthesization of AiAi+1…Aj must itself be an optimal parenthesization of AiAi+1…Ak. A similar observation holds for the parenthesization of the subchain Ak+1Ak+2…Aj.
11. Step 2: A recursive solution. Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix product AiAi+1…Aj; the cost of a cheapest way to compute A1A2…An would thus be m[1, n]. The recurrence is:
    m[i, j] = 0                                                          if i = j
    m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + pi-1·pk·pj }   if i < j
12. Step 3: Computing the optimal costs.
    MATRIX-CHAIN-ORDER(p)
        n ← length[p] - 1
        for i ← 1 to n
            do m[i, i] ← 0
        for l ← 2 to n
            do for i ← 1 to n - l + 1
                do j ← i + l - 1
                   m[i, j] ← ∞
                   for k ← i to j - 1
                       do q ← m[i, k] + m[k + 1, j] + pi-1·pk·pj
                          if q < m[i, j]
                              then m[i, j] ← q
                                   s[i, j] ← k
        return m and s
    Its running time is O(n³).
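For comparison, a bottom-up Python version of the same procedure might look like this (a sketch; p is the dimension list p0, p1, …, pn, and all names are illustrative):

    def matrix_chain_order(p):
        """Given dimensions p[0..n], return the cost table m and the split table s."""
        n = len(p) - 1
        # Tables are indexed 1..n to match the pseudocode; index 0 is unused.
        m = [[0] * (n + 1) for _ in range(n + 1)]
        s = [[0] * (n + 1) for _ in range(n + 1)]
        for l in range(2, n + 1):              # l is the length of the subchain
            for i in range(1, n - l + 2):
                j = i + l - 1
                m[i][j] = float('inf')
                for k in range(i, j):          # try every split point k
                    q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if q < m[i][j]:
                        m[i][j] = q
                        s[i][j] = k
        return m, s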
13. Example. Matrix dimensions:
    A1: 30 × 35
    A2: 35 × 15
    A3: 15 × 5
    A4: 5 × 10
    A5: 10 × 20
    A6: 20 × 25
14. Step 4: Constructing an optimal solution.
    PRINT-OPTIMAL-PARENS(s, i, j)
        if i = j
            then print "A", i
            else print "("
                 PRINT-OPTIMAL-PARENS(s, i, s[i, j])
                 PRINT-OPTIMAL-PARENS(s, s[i, j] + 1, j)
                 print ")"
    For the example above, the call PRINT-OPTIMAL-PARENS(s, 1, 6) prints the optimal parenthesization ((A1(A2A3))((A4A5)A6)).
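A Python counterpart that returns the parenthesization as a string, together with a usage example built on the matrix_chain_order sketch above and the dimension table from the previous slide (illustrative, not part of the original slides):

    def print_optimal_parens(s, i, j):
        if i == j:
            return "A" + str(i)
        k = s[i][j]
        return "(" + print_optimal_parens(s, i, k) + print_optimal_parens(s, k + 1, j) + ")"

    # Dimensions from the table: A1 is 30x35, A2 is 35x15, ..., A6 is 20x25
    p = [30, 35, 15, 5, 10, 20, 25]
    m, s = matrix_chain_order(p)
    print(m[1][6])                         # 15125 scalar multiplications
    print(print_optimal_parens(s, 1, 6))   # ((A1(A2A3))((A4A5)A6))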
15. Longest common subsequence. Formally, given a sequence X = {x1, x2, . . . , xm}, another sequence Z = {z1, z2, . . . , zk} is a subsequence of X if there exists a strictly increasing sequence i1, i2, . . . , ik of indices of X such that for all j = 1, 2, . . . , k we have xij = zj. For example, Z = {B, C, D, B} is a subsequence of X = {A, B, C, B, D, A, B} with corresponding index sequence {2, 3, 5, 7}.
16. Given two sequences X and Y, we say that a sequence Z is a common subsequence of X and Y if Z is a subsequence of both X and Y. In the longest-common-subsequence (LCS) problem, we are given two sequences X = {x1, x2, . . . , xm} and Y = {y1, y2, . . . , yn} and wish to find a maximum-length common subsequence of X and Y.
17. Step 1: Characterizing a longest common subsequence. Let X = {x1, x2, . . . , xm} and Y = {y1, y2, . . . , yn} be sequences, and let Z = {z1, z2, . . . , zk} be any LCS of X and Y.
    1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
    2. If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm-1 and Y.
    3. If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn-1.
18. Step 2: A recursive solution to subproblems. Let us define c[i, j] to be the length of an LCS of the sequences Xi and Yj (the prefixes consisting of the first i elements of X and the first j elements of Y). The optimal substructure of the LCS problem gives the recursive formula:
    c[i, j] = 0                               if i = 0 or j = 0
    c[i, j] = c[i-1, j-1] + 1                 if i, j > 0 and xi = yj
    c[i, j] = max(c[i, j-1], c[i-1, j])       if i, j > 0 and xi ≠ yj
19. Step 3: Computing the length of an LCS.
    LCS-LENGTH(X, Y)
        m ← length[X]
        n ← length[Y]
        for i ← 1 to m
            do c[i, 0] ← 0
        for j ← 0 to n
            do c[0, j] ← 0
        for i ← 1 to m
            do for j ← 1 to n
                do if xi = yj
                       then c[i, j] ← c[i - 1, j - 1] + 1
                            b[i, j] ← "↖"
                       else if c[i - 1, j] ≥ c[i, j - 1]
                           then c[i, j] ← c[i - 1, j]
                                b[i, j] ← "↑"
                           else c[i, j] ← c[i, j - 1]
                                b[i, j] ← "←"
        return c and b
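In Python, the same table-filling procedure might be written as follows (a sketch; Python strings are 0-indexed, so X[i - 1] plays the role of xi, and the b table stores named markers in place of the three arrows):

    def lcs_length(X, Y):
        """Fill the length table c and the direction table b for sequences X and Y."""
        m, n = len(X), len(Y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        b = [[None] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1
                    b[i][j] = "diag"      # corresponds to the "↖" arrow
                elif c[i - 1][j] >= c[i][j - 1]:
                    c[i][j] = c[i - 1][j]
                    b[i][j] = "up"        # corresponds to the "↑" arrow
                else:
                    c[i][j] = c[i][j - 1]
                    b[i][j] = "left"      # corresponds to the "←" arrow
        return c, b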
20. The running time of the procedure is O(mn). Example: X = ABCBDAB and Y = BDCABA.
21. Step 4: Constructing an LCS.
    PRINT-LCS(b, X, i, j)
        if i = 0 or j = 0
            then return
        if b[i, j] = "↖"
            then PRINT-LCS(b, X, i - 1, j - 1)
                 print xi
        elseif b[i, j] = "↑"
            then PRINT-LCS(b, X, i - 1, j)
        else PRINT-LCS(b, X, i, j - 1)
    The initial call is PRINT-LCS(b, X, length[X], length[Y]).
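A Python counterpart of PRINT-LCS, applied to the example sequences from the previous slide and using the lcs_length sketch above (illustrative only):

    def print_lcs(b, X, i, j, out):
        """Walk the direction table b and collect one LCS of X in the list out."""
        if i == 0 or j == 0:
            return
        if b[i][j] == "diag":
            print_lcs(b, X, i - 1, j - 1, out)
            out.append(X[i - 1])
        elif b[i][j] == "up":
            print_lcs(b, X, i - 1, j, out)
        else:
            print_lcs(b, X, i, j - 1, out)

    x, y = "ABCBDAB", "BDCABA"
    c, b = lcs_length(x, y)
    out = []
    print_lcs(b, x, len(x), len(y), out)
    print(c[len(x)][len(y)], "".join(out))   # 4 BCBA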
22. Elements of dynamic programming. For dynamic programming to be applicable, an optimization problem must exhibit two key ingredients: optimal substructure and overlapping subproblems. A closely related technique is memoization.
23. Optimal substructure. The first step in solving an optimization problem by dynamic programming is to characterize the structure of an optimal solution. We say that a problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems. Whenever a problem exhibits optimal substructure, it is a good clue that dynamic programming might apply.
24. Overlapping subproblems. When a recursive algorithm revisits the same subproblem over and over again, we say that the optimization problem has overlapping subproblems.
25. Memoization. A memoized recursive algorithm maintains an entry in a table for the solution to each subproblem. Each table entry initially contains a special value to indicate that the entry has yet to be filled in. When the subproblem is first encountered during the execution of the recursive algorithm, its solution is computed and then stored in the table. Each subsequent time that the subproblem is encountered, the value stored in the table is simply looked up and returned.
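To make this concrete, the matrix-chain recurrence from Step 2 can be memoized directly in a top-down Python version (a sketch; the names are illustrative, and None plays the role of the special "not yet filled in" value):

    def memoized_matrix_chain(p):
        """Top-down matrix-chain cost with memoization; p holds the dimensions p0..pn."""
        n = len(p) - 1
        m = [[None] * (n + 1) for _ in range(n + 1)]   # None = entry not yet filled in

        def lookup_chain(i, j):
            if m[i][j] is not None:        # already solved: just look it up
                return m[i][j]
            if i == j:
                m[i][j] = 0
            else:
                m[i][j] = min(lookup_chain(i, k) + lookup_chain(k + 1, j)
                              + p[i - 1] * p[k] * p[j]
                              for k in range(i, j))
            return m[i][j]

        return lookup_chain(1, n)

    # Same dimensions as the earlier example; prints 15125
    print(memoized_matrix_chain([30, 35, 15, 5, 10, 20, 25]))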