# Lecture 8: Dynamic Programming

1. **Algorithms Analysis, Lecture 8:** Minimum and Maximum Algorithms + Dynamic Programming
2. **Min and Max**
   - The minimum of a set of elements is the first order statistic (i = 1).
   - The maximum of a set of elements is the n-th order statistic (i = n).
   - The median is the "halfway point" of the set:
     - i = (n+1)/2, which is unique when n is odd;
     - when n is even there are two: the lower median i = ⌊(n+1)/2⌋ = n/2 and the upper median i = ⌈(n+1)/2⌉ = n/2 + 1.
3. **Finding Minimum or Maximum**
   - Alg.: MINIMUM(A, n)
     - min ← A[1]
     - for i ← 2 to n
       - do if min > A[i]
         - then min ← A[i]
     - return min
   - How many comparisons are needed?
     - n − 1: every element except the minimum must, at least once, be compared with some smaller element.
     - The same number of comparisons is needed to find the maximum.
     - The algorithm is optimal with respect to the number of comparisons performed.
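The MINIMUM pseudocode above can be rendered in Python; this is a hedged sketch rather than the lecture's own code (the function name `minimum` is mine):

```python
def minimum(A):
    """Return the smallest element of a non-empty list A.

    Performs exactly len(A) - 1 comparisons, which is optimal.
    """
    m = A[0]          # min <- A[1]
    for x in A[1:]:   # for i <- 2 to n
        if x < m:     # do if min > A[i]
            m = x     #   then min <- A[i]
    return m
```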
4. **Simultaneous Min, Max**
   - Finding min and max independently uses n − 1 comparisons each, 2n − 2 in total.
   - At most 3n/2 comparisons are actually needed:
     - process the elements in pairs, maintaining the minimum and maximum of the elements seen so far;
     - do not compare each element to the minimum and maximum separately;
     - instead, compare the two elements of a pair to each other, then compare the larger one to the maximum so far and the smaller one to the minimum so far;
     - this costs only 3 comparisons for every 2 elements.
5. **Analysis of Simultaneous Min, Max**
   - Setting up initial values:
     - n is odd: set both min and max to the first element;
     - n is even: compare the first two elements, assigning the smaller to min and the larger to max.
   - Total number of comparisons:
     - n is odd: 3(n − 1)/2 comparisons;
     - n is even: 1 initial comparison + 3(n − 2)/2 further comparisons = 3n/2 − 2 comparisons.
6. **Example: Simultaneous Min, Max**
   - n = 5 (odd), array A = {2, 7, 1, 3, 4}
   - Set min = max = 2.
   - Compare elements in pairs:
     - 1 < 7 → compare 1 with min and 7 with max → min = 1, max = 7 (3 comparisons)
     - 3 < 4 → compare 3 with min and 4 with max → min = 1, max = 7 (3 comparisons)
   - We performed 3(n − 1)/2 = 6 comparisons in total.
7. **Example: Simultaneous Min, Max**
   - n = 6 (even), array A = {2, 5, 3, 7, 1, 4}
   - Compare 2 with 5: 2 < 5, so set min = 2, max = 5 (1 comparison).
   - Compare elements in pairs:
     - 3 < 7 → compare 3 with min and 7 with max → min = 2, max = 7 (3 comparisons)
     - 1 < 4 → compare 1 with min and 4 with max → min = 1, max = 7 (3 comparisons)
   - We performed 3n/2 − 2 = 7 comparisons in total.
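The two worked examples above can be checked with a short Python sketch of the pairwise strategy (my own rendering; the name `min_max` and the explicit comparison counter are not from the lecture):

```python
def min_max(A):
    """Find the min and max of A by processing pairs, counting comparisons.

    Odd n: 3(n-1)/2 comparisons; even n: 3n/2 - 2 comparisons.
    """
    n = len(A)
    comparisons = 0
    if n % 2 == 1:                    # odd n: both start at the first element
        lo = hi = A[0]
        start = 1
    else:                             # even n: one initial comparison
        comparisons += 1
        lo, hi = (A[0], A[1]) if A[0] < A[1] else (A[1], A[0])
        start = 2
    for i in range(start, n - 1, 2):  # 3 comparisons per remaining pair
        comparisons += 3
        small, big = (A[i], A[i + 1]) if A[i] < A[i + 1] else (A[i + 1], A[i])
        if small < lo:
            lo = small
        if big > hi:
            hi = big
    return lo, hi, comparisons
```

On the slides' data, `min_max([2, 7, 1, 3, 4])` performs 6 comparisons and `min_max([2, 5, 3, 7, 1, 4])` performs 7, matching the counts above.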
8. **Advanced Design and Analysis Techniques**
   - Covers important techniques for the design and analysis of efficient algorithms, such as dynamic programming and greedy algorithms.
9. **Dynamic Programming**
   - Divide-and-conquer is a well-known algorithm design technique.
   - Dynamic programming is another strategy for designing algorithms, used when a problem breaks down into recurring small subproblems.
   - Dynamic programming is typically applied to optimization problems. Such a problem can have many solutions; each solution has a value, and we wish to find a solution with the optimal value.
10. **Divide-and-Conquer**
    - The divide-and-conquer method for algorithm design:
    - Divide: if the input is too large to deal with in a straightforward manner, divide the problem into two or more disjoint subproblems.
    - Conquer: solve the subproblems recursively.
    - Combine: take the solutions to the subproblems and "merge" them into a solution for the original problem.
11. **Divide-and-Conquer: Example** (figure not reproduced)
12. **Dynamic Programming**
    - Dynamic programming is a way of improving on inefficient divide-and-conquer algorithms.
    - By "inefficient" we mean that the same recursive call is made over and over.
    - If the same subproblem is solved several times, we can use a table to store its result the first time it is computed, and never recompute it again.
    - Dynamic programming is applicable when the subproblems are dependent, that is, when subproblems share subsubproblems.
    - "Programming" refers to a tabular method.
13. **Difference between DP and Divide-and-Conquer**
    - Divide-and-conquer is inefficient on such problems because the same common subproblems have to be solved many times.
    - DP solves each of them once and stores the answers in a table for future use.
14. **Elements of Dynamic Programming (DP)**
    - DP is used to solve problems with the following characteristics:
    - Simple subproblems: we should be able to break the original problem into smaller subproblems that have the same structure.
    - Optimal substructure: the optimal solution to the problem contains within it optimal solutions to its subproblems.
    - Overlapping subproblems: there are places where we solve the same subproblem more than once.
15. **Steps to Designing a Dynamic Programming Algorithm**
    1. Characterize the optimal substructure.
    2. Recursively define the value of an optimal solution.
    3. Compute the value bottom-up.
    4. (If needed) construct an optimal solution.
16. **Fibonacci Numbers**
    - F(n) = F(n−1) + F(n−2) for n ≥ 2, with F(0) = 0 and F(1) = 1.
    - 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, …
    - The straightforward recursive procedure is slow!
    - Let's draw the recursion tree.
17. **Fibonacci Numbers** (recursion-tree figure not reproduced)
18. **Fibonacci Numbers**
    - How many summations are there? Use the golden ratio: as you go farther and farther to the right in the sequence, the ratio of a term to the one before it gets closer and closer to the golden ratio φ ≈ 1.618.
    - Our recursion tree has only 0s and 1s as leaves, so we have on the order of 1.6^n summations.
    - The running time is exponential!
19. **Fibonacci Numbers**
    - We can calculate F(n) in linear time by remembering solutions to already-solved subproblems: dynamic programming.
    - Compute the solution in a bottom-up fashion.
    - In this case, only two values need to be remembered at any time.
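The "only two values" observation translates directly into code; a minimal Python sketch (the name `fib` is mine):

```python
def fib(n):
    """Bottom-up Fibonacci in O(n) time, remembering only two values."""
    if n < 2:
        return n
    prev, curr = 0, 1                 # F(0), F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```

Compare this with the naive recursion, which recomputes the same subproblems exponentially often.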
20. **Ex. 1: Assembly-Line Scheduling**
    - An automobile factory has two assembly lines.
    - Each line has the same number n of stations, numbered j = 1, 2, ..., n.
    - We denote the j-th station on line i (where i is 1 or 2) by S(i,j).
    - The j-th station on line 1, S(1,j), performs the same function as the j-th station on line 2, S(2,j).
    - The time required at each station varies, even between stations at the same position on the two lines, because each assembly line uses different technology.
    - The time required at station S(i,j) is a(i,j).
    - There is also an entry time e(i) for the chassis to enter assembly line i and an exit time x(i) for the completed auto to exit assembly line i.
21. **Ex. 1: Assembly-Line Scheduling**
    - The time between adjacent stations on the same line is nearly 0; transferring to the other line after a station costs an extra transfer time (the t terms used below).
22. **Problem Definition**
    - Problem: given all these costs, which stations should be chosen from line 1 and which from line 2 to minimize the total time for car assembly?
    - "Brute force" tries all possibilities:
      - it requires examining Ω(2^n) possibilities;
      - trying all 2^n subsets is infeasible when n is large;
      - simple example: 2 stations → 2^n = 4 possibilities.
23. **Step 1: Optimal Solution Structure**
    - Optimal substructure: choosing the best path to S(i,j).
    - The structure of the fastest way through the factory (from the starting point):
    - The fastest possible way to get through S(i,1) (i = 1, 2):
      - only one way: from the entry starting point to S(i,1);
      - the time taken is the entry time e(i) (plus the station time a(i,1)).
24. **Step 1: Optimal Solution Structure**
    - The fastest possible way to get through S(i,j) (i = 1, 2; j = 2, 3, ..., n) involves two choices:
      - Stay on the same line: S(i,j−1) → S(i,j); the time is f(i)[j−1] + a(i,j).
        - If the fastest way through S(i,j) is through S(i,j−1), it must use a fastest way through S(i,j−1).
      - Transfer from the other line: S(3−i,j−1) → S(i,j); the time is f(3−i)[j−1] + t(3−i,j−1) + a(i,j).
        - The same reasoning applies.
25. **Step 1: Optimal Solution Structure**
    - An optimal solution to the problem (finding the fastest way to get through S(i,j)) contains within it optimal solutions to subproblems (finding the fastest way to get through S(i,j−1) or S(3−i,j−1)).
    - The fastest way from the starting point to S(i,j) is either:
      - the fastest way from the starting point to S(i,j−1), then directly from S(i,j−1) to S(i,j), or
      - the fastest way from the starting point to S(3−i,j−1), then a transfer from line 3−i to line i, and finally to S(i,j).
    - ⇒ optimal substructure.
26. **Example** (figure not reproduced)
27. **Step 2: Recursive Solution**
    - Define the value of an optimal solution recursively in terms of optimal solutions to subproblems.
    - The subproblem here: finding the fastest way through station j on both lines (i = 1, 2).
    - Let f(i)[j] be the fastest possible time to go from the starting point through S(i,j).
    - The fastest time to go all the way through the factory is f* = min( f(1)[n] + x(1), f(2)[n] + x(2) ), where x(1) and x(2) are the exit times from lines 1 and 2, respectively.
28. **Step 2: Recursive Solution**
    - The fastest time to go through S(i,j):
      - base case (j = 1): f(i)[1] = e(i) + a(i,1), where e(1) and e(2) are the entry times for lines 1 and 2;
      - for j ≥ 2: f(i)[j] = min( f(i)[j−1] + a(i,j), f(3−i)[j−1] + t(3−i,j−1) + a(i,j) ).
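The base case and recurrence can be sketched in Python. This is a hedged rendering with my own names (`fastest_time`) and an assumed data layout (0-indexed lists `a`, `t`, `e`, `x`), since the slide formulas were in figures that are not reproduced here:

```python
def fastest_time(a, t, e, x):
    """Fastest total time through the two-line factory.

    a[i][j]: time at station j on line i (0-indexed)
    t[i][j]: time to transfer to the other line after station j on line i
    e[i], x[i]: entry and exit times for line i
    Only the two previous f-values are kept, so O(1) extra space is used.
    """
    n = len(a[0])
    f1, f2 = e[0] + a[0][0], e[1] + a[1][0]              # f(i)[1] = e(i) + a(i,1)
    for j in range(1, n):
        nf1 = min(f1 + a[0][j], f2 + t[1][j - 1] + a[0][j])
        nf2 = min(f2 + a[1][j], f1 + t[0][j - 1] + a[1][j])
        f1, f2 = nf1, nf2
    return min(f1 + x[0], f2 + x[1])                     # f* = min over the two exits
```

On the standard textbook instance of this problem (an assumption here, since the slide's data figure is missing: a = [[7,9,3,4,8,4],[8,5,6,4,5,7]], t = [[2,3,1,3,4],[2,1,2,2,1]], e = [2,4], x = [3,2]) this returns 38.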
29. **Example** (worked-example figures not reproduced)
31. **Step 2: Recursive Solution**
    - To keep track of how to construct an optimal solution, define l(i)[j]: the line number whose station j−1 is used in a fastest way through S(i,j) (i = 1, 2 and j = 2, 3, ..., n).
    - We avoid defining l(i)[1] because no station precedes station 1 on either line.
    - We also define l*: the line whose station n is used in a fastest way through the entire factory.
32. **Step 2: Recursive Solution**
    - Using the values of l* and l(i)[j] shown in part (b) of the example figure, we can trace a fastest way through the factory of part (a).
    - The fastest total time comes from choosing stations 1, 3, and 6 on line 1 and stations 2, 4, and 5 on line 2.
33. **Step 3: Optimal Solution Value** (a sequence of slides steps through the bottom-up computation of the f(i)[j] and l(i)[j] tables; the figures are not reproduced here)
41. **Step 4: Optimal Solution**
    - Constructing the fastest way through the factory.
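Step 4 can be sketched by extending the DP with the predecessor table l(i)[j] and tracing it back from l*. A hedged, self-contained Python rendering (all names mine; the test data below is the standard textbook instance, assumed here because the slide figures are not reproduced):

```python
def fastest_way_route(a, t, e, x):
    """Assembly-line DP with solution reconstruction.

    a[i][j]: time at station j on line i (0-indexed lines and stations)
    t[i][j]: transfer time after station j on line i
    e[i], x[i]: entry and exit times for line i
    Returns (f*, route), where route is a list of (line, station) pairs
    with stations numbered 1..n.
    """
    n = len(a[0])
    f = [[0] * n for _ in range(2)]     # f[i][j]: fastest time through S(i,j)
    l = [[0] * n for _ in range(2)]     # l[i][j]: line used at station j-1
    for i in range(2):
        f[i][0] = e[i] + a[i][0]
    for j in range(1, n):
        for i in range(2):
            stay = f[i][j - 1] + a[i][j]
            switch = f[1 - i][j - 1] + t[1 - i][j - 1] + a[i][j]
            if stay <= switch:
                f[i][j], l[i][j] = stay, i
            else:
                f[i][j], l[i][j] = switch, 1 - i
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        fstar, line = f[0][n - 1] + x[0], 0    # l* = line 0
    else:
        fstar, line = f[1][n - 1] + x[1], 1    # l* = line 1
    route = [(line, n)]
    for j in range(n - 1, 0, -1):              # trace the l pointers backwards
        line = l[line][j]
        route.append((line, j))
    route.reverse()
    return fstar, route
```

On the assumed textbook instance this reports a fastest time of 38 using stations 1, 3, 6 on line 1 and 2, 4, 5 on line 2, matching the trace described on the slide above.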
