UNIT - 4
Dynamic Programming
• Dynamic Programming is also used in optimization
problems.
• Like divide-and-conquer method, Dynamic
Programming solves problems by combining the
solutions of sub problems.
• Moreover, a Dynamic Programming algorithm solves
each sub-problem just once and then saves its answer in
a table, thereby avoiding the work of re-computing the
answer every time.
-A given problem can be solved using Dynamic
Programming if it has two main properties:
1. Overlapping sub-problems.
2. Optimal substructure.
1. Overlapping Sub-Problems
• Similar to Divide-and-Conquer approach,
Dynamic Programming also combines solutions
to sub-problems.
• It is mainly used where the solution of one sub-
problem is needed repeatedly.
• The computed solutions are stored in a table, so
that these don’t have to be re-computed.
• -> In Dynamic Programming every problem is solved using
Bellman's principle of optimality: an optimal sequence of decisions
has the property that, whatever the first decision is, the remaining
decisions must be optimal with respect to the state that results from it.
Viewed as stages, the output of stage 1 is the input to stage 2, the
output of stage 2 is the input to stage 3, and so on.
• Hence, this technique is needed where overlapping sub-
problem exists.
• For example, Binary Search does not have overlapping sub-
problems, whereas the recursive program for Fibonacci
numbers has many overlapping sub-problems.
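The Fibonacci example above can be made concrete with a short Python sketch (Python is used here only for illustration): the naive recursion re-solves the same overlapping sub-problems, while a memoized version saves each answer in a table.

```python
from functools import lru_cache

def fib_plain(n):
    # Naive recursion: fib_plain(n-1) and fib_plain(n-2) re-solve the
    # same overlapping sub-problems again and again (exponential time).
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoized version: each sub-problem is solved once and its answer
    # is saved in a table (the cache), so it is never re-computed.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
```

Both functions return the same values; only the memoized one runs in linear time.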
2.Optimal Sub-Structure
• A given problem has the Optimal Substructure
property if an optimal solution of the given
problem can be obtained using optimal solutions
of its sub-problems.
• For example, the Shortest Path problem has the
following optimal substructure property −
• If a node x lies in the shortest path from a
source node u to destination node v, then the
shortest path from u to v is the combination of
the shortest path from u to x, and the shortest
path from x to v.
• The standard All Pair Shortest Path algorithms
like Floyd-Warshall and Bellman-Ford are
typical examples of Dynamic Programming.
Steps of Dynamic Programming Approach
• A Dynamic Programming algorithm is designed using
the following four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically bottom-up.
4. Construct an optimal solution from the computed information.
Applications of Dynamic Programming Approach
1. All-pairs shortest paths.
2. Chain matrix multiplication.
3. Travelling salesperson problem.
4. 0/1 knapsack problem.
5. Optimal binary search tree (OBST).
6. Reliability design.
1.All-Pairs shortest paths
(Floyd-Warshall)
• Let G = (V, E) be a directed graph with n
vertices. Let cost be a cost adjacency matrix
for G such that
-- cost(i, i) = 0, 1 ≤ i ≤ n.
• cost(i, j) is the length (or cost) of edge
(i, j) if (i, j) ∈ E(G),
• and cost(i, j) = ∞ if i ≠ j and (i, j) ∉ E(G).
• The all-pairs shortest-path problem is to
determine a matrix A such that A(i, j) is the
length of a shortest path from i to j.
• The matrix A can be obtained by solving n
single-source problems using the algorithm
ShortestPaths.
• Since each application of this procedure requires
O(n²) time, the matrix A can be obtained in
O(n³) time.
• We obtain an alternate O(n³) solution to this
problem using the principle of optimality.
• Let us examine a shortest i to j path in G, i≠ j
• This path originates at vertex i and goes through
some intermediate vertices (possibly none) and
terminates at vertex j.
• We can assume that this path contains no cycles;
if there is a cycle, then it can be deleted without
increasing the path length (no cycle has negative
length).
• If ‘k’ is an intermediate vertex on this Shortest
path, then the sub paths from i to k and from k to j
must be shortest paths from i to k and k to j,
respectively.
• Otherwise, the i to j path is not of minimum length.
So, the principle of optimality holds.
• This alerts us to the prospect of using dynamic
programming.
• If ‘k’ is the intermediate vertex with highest
index , then the i to k path is a shortest i to k
path in G going through no vertex with index
greater than k -1.
• Similarly the k to j path is a shortest k to j path
in G going through no vertex of index greater
than k-1.
• We can regard the construction of a shortest
i to j path as first requiring a decision as to
which is the highest indexed intermediate vertex
k.
• Once this decision has been made, we need to
find two shortest paths,
• One from i to k and the other from k to j.
• Neither of these may go through a Vertex with
index greater than k -1.
• Using A^k(i, j) to represent the length of a
shortest path from i to j going through no vertex
of index greater than k, we obtain
A(i, j) = min{ min 1≤k≤n { A^(k-1)(i, k) + A^(k-1)(k, j) }, cost(i, j) }
• Clearly, A^0(i, j) = cost(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ n.
• We can obtain a recurrence for A^k(i, j) using an
argument similar to that used before.
• A shortest path from i to j going through no vertex
higher than k either goes through vertex k or it does
not.
• If it does, A^k(i, j) = A^(k-1)(i, k) + A^(k-1)(k, j).
• If it does not, then no intermediate vertex has index
greater than k-1; hence A^k(i, j) = A^(k-1)(i, j).
• Combining the two cases, we get
A^k(i, j) = min{ A^(k-1)(i, j), A^(k-1)(i, k) + A^(k-1)(k, j) },  k ≥ 1
Pseudocode: compute all-pairs shortest paths for the
given graph using dynamic programming.
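The slide's pseudocode did not survive extraction, so here is a minimal Python sketch of the recurrence A^k(i, j) = min{A^(k-1)(i, j), A^(k-1)(i, k) + A^(k-1)(k, j)} (updating A in place is safe because row and column k do not change during iteration k):

```python
INF = float("inf")

def floyd_warshall(cost):
    # cost: n x n matrix with cost[i][i] = 0 and INF for missing edges.
    n = len(cost)
    A = [row[:] for row in cost]          # A^0 = cost
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # A^k(i,j) = min(A^(k-1)(i,j), A^(k-1)(i,k) + A^(k-1)(k,j))
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A
```

For example, `floyd_warshall([[0, 4, 11], [6, 0, 2], [3, INF, 0]])` yields the all-pairs matrix `[[0, 4, 6], [5, 0, 2], [3, 7, 0]]`.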
2.Chain Matrix Multiplication
• Given a sequence of matrices, find the most
efficient way to multiply these matrices
together.
• The problem is not actually to perform the
multiplications, but merely to decide in which
order to perform the multiplications.
• We have many options to multiply a chain of
matrices because matrix multiplication is
associative.
• In other words, no matter how we parenthesize
the product, the result will be the same.
• For example, if we had four matrices A, B, C, and
D, we would have:
(ABC)D = (AB)(CD) = A(BCD) = ....
• However, the order in which we parenthesize the
product affects the number of simple arithmetic
operations needed to compute the product, or the
efficiency.
• For example, suppose A is a 10 × 30 matrix, B is a
30 × 5 matrix, and C is a 5 × 60 matrix. Then,
Case 1: (AB)C = (10×30×5) + (10×5×60)
= 1500 + 3000 = 4500 operations
Case 2: A(BC) = (30×5×60) + (10×30×60)
= 9000 + 18000 = 27000 operations.
Clearly the first parenthesization requires fewer operations.
• Given an array p[] which represents the chain of
matrices such that the ith matrix Ai is of dimension
p[i-1] * p[i].
• We need to write a function MatrixChainOrder() that
should return the minimum number of multiplications
needed to multiply the chain.
• Input: p[] = {40, 20, 30, 10, 30}
Output: 26000
There are 4 matrices of dimensions 40×20, 20×30, 30×10 and 10×30.
Let the input 4 matrices be A, B, C and D.
• The minimum number of multiplications is obtained by
putting parentheses in the following way:
• (A(BC))D --> 20*30*10 + 40*20*10 + 40*10*30 = 6000 + 8000 + 12000 = 26000
• Input: p[] = {10, 20, 30, 40, 30}
Output: 30000
-There are 4 matrices of dimensions 10×20, 20×30,
30×40 and 40×30.
-Let the input 4 matrices be A, B, C and D. The
minimum number of multiplications is obtained by
putting parentheses in the following way:
((AB)C)D --> 10*20*30 + 10*30*40 + 10*40*30 = 6000 + 12000 + 12000 = 30000
• Input: p[] = {10, 20, 30}
Output: 6000
-There are only two matrices, of dimensions 10×20 and
20×30.
-So there is only one way to multiply the matrices, whose
cost is 10*20*30 = 6000.
Optimal Substructure:
• A simple solution is to place parenthesis at all
possible places, calculate the cost for each
placement and return the minimum value.
• In a chain of matrices of size n, we can place the
first set of parentheses in n-1 ways.
• For example, if the given chain is of 4 matrices,
say ABCD, then there are 3 ways to place the first
(outermost) set of parentheses:
(A)(BCD), (AB)(CD) and (ABC)(D).
• So when we place a set of parenthesis, we
divide the problem into subproblems of
smaller size.
• Therefore, the problem has optimal
substructure property and can be easily solved
using recursion.
Minimum number of multiplication needed to
multiply a chain of size n = Minimum of all n-1
placements (these placements create subproblems of
smaller size).
Pseudocode
MATRIX-CHAIN-ORDER(p)        ; p = <p0, p1, …, pn>
  n ← length[p] − 1
  for i ← 1 to n
      do m[i, i] ← 0
  for l ← 2 to n             ; l is the length of the chain
      do for i ← 1 to n − l + 1
             do j ← i + l − 1
                m[i, j] ← ∞
                for k ← i to j − 1
                    do q ← m[i, k] + m[k+1, j] + p(i−1)·p(k)·p(j)
                       if q < m[i, j]
                           then m[i, j] ← q
                                s[i, j] ← k
  return m and s
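The pseudocode above translates almost line for line into Python; this is a sketch in which m[1][n] holds the answer and s records the best split points (tables are 1-indexed like the pseudocode, so row and column 0 are unused):

```python
def matrix_chain_order(p):
    # Matrix A_i has dimension p[i-1] x p[i].
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):              # l = length of the chain
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float("inf")
            for k in range(i, j):          # split as (A_i..A_k)(A_{k+1}..A_j)
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s
```

`matrix_chain_order([40, 20, 30, 10, 30])` gives m[1][4] = 26000, matching the first example above.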
3. Travelling Sales person Problem(TSP)
Problem Statement:
- A traveler needs to visit all the cities from a
list, where distances between all the cities are
known and each city should be visited just
once. What is the shortest possible route that
he visits each city exactly once and returns to
the origin city?
Solution:
• The travelling salesman problem is a notorious
computational problem.
• We can use a brute-force approach to evaluate every
possible tour and select the best one. For n
vertices in a graph, there are (n-1)! possible tours.
• Using the dynamic programming approach instead of
brute force, the solution can be obtained in less time,
though there is no polynomial-time algorithm.
• Let G = (V,E) be a directed graph with edge costs Cij.
• The variable Cij is defined such that
Cij > 0 for all i and j and Cij = ∞ if (i, j) ∉ E.
• Let |V| = n and assume n > 1.
• A tour of G is a directed simple cycle that includes every
vertex in V.
• The cost of a tour is “the sum of the cost of the edges
on the tour”.
• The traveling sales person problem is to find a tour of
minimum cost.
The traveling sales person problem finds
application in a variety of situations.
Example 1:
-Suppose we have to route a postal van to pick up mail
from mail boxes located at ‘n’ different sites.
-An ‘n + 1’ vertex graph can be used to represent the
situation.
-One vertex represents the post office from which the
Postal van starts and to which it must return.
-Edge(i, j)is assigned a cost equal to the distance from
site ‘i’ to site ‘j’.
-The route taken by the postal van is a tour, and we are
interested in finding a tour of minimum length.
• Example2:
-suppose we wish to use a robot arm to tighten
the nuts on some piece of machinery on an
assembly line.
-The arm will start from its initial position
(which is over the first nut to be tightened),
successively move to each of the remaining
nuts, and return to the initial position.
• The path of the arm is clearly a tour on a graph
in which vertices represent the nuts.
• A minimum-cost tour will minimize the time
needed for the arm to complete its task
(note that only the total arm movement time is
variable; the nut tightening time is independent
of the tour).
Example 3:
-Consider a production environment in which several
commodities are manufactured on the same set of
machines.
-The manufacture proceeds in cycles. In each production
cycle, n different commodities are produced.
-When the machines are changed from production of
commodity i to commodity j, a changeover cost Cij is
incurred.
-It is desired to find a sequence in which to manufacture
these commodities.
-This sequence should minimize the sum of changeover
costs (the remaining production costs are sequence
independent).
- Since the manufacture proceeds cyclically, it is necessary to
include the cost of starting the next cycle.
-This is just the changeover cost from the last to the first
commodity.
-Hence, this problem can be regarded as a travelling sales
person problem on an n-vertex graph with edge
costs Cij being the changeover cost from commodity i to
commodity j.
- In the following discussion we shall, without loss of
generality, regard a tour to be a simple path that
starts and ends at vertex ‘1’.
- Every tour consists of an edge (1, k) for some
k ∈ V-{1} and a path from vertex k to vertex 1.
- The path from vertex k to vertex 1 goes through
each vertex in V-{1, k} exactly once.
- It is easy to see that if the tour is optimal, then the
path from ‘k’ to ‘1’ must be a shortest ‘k’ to ‘1’
path going through all vertices in V-{1,k} .
- Hence, the principle of optimality holds.
- Let g(i, S) be the length of a shortest path
starting at vertex ‘i’ ,going through all vertices in
‘S’, and terminating at vertex ‘1’.
- The function g(1 ,V-{1}) is the length of an
optimal sales person tour.
- From the principle of optimality it follows that
g(1, V-{1}) = min 2≤k≤n { c1k + g(k, V-{1,k}) }    …Eq.1
- Generalizing Eq.1, we obtain (for i ∉ S)
g(i, S) = min j∈S { cij + g(j, S-{j}) }    …Eq.2
- Eq.1 can be solved for g(1,V-{1}) if we know
g(k, V -{1, k}) for all choices of ‘k’.
- The ‘g’ values can be obtained by using Eq.2
- Clearly,
g(i, Ø) = ci1, 1 ≤ i ≤ n.    …Eq.3
- Hence, we can use Eq.2 to obtain g(i, S) for all S
of size 1.
- Then we can obtain g(i, S) for S with |S| = 2, and
so on.
- When |S| < n-1, the values of i and S for which
g(i, S) is needed are such that i ≠ 1, 1 ∉ S and i ∉ S.
• Example: Consider the directed graph of Figure
(a). The edge lengths are given by matrix c of
Figure (b). we will illustrate the steps to solve the
travelling salesman problem.
• g(1, V-{1}) = min 2≤k≤n { c1k + g(k, V-{1,k}) }
  g(i, Ø) = ci1, 1 ≤ i ≤ n
  g(i, S) = min j∈S { cij + g(j, S-{j}) }
• From the graph, the cost matrix c is:

        1    2    3    4
   1    0   10   15   20
   2    5    0    9   10
   3    6   13    0   12
   4    8    8    9    0

-> S = Φ (using Eq.3):
g(2, Φ) = c21 = 5
g(3, Φ) = c31 = 6
g(4, Φ) = c41 = 8
-> |S| = 1 (using Eq.2):
g(2, {3}) = c23 + g(3, Φ) = 9 + 6 = 15
g(2, {4}) = c24 + g(4, Φ) = 10 + 8 = 18
g(3, {2}) = c32 + g(2, Φ) = 13 + 5 = 18
g(3, {4}) = c34 + g(4, Φ) = 12 + 8 = 20
g(4, {2}) = c42 + g(2, Φ) = 8 + 5 = 13
g(4, {3}) = c43 + g(3, Φ) = 9 + 6 = 15
g(i , S) = minj∈S{c i j+ g( j , S -{ j})}
• Next, we compute g(i,S) with |S|=2, i≠1,1 ∉ S
and i ∉ S.
g(2, {3,4}) = min{ c23 + g(3,{4}), c24 + g(4,{3}) }
            = min{9+20, 10+15}
            = min{29, 25}
            = 25.
g(3, {2,4}) = min{ c32 + g(2,{4}), c34 + g(4,{2}) }
            = min{13+18, 12+13}
            = min{31, 25}
            = 25.
g(4, {2,3}) = min{ c42 + g(2,{3}), c43 + g(3,{2}) }
            = min{8+15, 9+18}
            = min{23, 27}
            = 23.
-Finally, from Eq.1 we obtain
g(1,{2,3,4})=min{C12+g(2,{3,4}),C13+g(3,{2,4}),
C14+g(4,{2,3})}
= min {10+25,15+25,20+23}
=min{35,40,43}
=35.
• An optimal tour of the graph of Figure(a) has length
35.
• A tour of this length can be constructed if we retain
with each g(i, S) the value of j that minimizes the
right-hand side of Eq.1.
• Let J(i, S) be this value.
• Then J(1,{2,3,4})=2
• Thus the tour starts from 1 and goes to 2.
• The remaining tour can be obtained from g(2,{3,4}).
So J(2,{3,4})= 4.
• Thus the next edge is <2,4>.
• The remaining tour is for g(4, {3}).
So J(4,{3})=3.
• The optimal tour is 1 → 2 → 4 → 3 → 1.
Analysis:
• There are at most n·2^n sub-problems, and each one
takes linear time to solve.
• Therefore, the total running time is O(n²·2^n).
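The g(i, S) recurrence (Eq.1 and Eq.2) can be sketched directly in Python; vertex "1" of the text is index 0 here, and the example's cost matrix reproduces the tour length 35:

```python
from itertools import combinations

def tsp(c):
    # Held-Karp dynamic programming for TSP.
    # g[(i, S)] = length of a shortest path that starts at vertex i,
    # goes through every vertex in S, and ends at vertex 0.
    n = len(c)
    g = {(i, frozenset()): c[i][0] for i in range(1, n)}   # g(i, empty) = c_i1
    for size in range(1, n - 1):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for i in range(1, n):
                if i not in S:
                    # Eq.2: g(i, S) = min over j in S of c_ij + g(j, S - {j})
                    g[(i, S)] = min(c[i][j] + g[(j, S - {j})] for j in S)
    rest = frozenset(range(1, n))
    # Eq.1: g(1, V - {1}) = min over k of c_1k + g(k, V - {1, k})
    return min(c[0][k] + g[(k, rest - {k})] for k in range(1, n))
```

With the matrix of the example, `tsp(...)` returns 35, the optimal tour length computed above.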
4. 0/1 KNAPSACK
• A solution to the knapsack problem can be
obtained by making a sequence of decisions on
the variables x1, x2,x3……. xn.
• A decision on variable xi involves determining
which of the values 0 or 1 is to be assigned to
it.
• Let us assume that decisions on the xi are made
in the order xn , xn-1,…, x1.
• Following a decision on xn , we may be in one
of two possible states:
• The capacity remaining in the knapsack is ‘m’
and no profit has accrued or
• The capacity remaining is m- wn and a profit
of pn has accrued.
• It is clear that the remaining decisions xn-1,…..
x1 must be optimal with respect to the problem
state resulting from the decision on xn.
• Otherwise xn,….., x1 will not be optimal.
• Hence , the principle of optimality holds.
• Let fj(y) be the value of an optimal solution to
KNAP(1, j, y).
• Since the principle of optimality holds, we
obtain
fn(m) = max{ fn-1(m), fn-1(m - wn) + pn }    …Eq.1
• For arbitrary fi(y), i > 0, Eq.1 generalizes to
fi(y) = max{ fi-1(y), fi-1(y - wi) + pi }    …Eq.2
• Eq.2 can be solved for fn(m) by beginning with
the knowledge f0(y) = 0 for all y ≥ 0,
and fi(y) = -∞ for y < 0. Then f1, f2, …, fn can be
successively computed using Eq.2.
• So, we need to compute only fi(yj), 1 ≤ j ≤ k.
• We use the ordered set
Si = {(fi(yj), yj) | 1 ≤ j ≤ k} to represent fi(y).
• Each member of Si is a pair (P, W), where P =
profit and W = weight.
• Notice that S0 = {(0, 0)}.
• We can compute Si+1 from Si by first computing
S1_i = {(P, W) | (P - pi, W - wi) ∈ Si}    …Eq.3
• Note: Si+1 can be computed by merging the pairs
in Si and S1_i together.
• Note that if Si+1 contains two pairs (pj, wj) and
(pk, wk) with the property that pj ≤ pk and
wj ≥ wk, then the pair (pj, wj) can be discarded
because of Eq.2.
• Discarding or purging rules such as this one are
also known as dominance rules.
• Dominated tuples get purged. In the above,
(pk, wk) dominates (pj, wj).
• Let Si represent the possible states resulting from the 2^i
decision sequences for x1, x2, x3, …, xi.
• A state refers to a pair (pj, wj), wj being the total
weight of objects included in the knapsack and pj
being the corresponding profit.
• To obtain Si+1, we note that the possibilities are xi+1 = 0 or
xi+1 = 1.
• When xi+1 = 0, the resulting states are the same as for Si.
• When xi+1 = 1, the resulting states are obtained by
adding (pi+1, wi+1) to each state in Si.
• Call the set of these additional states S1_i.
• This S1_i is the same as in Eq.3.
• Now, Si+1 can be computed by merging the states
in Si and S1_i together.
S0 = {(0, 0)}
S1_i = Si-1 + (pi, wi)      (adding item i into the bag)
Si = Si-1 ∪ S1_i            (merge operation)
• Example: Consider the knapsack instance n = 3,(w1,w2,w3)=
(2,3,4), (P1,P2,P3)= (1,2,5) and m = 6.
Sol:
S0 = {(0, 0)}
i = 1:
S1_1 = S0 + (p1, w1) = {(0,0) + (1,2)} = {(1, 2)}
S1 = S0 ∪ S1_1 = {(0,0)} ∪ {(1,2)} = {(0, 0), (1, 2)}
• Q1.Example: Consider the knapsack instance n
= 5,(w1,w2,w3,w4,w5)= (1,2,5,6,7),
(P1,P2,P3,p4,p5)= (1,6,18,22,28) and m = 11.
Q2.
value = [ 20, 5, 10, 40, 15, 25 ]
weight = [ 1, 2, 3, 8, 7, 4 ]
W = 10
i = 2:
S1_2 = S1 + (p2, w2) = {(0,0), (1,2)} + (2,3) = {(2, 3), (3, 5)}
S2 = S1 ∪ S1_2 = {(0,0), (1,2)} ∪ {(2,3), (3,5)}
   = {(0, 0), (1, 2), (2, 3), (3, 5)}
i = 3:
S1_3 = S2 + (p3, w3) = {(0,0), (1,2), (2,3), (3,5)} + (5,4)
     = {(5, 4), (6, 6), (7, 7), (8, 9)}
S3 = S2 ∪ S1_3 = {(0,0), (1,2), (2,3), (3,5)} ∪ {(5,4), (6,6), (7,7), (8,9)}
   = {(0, 0), (1, 2), (2, 3), (3, 5), (5, 4), (6, 6), (7, 7), (8, 9)}
• Purging rule: if one of Si-1 and S1_i has a pair
(pj, wj), the other has a pair (pk, wk), and pj ≤ pk
while wj ≥ wk, then the pair (pj, wj) is
discarded.
• After applying the purging rule, we check the
following condition in order to find the solution:
if (pi, wi) ∈ Sn and (pi, wi) ∉ Sn-1, then xn = 1; otherwise
xn = 0.
• In the above example we have m = 6, so the
value of f3(6) is given by the tuple (6, 6) in S3.
• Since (6, 6) ∈ S3 whereas (6, 6) ∉ S2,
x3 = 1.
• The pair (6, 6) came from the pair
(6 - p3, 6 - w3) = (6 - 5, 6 - 4) = (1, 2).
• Now (1, 2) ∈ S2 and similarly (1, 2) ∈ S1,
so x2 = 0.
• Since (1, 2) ∈ S1 whereas (1, 2) ∉ S0,
x1 = 1.
• The solution is (x1, x2, x3) = (1, 0, 1).
• Maximum profit:
Ʃ pi xi = p1 x1 + p2 x2 + p3 x3
        = 1*1 + 2*0 + 5*1
        = 6
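The Si / S1_i construction above, including the purging (dominance) rule, can be sketched in Python; pairs are kept sorted by weight, and a dominated pair (pj, wj) with pj ≤ pk and wj ≥ wk is dropped:

```python
def knapsack_pairs(p, w, m):
    # S holds the non-dominated (profit, weight) pairs after each decision.
    S = [(0, 0)]                                       # S0 = {(0, 0)}
    for pi, wi in zip(p, w):
        # S1_i: add item i to every state in S that still fits.
        S1 = [(pp + pi, ww + wi) for pp, ww in S if ww + wi <= m]
        # Merge, then purge: sort by weight (ties: higher profit first)
        # and keep only pairs whose profit strictly increases.
        merged = sorted(S + S1, key=lambda t: (t[1], -t[0]))
        S, best = [], -1
        for pp, ww in merged:
            if pp > best:                              # not dominated
                S.append((pp, ww))
                best = pp
    return max(pp for pp, ww in S)                     # optimal profit
```

For the instance above (p = (1, 2, 5), w = (2, 3, 4), m = 6) this returns 6; for Q2 (values 20, 5, 10, 40, 15, 25; weights 1, 2, 3, 8, 7, 4; W = 10) it returns 60.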
5.OBST(OPTIMAL BINARY SEARCH TREES)
• A Binary Search Tree (BST) is a tree where the
key values are stored in the internal nodes.
• The external nodes are null nodes.
• The keys are ordered lexicographically, i.e. for
each internal node all the keys in the left sub-
tree are less than the keys in the node, and all
the keys in the right sub-tree are greater.
• When we know the frequency of searching each one
of the keys, it is quite easy to compute the expected
cost of accessing each node in the tree.
• An optimal binary search tree is a BST, which has
minimal expected cost of locating each node.
• Search time of an element in a BST is O(n), whereas
in a Balanced-BST search time is O(log n).
• The search time can be improved further in an
optimal-cost binary search tree by placing the most
frequently used data in the root and close to the root,
while placing the least frequently used data near
the leaves and in the leaves.
• Given a sorted array key [0.. n-1] of search keys and an array freq[0.. n-1] of
frequency counts, where freq[i] is the number of searches for keys[i]. Construct a
binary search tree of all keys such that the total cost of all the searches is as small
as possible.
• Let us first define the cost of a BST. The cost of a BST node is the level of that node
multiplied by its frequency. The level of the root is 1.
• Examples:
• Input: keys[] = {10, 12}, freq[] = {34, 50}
• There can be the following two possible BSTs:

        10           12
          \         /
          12      10

        (I)        (II)

• Frequency of searches of 10 and 12 are 34 and 50 respectively.
• The cost of tree I is 34*1 + 50*2 = 134
• The cost of tree II is 50*1 + 34*2 = 118
• Input: keys[] = {10, 12, 20}, freq[] = {34, 8, 50}
• There can be the following five possible BSTs:

    10       10         12        20        20
      \        \       /  \      /         /
      12       20    10    20   12       10
        \      /               /           \
        20   12              10            12

    (I)     (II)    (III)    (IV)       (V)

• Among all possible BSTs, the cost of the fifth BST is
minimum.
• Cost of the fifth BST is 1*50 + 2*34 + 3*8 = 142
Fig (a) & (b)Two possible binary search trees
• Given a fixed set of identifiers, we wish to create a
binary search tree organization.
• We may expect different binary search trees for the
same identifier set to have different performance
characteristics.
• The tree of Figure (a), in the worst case, requires
four comparisons to find an identifier, whereas the
tree of Figure (b) requires only three.
• On the average the two trees need 12/5 and 11/5
comparisons, respectively.
• For example, in the case of tree (a), it takes
1, 2, 2, 3 and 4 comparisons, respectively,
to find the identifiers for, do, while, int and if.
• Thus the average number of comparisons is
(1+2+2+3+4)/5 = 12/5.
• This calculation assumes that Each identifier is
searched for with equal probability and that no
unsuccessful searches(i.e.searches for
identifiers not in the tree) are made.
• In a general situation, we can expect different
identifiers to be searched for with different
frequencies (or probabilities).
• In addition, we can expect unsuccessful
searches also to be made.
• Let us assume that the given set of identifiers
is {a1,a2,……,an} with a1 < a2 < ……< an.
• Let p(i) be the probability with which we
search for ai.
• Let q(i) be the probability that the identifier ‘x’
being searched for is such that ai< x < ai+1,
where 0 ≤ i ≤ n.
• Then Ʃ 0≤i≤n q(i) is the probability of an
unsuccessful search.
• Clearly,
Ʃ 1≤i≤n p(i) + Ʃ 0≤i≤n q(i) = 1.
• Given this data, we wish to construct an optimal
binary search tree for {a1,a2,….. an,}.
• In obtaining a cost function for binary search trees, it is
useful to add a ‘’fictitious’’ node in place of every empty sub
tree in the search tree. Such nodes, called “External nodes”.
All other nodes are internal nodes.
• In Fig. Square boxes indicates External Nodes.
• If a binary search tree represents n identifiers, then there
will be exactly ‘n’ internal nodes and n+1
( fictitious) external nodes.
• Every internal node represents a point where a successful
search may terminate.
• Every external node represents a point where an
unsuccessful search may terminate.
• The expected cost contribution from the internal node for
ai is p(i) * level(ai).
• The expected cost contribution from the external node
for Ei is q(i) * (level(Ei) - 1).
• The preceding discussion leads to the following
formula for the expected cost of a binary search tree:
cost = Ʃ 1≤i≤n p(i) * level(ai) + Ʃ 0≤i≤n q(i) * (level(Ei) - 1)
• Example: The possible binary search trees for the identifier
set (a1, a2, a3) = (do, if, while) are given in Figure 5.1.
• With p(1)= 0.5,p(2) = 0.1,p(3) = 0.05,
q(0)= 0.15, q(1) =0.1,q(2)= 0.05 and q(3)=0.05
• For instance, cost(tree a) can be computed as
follows.
The contribution from successful searches (internal
nodes) is
= 3*0.5 + 2*0.1 + 1*0.05
= 1.75.
• The contribution from unsuccessful
searches (external nodes) is
= 3*0.15 + 3*0.1 + 2*0.05 + 1*0.05
= 0.90.
Cost (tree ‘a’)=1.75+0.90=2.65.
• All the other costs can also be calculated in a
similar manner
Cost (tree ‘a’)=1.75+0.90=2.65.
Cost (tree ‘b’)=1.9
Cost (tree ‘c’)=1.5.
Cost (tree ‘d’)=2.05.
Cost (tree ‘e’)=1.6.
• Tree ‘c’ is optimal with this assignment of p's and
q's.
• To apply dynamic programming to the problem of
obtaining an optimal binary search tree, we need to
view the construction of such a tree as the result of a
sequence of decisions and then observe that the
principle of optimality holds when applied to the
problem state resulting from a decision.
• A possible approach to this would be to make a
decision as to which of the ‘ai’s’should be assigned to
the root node of the tree.
• If we choose ak, then it is clear that the internal nodes
for a1, a2, …, ak-1, as well as the external nodes for the
classes E0, E1, …, Ek-1, will lie in the left sub tree l of the
root.
• The remaining nodes will be in the right sub tree r.
• In both cases the level is measured by regarding
the root of the respective subtree to be at level 1.
cost(l) = Ʃ 1≤i<k p(i) * level(ai) + Ʃ 0≤i<k q(i) * (level(Ei) - 1)
cost(r) = Ʃ k<i≤n p(i) * level(ai) + Ʃ k<i≤n q(i) * (level(Ei) - 1)
Figure: An optimal binary search tree with root ak (left sub tree l, right sub tree r).
- Using w(i, j) to represent the sum
q(i) + Ʃ l=i+1..j (q(l) + p(l)), we obtain the following as the
expected cost of the search tree (Figure):
p(k) + cost(l) + cost(r) + w(0, k-1) + w(k, n)
• If the tree is optimal, then above equation must be
minimum.
• Hence, cost(l) must be minimum over all binary search
trees containing a1, a2,…….. ak-1 And E1,E2,…….. Ek-1.
• Similarly, cost(r) must be minimum. If we use c(i, j)
to represent the cost of an optimal binary search tree tij
containing ai+1, …, aj and Ei, …, Ej,
• then for the tree to be optimal, we must have
-> cost(l) = c(0, k-1) and cost(r) = c(k, n).
• In addition, k must be chosen such that
p(k) + c(0,k -1)+c(k,n) +w(0,k -1)+w(k,n) is minimum.
• Hence, for c(0, n) we obtain
c(0, n) = min 1≤k≤n { c(0, k-1) + c(k, n) + p(k) + w(0, k-1) + w(k, n) }
• We can generalize the above equation to obtain, for any c(i, j),
c(i, j) = min i<k≤j { c(i, k-1) + c(k, j) + p(k) + w(i, k-1) + w(k, j) }
c(i, j) = min i<k≤j { c(i, k-1) + c(k, j) } + w(i, j), where i ≠ j
w(i, j) = p(j) + q(j) + w(i, j-1), where i ≠ j
c(i, i) = 0, if i = j
w(i, i) = q(i), if i = j
• Above Equation can be solved for c(0,n) by first
computing all c(i, j)such that j-i = 1.
• Next we can compute all c(i, j) such that j-i =2,
then all c(i, j) with j-i = 3, and so on.
• If during this computation we record the root r(i,j)
of each tree tij, then an optimal binary search tree
can be constructed from these r(i, j).
Note: r(i, j) is the value of k that minimizes c(i, j), for
i ≠ j.
• Example: Let n = 4 and (a1, a2, a3, a4) = (do, if, int,
while). Let p(1:4) = (3, 3, 1, 1) and
q(0:4) = (2, 3, 1, 1, 1). Using the r(i, j)'s, construct the
OBST.
Sol:
Initially, we have w(i, i) = q(i),
c(i, i) = 0 and
r(i, i) = 0, when i = j,
0 ≤ i ≤ 4.
When i ≠ j,
w(i, j) = p(j) + q(j) + w(i, j-1).
Compute w, c and r for j-i = 1, j-i = 2, j-i = 3 and j-i = 4, with
p1 = 3, p2 = 3, p3 = 1, p4 = 1
and q0 = 2, q1 = 3, q2 = 1, q3 = 1, q4 = 1.
Compute W,C,R for j-i=1
• Knowing w(i, i+1) and c(i, i+1), 0 ≤ i < 4, we can
again use the equations to compute w(i, i+2), c(i, i+2)
and r(i, i+2), 0 ≤ i < 3.
• This process can be repeated until w(0,4),
c(0,4) and r(0,4) are obtained.
Computation of w(0,4), c(0,4) and r(0,4):
Using
w(i, j) = p(j) + q(j) + w(i, j-1), i ≠ j
c(i, j) = min i<k≤j { c(i, k-1) + c(k, j) } + w(i, j), i ≠ j
r(i, j) = the minimizing k,
the table is filled row by row:

j-i=0: w(0,0)=2  w(1,1)=3  w(2,2)=1  w(3,3)=1  w(4,4)=1;  all c(i,i)=0, r(i,i)=0
j-i=1: w(0,1)=8  c(0,1)=8  r(0,1)=1 | w(1,2)=7  c(1,2)=7  r(1,2)=2 |
       w(2,3)=3  c(2,3)=3  r(2,3)=3 | w(3,4)=3  c(3,4)=3  r(3,4)=4
j-i=2: w(0,2)=12 c(0,2)=19 r(0,2)=1 | w(1,3)=9  c(1,3)=12 r(1,3)=2 |
       w(2,4)=5  c(2,4)=8  r(2,4)=3
j-i=3: w(0,3)=14 c(0,3)=25 r(0,3)=2 | w(1,4)=11 c(1,4)=19 r(1,4)=2
j-i=4: w(0,4)=16 c(0,4)=32 r(0,4)=2
• The optimal binary search tree of the above example:

         if
        /  \
      do    int
              \
             while
-The computation of each of these c(i, j)'s requires us to find
the minimum of m quantities.
-Hence, each such c(i, j) can be computed in time O(m).
-The total time for all c(i, j) with j-i = m is therefore O(nm - m²).
-The total time to evaluate all the c(i, j)'s and r(i, j)'s is therefore
Ʃ 1≤m≤n (nm - m²) = O(n³).
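The w/c/r recurrences can be sketched in Python (p and q are 1-indexed as in the text, so p[0] is a dummy entry). On the example p(1:4) = (3, 3, 1, 1), q(0:4) = (2, 3, 1, 1, 1) it reproduces c(0,4) = 32 and r(0,4) = 2:

```python
def obst(p, q):
    # p[1..n]: success weights; q[0..n]: failure weights.
    # c[i][j], w[i][j], r[i][j] follow the recurrences in the text.
    n = len(p) - 1
    c = [[0] * (n + 1) for _ in range(n + 1)]
    w = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                     # w(i,i) = q(i), c(i,i) = 0
    for d in range(1, n + 1):              # d = j - i
        for i in range(n - d + 1):
            j = i + d
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            # r(i,j): the k in i < k <= j minimizing c(i,k-1) + c(k,j)
            k = min(range(i + 1, j + 1),
                    key=lambda t: c[i][t - 1] + c[t][j])
            c[i][j] = c[i][k - 1] + c[k][j] + w[i][j]
            r[i][j] = k
    return c, w, r
```

`obst([0, 3, 3, 1, 1], [2, 3, 1, 1, 1])` gives c[0][4] = 32, w[0][4] = 16 and r[0][4] = 2, so a2 = "if" is the root, as in the tree above.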
6. RELIABILITY DESIGN
• In this section we look at an example of how to
use dynamic programming to solve a problem
with a multiplicative optimization function.
• The problem is to design a system that is
composed of several devices connected in series.
• Let ri be the reliability of device Di (that is, ri is
the probability that device i will function
properly).
• Then the reliability of the entire system is ∏ ri.
• Even if the individual devices are very reliable
(the ‘ri 's are very close to one), the reliability of
the system may not be very good.
• Multiple copies of the same device type are
connected in parallel(Figure 5.20) through the use
of switching circuits.
• The switching circuits determine which devices in
any given group are functioning properly.
• They then make use of one such device at each
stage.
• If stage i contains mi copies of device Di, then the
probability that all mi have a malfunction is (1 - ri)^mi.
• Hence the reliability of stage i becomes 1 - (1 - ri)^mi.
• In any practical situation, the stage reliability is a
little less than 1 - (1 - ri)^mi, because the switching circuits
themselves are not fully reliable.
• Also, failures of copies of the same device may not be
fully independent (e.g., if failure is due to a design defect).
• Let us assume that the reliability of stage i is given by a
function Фi(mi).
• The reliability of the system of n stages is
∏ 1≤i≤n Фi(mi).
• Our problem is to use device duplication to maximize
reliability.
• This maximization is to be carried out under a cost constraint.
• Let ci be the cost of each unit of device Di, and let c be the
maximum allowable cost of the system being designed.
• We wish to solve the following maximization
problem:
maximize ∏ 1≤i≤n Фi(mi)
subject to Ʃ 1≤i≤n ci mi ≤ c,
mi ≥ 1 and integer, 1 ≤ i ≤ n.
• A dynamic programming solution can be
obtained in a manner similar to that used for
the knapsack problem.
• Since we can assume each ci > 0, each mi must
be in the range 1 ≤ mi ≤ ui, where
ui = ⌊ (c + ci - Ʃ j=1..n cj) / ci ⌋
• Here ui is the upper bound on mi.
• Problem: We are to design a three-stage
system with device types D1, D2 and D3. The costs
are $30, $15 and $20 respectively. The cost of
the system is to be no more than $105.
The reliability of each device type is 0.9, 0.8 and
0.5 respectively.
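The bounds ui are so small here (u1 = 2, u2 = 3, u3 = 3) that the three-stage instance can be checked by exhaustive search over 1 ≤ mi ≤ ui; the dynamic programming of the text prunes the same space stage by stage. A sketch:

```python
from itertools import product

def reliability_design(r, cost, C):
    # Maximize prod(1 - (1 - r_i)^m_i) subject to sum(cost_i * m_i) <= C,
    # with each m_i an integer in 1..u_i.
    n = len(r)
    slack = C - sum(cost)                   # budget left after one copy each
    u = [1 + slack // cost[i] for i in range(n)]
    best, best_m = 0.0, None
    for m in product(*(range(1, ui + 1) for ui in u)):
        if sum(ci * mi for ci, mi in zip(cost, m)) <= C:
            rel = 1.0
            for ri, mi in zip(r, m):
                rel *= 1 - (1 - ri) ** mi   # stage reliability
            if rel > best:
                best, best_m = rel, m
    return best, best_m
```

For the problem above (r = (0.9, 0.8, 0.5), costs (30, 15, 20), C = 105) this returns m = (1, 2, 2) with system reliability 0.9 × 0.96 × 0.75 = 0.648.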
  • 13. Applications of Dynamic Programming Approach 1. All-Pairs shortest paths. 2.Travelling sales person problem. 3.0/1 knapsack problem. 4.Optimal Binary search tree(OBST). 5. Reliability Design.
  • 14. 1. All-Pairs shortest paths (Floyd-Warshall) • Let G = (V, E) be a directed graph with n vertices. • Let cost be a cost adjacency matrix for G such that cost(i, i) = 0, 1 ≤ i ≤ n. • Then cost(i, j) is the length (or cost) of edge (i, j) if (i, j) ∈ E(G), and cost(i, j) = ∞ if i ≠ j and (i, j) ∉ E(G).
  • 18. • The all-pairs shortest-path problem is to determine a matrix A such that A(i, j) is the length of a shortest path from i to j. • The matrix A can be obtained by solving n single-source problems using the algorithm ShortestPaths. • Since each application of this procedure requires O(n^2) time, the matrix A can be obtained in O(n^3) time. • We obtain an alternate O(n^3) solution to this problem using the principle of optimality.
  • 19. • Let us examine a shortest i to j path in G, i ≠ j. • This path originates at vertex i, goes through some intermediate vertices (possibly none), and terminates at vertex j. • We can assume that this path contains no cycles: if there is a cycle, it can be deleted without increasing the path length (no cycle has negative length).
  • 20. • If ‘k’ is an intermediate vertex on this Shortest path, then the sub paths from i to k and from k to j must be shortest paths from i to k and k to j, respectively. • Otherwise, the i to j path is not of minimum length. So, the principle of optimality holds. • This alerts us to the prospect of using dynamic programming.
  • 21. • If ‘k’ is the intermediate vertex with highest index , then the i to k path is a shortest i to k path in G going through no vertex with index greater than k -1. • Similarly the k to j path is a shortest k to j path in G going through no vertex of index greater than k-1.
  • 22. • We can regard the construction of a shortest i to j path as first requiring a decision as to which is the highest indexed intermediate vertex k. • Once this decision has been made, we need to find two shortest paths, • One from i to k and the other from k to j. • Neither of these may go through a Vertex with index greater than k -1.
  • 23. • Using A^k(i, j) to represent the length of a shortest path from i to j going through no vertex of index greater than k, we obtain
  A(i, j) = min{ min 1≤k≤n {A^(k-1)(i, k) + A^(k-1)(k, j)}, cost(i, j) }
• Clearly, A^0(i, j) = cost(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ n. • We can obtain a recurrence for A^k(i, j) using an argument similar to that used before.
  • 24. • A shortest path from i to j going through no vertex higher than k either goes through vertex k or it does not. • If it does, A^k(i, j) = A^(k-1)(i, k) + A^(k-1)(k, j). • If it does not, then no intermediate vertex has index greater than k−1; hence A^k(i, j) = A^(k-1)(i, j). • Combining, we get
  A^k(i, j) = min{ A^(k-1)(i, j), A^(k-1)(i, k) + A^(k-1)(k, j) },  k ≥ 1
  • 25. A^k(i, j) = min{ A^(k-1)(i, j), A^(k-1)(i, k) + A^(k-1)(k, j) },  k ≥ 1
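The recurrence above translates almost line for line into code. A sketch (the 3-vertex digraph used here is an illustrative example of mine, not necessarily the graph from the slides):

```python
INF = float('inf')

def all_pairs_shortest_paths(cost):
    """Floyd-Warshall: the matrices A^0..A^n collapsed into one matrix updated
    in place; cost[i][j] = edge length, INF if no edge, 0 on the diagonal."""
    n = len(cost)
    A = [row[:] for row in cost]          # A^0 = cost
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A

cost = [[0, 4, 11],
        [6, 0, 2],
        [3, INF, 0]]                      # 0-indexed example digraph
A = all_pairs_shortest_paths(cost)
```

The three nested loops make the O(n^3) running time mentioned earlier immediate.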
  • 33. Compute all pairs shortest paths for the given graph using dynamic programming
  • 36. 2.Chain Matrix Multiplication • Given a sequence of matrices, find the most efficient way to multiply these matrices together. • The problem is not actually to perform the multiplications, but merely to decide in which order to perform the multiplications.
  • 37. • We have many options to multiply a chain of matrices because matrix multiplication is associative. • In other words, no matter how we parenthesize the product, the result will be the same. • For example, if we had four matrices A, B, C, and D, we would have: (ABC)D = (AB)(CD) = A(BCD) = ....
  • 38. • However, the order in which we parenthesize the product affects the number of simple arithmetic operations needed to compute the product, i.e., the efficiency. • For example, suppose A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix. Then, Case 1: (AB)C = (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations. Case 2: A(BC) = (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations. Clearly the first parenthesization requires fewer operations.
  • 39. • Given an array p[] which represents the chain of matrices such that the ith matrix Ai is of dimension p[i-1] * p[i]. • We need to write a function MatrixChainOrder() that should return the minimum number of multiplications needed to multiply the chain. • Input: p[] = {40, 20, 30, 10, 30} Output: 26000 There are 4 matrices of dimensions 40x20, 20x30, 30x10 and 10x30. Let the input 4 matrices be A, B, C and D. • The minimum number of multiplications are obtained by putting parenthesis in following way • (A(BC))D --> 20*30*10 + 40*20*10 + 40*10*30
  • 40. • Input: p[] = {10, 20, 30, 40, 30} Output: 30000 -There are 4 matrices of dimensions 10x20, 20x30, 30x40 and 40x30. -Let the input 4 matrices be A, B, C and D. The minimum number of multiplications are obtained by putting parenthesis in following way ((AB)C)D --> 10*20*30 + 10*30*40 + 10*40*30 • Input: p[] = {10, 20, 30} Output: 6000 -There are only two matrices of dimensions 10x20 and 20x30. -So there is only one way to multiply the matrices, cost of which is 10*20*30
  • 41. Optimal Substructure: • A simple solution is to place parenthesis at all possible places, calculate the cost for each placement and return the minimum value. • In a chain of matrices of size n, we can place the first set of parenthesis in n-1 ways. • For example, if the given chain is of 4 matrices. • let the chain be ABCD, then there are 3 ways to place first set of parenthesis outer side: (A)(BCD), (AB)(CD) and (ABC)(D).
  • 42. • So when we place a set of parenthesis, we divide the problem into subproblems of smaller size. • Therefore, the problem has optimal substructure property and can be easily solved using recursion. Minimum number of multiplication needed to multiply a chain of size n = Minimum of all n-1 placements (these placements create subproblems of smaller size).
  • 43. Pseudocode
  Matrix-Chain-Order(p)        ; p = <p0, p1, ..., pn>
  1) n ← length[p] − 1
  2) for i ← 1 to n
  3)   do m[i, i] ← 0
  4) for l ← 2 to n            ; l is the length of the chain
  5)   do for i ← 1 to n − l + 1
  6)     do j ← i + l − 1
  7)       m[i, j] ← ∞
  8)       for k ← i to j − 1
  9)         do q ← m[i, k] + m[k+1, j] + p[i−1]·p[k]·p[j]
  10)        if q < m[i, j]
  11)          then m[i, j] ← q
  12)               s[i, j] ← k
  13) return m and s
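A direct Python rendering of this pseudocode, checked against the p[] = {40, 20, 30, 10, 30} example from above:

```python
def matrix_chain_order(p):
    """m[i][j] = minimum scalar multiplications for A_i..A_j, where matrix
    A_i is p[i-1] x p[i]; s[i][j] records the best split point k."""
    n = len(p) - 1
    INF = float('inf')
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                 # l = length of the chain
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = INF
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m[1][n], s
```

matrix_chain_order([40, 20, 30, 10, 30]) returns cost 26000, matching the (A(BC))D parenthesization derived earlier.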
  • 44. 3. Travelling Sales person Problem(TSP) Problem Statement: - A traveler needs to visit all the cities from a list, where distances between all the cities are known and each city should be visited just once. What is the shortest possible route that he visits each city exactly once and returns to the origin city?
  • 46. Solution: • The travelling salesman problem is one of the most notorious computational problems. • We can use a brute-force approach to evaluate every possible tour and select the best one. For n vertices in a graph, there are (n − 1)! possibilities. • With the dynamic programming approach the solution can be obtained in much less time, though there is still no polynomial-time algorithm.
  • 47. • Let G = (V,E) be a directed graph with edge costs Cij. • The variable Cij is defined such that Cij > 0 for all i and j and Cij = ∞ if (i, j) ∉ E. • Let |V| = n and assume n > 1. • A tour of G is a directed simple cycle that includes every vertex in V. • The cost of a tour is “the sum of the cost of the edges on the tour”. • The traveling sales person problem is to find a tour of minimum cost.
  • 48. The traveling sales person problem finds application in a variety of situations. Example 1: -Suppose we have to route a postal van to pick up mail from mail boxes located at ‘n’ different sites. -An ‘n + 1’ vertex graph can be used to represent the situation. -One vertex represents the post office from which the Postal van starts and to which it must return. -Edge(i, j)is assigned a cost equal to the distance from site ‘i’ to site ‘j’. -The route taken by the postal van is a tour, and we are interested in finding a tour of minimum length.
  • 49. • Example2: -suppose we wish to use a robot arm to tighten the nuts on some piece of machinery on an assembly line. -The arm will start from its initial position (which is over the first nut to be tightened), successively move to each of the remaining nuts, and return to the initial position.
  • 50. • The path of the arm is clearly a tour on a graph in which vertices represent the nuts. • A minimum-cost tour will minimize the time needed for the arm to complete its task (note that only the total arm movement time is variable; the nut tightening time is independent of the tour).
  • 51. Example 3: -From a production environment in which several commodities are manufactured on the same set of machines. -The manufacture proceeds in cycles. In each production cycle, ‘n’ different commodities are produced. -When the machines are changed from production of commodity ‘i’ to commodity ‘j’, a change over cost Cij is incurred. -It is desired to find a sequence in which to manufacture these commodities.
  • 52. -This sequence should minimize the sum of changeover costs(the remaining production costs are sequence independent). - Since the manufacture proceeds cyclically, It is necessary to include the cost of starting the next cycle. -This is just the change over cost from the last to the first commodity. -Hence, this problem can be regarded as a traveling sales person problem on an ‘n’ vertex graph with edge cost Cij's being the change over cost from commodity ‘i’ to commodity ’j’.
  • 53. - In the following discussion we shall, without loss of generality, regard a tour to be a simple path that starts and ends at vertex 1. - Every tour consists of an edge (1, k) for some k ∈ V−{1} and a path from vertex k to vertex 1. - The path from vertex k to vertex 1 goes through each vertex in V−{1, k} exactly once.
  • 54. - It is easy to see that if the tour is optimal, then the path from ‘k’ to ‘1’ must be a shortest ‘k’ to ‘1’ path going through all vertices in V-{1,k} . - Hence, the principle of optimality holds. - Let g(i, S) be the length of a shortest path starting at vertex ‘i’ ,going through all vertices in ‘S’, and terminating at vertex ‘1’.
  • 55. - The function g(1, V−{1}) is the length of an optimal sales person tour. - From the principle of optimality it follows that
  g(1, V−{1}) = min 2≤k≤n { c1k + g(k, V−{1, k}) }        (Eq. 1)
- Generalizing from Eq. 1, we obtain (for i ∉ S)
  g(i, S) = min j∈S { cij + g(j, S−{j}) }                 (Eq. 2)
  • 56. - Eq. 1 can be solved for g(1, V−{1}) if we know g(k, V−{1, k}) for all choices of k. - These g values can be obtained by using Eq. 2. - Clearly,
  g(i, Ø) = ci1, 1 ≤ i ≤ n.                               (Eq. 3)
- Hence, we can use Eq. 2 to obtain g(i, S) for all S of size 1. - Then we can obtain g(i, S) for S with |S| = 2, and so on. - When |S| < n − 1, the values of i and S for which g(i, S) is needed are such that i ≠ 1, 1 ∉ S and i ∉ S.
  • 57. • Example: Consider the directed graph of Figure (a). The edge lengths are given by matrix c of Figure (b). we will illustrate the steps to solve the travelling salesman problem.
  • 58. • g(1,V-{1})=min2≤k ≤ n{c1k+ g(k,V -{1,k})}
  • 59. g(i, Ø) = Ci1,1≤ i ≤ n.
  • 60. g(i , S) = minj∈S{c i j+ g( j , S -{ j})}
  • 64. • From the above graph the following cost matrix is prepared:
        1   2   3   4
    1   0  10  15  20
    2   5   0   9  10
    3   6  13   0  12
    4   8   8   9   0
-> S = Ø; using Eq. 3:
  g(2, Ø) = c21 = 5.
  g(3, Ø) = c31 = 6.
  g(4, Ø) = c41 = 8.
  • 65. -> |S| = 1. Using Eq. 2, g(i, S) = min j∈S {cij + g(j, S−{j})}:
  g(2, {3}) = c23 + g(3, Ø) = 9 + 6 = 15.
  g(2, {4}) = c24 + g(4, Ø) = 10 + 8 = 18.
  g(3, {2}) = c32 + g(2, Ø) = 13 + 5 = 18.
  g(3, {4}) = c34 + g(4, Ø) = 12 + 8 = 20.
  g(4, {2}) = c42 + g(2, Ø) = 8 + 5 = 13.
  g(4, {3}) = c43 + g(3, Ø) = 9 + 6 = 15.
  • 67. • Next, we compute g(i, S) with |S| = 2, i ≠ 1, 1 ∉ S and i ∉ S.
  g(2, {3,4}) = min{c23 + g(3, {4}), c24 + g(4, {3})} = min{9+20, 10+15} = min{29, 25} = 25.
  g(3, {2,4}) = min{c32 + g(2, {4}), c34 + g(4, {2})} = min{13+18, 12+13} = min{31, 25} = 25.
  g(4, {2,3}) = min{c42 + g(2, {3}), c43 + g(3, {2})} = min{8+15, 9+18} = min{23, 27} = 23.
  • 68. g(2,{3,4})= min{C23+g(3,{4}) , C24+g (4,{3}} =min {9+20,10+15} =min{29,25} =25.
  • 69. -Finally, from Eq.1 we obtain g(1,{2,3,4})=min{C12+g(2,{3,4}),C13+g(3,{2,4}), C14+g(4,{2,3})} = min {10+25,15+25,20+23} =min{35,40,43} =35. • An optimal tour of the graph of Figure(a) has length 35. • A tour of this length can be constructed if we retain with each g(i, S) the value of j that minimizes the right-hand side of Eq.1. • Let J(i, S) be this value.
  • 70. • Then J(1, {2,3,4}) = 2. • Thus the tour starts from 1 and goes to 2. • The remaining tour can be obtained from g(2, {3,4}), so J(2, {3,4}) = 4. • Thus the next edge is <2, 4>. • The remaining tour is for g(4, {3}), so J(4, {3}) = 3. • The optimal tour is 1, 2, 4, 3, 1. Analysis: • There are at most 2^n · n subproblems, and each one takes linear time to solve. • Therefore, the total running time is O(2^n · n^2).
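The whole computation can be sketched as follows, using the cost matrix of the example (vertices 0-indexed, so slide vertex 1 is index 0); the optimal tour length of the example graph comes out to 35, as derived above:

```python
from itertools import combinations

def tsp(cost):
    """Held-Karp dynamic program: g[(i, S)] = length of a shortest path from
    vertex i through every vertex of S, ending at vertex 0."""
    n = len(cost)
    g = {(i, frozenset()): cost[i][0] for i in range(1, n)}   # g(i, O/) = c_i1
    for size in range(1, n - 1):                              # |S| = 1, 2, ...
        for S in combinations(range(1, n), size):
            Sf = frozenset(S)
            for i in range(1, n):
                if i not in Sf:                               # i != 1 and i not in S
                    g[(i, Sf)] = min(cost[i][j] + g[(j, Sf - {j})] for j in Sf)
    full = frozenset(range(1, n))
    return min(cost[0][k] + g[(k, full - {k})] for k in range(1, n))  # Eq. 1

cost = [[0, 10, 15, 20],
        [5, 0, 9, 10],
        [6, 13, 0, 12],
        [8, 8, 9, 0]]
```

Retaining the minimizing j at each step (the J(i, S) values above) would recover the tour itself.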
  • 72. 4. 0/1 KNAPSACK
  • A solution to the knapsack problem can be obtained by making a sequence of decisions on the variables x1, x2, ..., xn.
  • A decision on variable xi involves determining which of the values 0 or 1 is to be assigned to it.
  • Let us assume that decisions on the xi are made in the order xn, xn-1, ..., x1.
  • 80. • Following a decision on xn , we may be in one of two possible states: • The capacity remaining in the knapsack is ‘m’ and no profit has accrued or • The capacity remaining is m- wn and a profit of pn has accrued. • It is clear that the remaining decisions xn-1,….. x1 must be optimal with respect to the problem state resulting from the decision on xn. • Otherwise xn,….., x1 will not be optimal. • Hence , the principle of optimality holds.
  • 81. • Let fj(y) be the value of an optimal solution to KNAP(1, j, y).
  • Since the principle of optimality holds, we obtain
fn(m) = max{ fn-1(m), fn-1(m - wn) + pn }   — Eq.1
  • For arbitrary fi(y), i > 0, Eq.1 generalizes to
fi(y) = max{ fi-1(y), fi-1(y - wi) + pi }   — Eq.2
  • 82. • Eq.2 can be solved for fn(m) by beginning with the knowledge f0(y) = 0 for all y ≥ 0 and fi(y) = -∞ for y < 0; then f1, f2, ..., fn can be successively computed using Eq.2.
  • Since fi(y) is a step function, we need to compute only its values fi(yj), 1 ≤ j ≤ k, at the jump points yj.
  • We use the ordered set Si = {(fi(yj), yj) | 1 ≤ j ≤ k} to represent fi(y).
  • 83. • Each member of Si is a pair (P, W), where P = profit and W = weight.
  • Notice that S0 = {(0,0)}.
  • We can compute Si from Si-1 by first computing
Si^1 = {(P, W) | (P - pi, W - wi) ∈ Si-1}   — Eq.3
  • Note: Si can then be computed by merging the pairs in Si-1 and Si^1 together.
  • 84. • Note that if Si+1 contains two pairs (pj, wj) and (pk, wk) with the property that pj ≤ pk and wj ≥ wk, then the pair (pj, wj) can be discarded because of Eq.2.
  • Discarding or purging rules such as this one are also known as dominance rules.
  • Dominated tuples get purged. In the above, (pk, wk) dominates (pj, wj).
  • 85. • Let Si represent the possible states resulting from the 2^i decision sequences for x1, x2, ..., xi.
  • A state refers to a pair (pj, wj), wj being the total weight of objects included in the knapsack and pj being the corresponding profit.
  • To obtain Si+1, we note the possibilities xi+1 = 0 or xi+1 = 1.
  • When xi+1 = 0, the resulting states are the same as for Si.
  • When xi+1 = 1, the resulting states are obtained by adding (pi+1, wi+1) to each state in Si.
  • 86. • Call the set of these additional states Si^1. This Si^1 is the same as in Eq.3.
  • Now Si+1 can be computed by merging the states in Si and Si^1 together.
S0 = {(0,0)}
Si^1 = Si-1 + (pi, wi)   — adding an item into the bag
Si = Si-1 merged with Si^1   — merge operation (total set of profit/weight pairs)
  • 89. • Example: Consider the knapsack instance n = 3, (w1,w2,w3) = (2,3,4), (p1,p2,p3) = (1,2,5) and m = 6.
Sol: S0 = {(0,0)}
i = 1: S1^1 = S0 + (p1, w1) = {(0,0) + (1,2)} = {(1,2)}
S1 = S0 merged with S1^1 = {(0,0), (1,2)}
  • 96. • Example: Consider the knapsack instance n = 3, (w1,w2,w3) = (2,3,4), (p1,p2,p3) = (1,2,5) and m = 6, using the tabular dynamic programming approach.
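The tabular approach fills Eq.2 bottom-up; a minimal sketch for this instance (f[i][y] is the best profit using items 1..i with capacity y):

```python
# A minimal tabular sketch of Eq.2: f[i][y] = max(f[i-1][y], f[i-1][y-wi] + pi).
def knapsack(p, w, m):
    n = len(p)
    f = [[0] * (m + 1) for _ in range(n + 1)]   # f[0][y] = 0 for all y
    for i in range(1, n + 1):
        for y in range(m + 1):
            f[i][y] = f[i - 1][y]                       # x_i = 0
            if w[i - 1] <= y:                           # x_i = 1, if it fits
                f[i][y] = max(f[i][y], f[i - 1][y - w[i - 1]] + p[i - 1])
    return f[n][m]

print(knapsack([1, 2, 5], [2, 3, 4], 6))   # 6
```

The same function answers the practice instances later in this section (e.g. the n = 5 instance with m = 11).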
  • 116. • Q1. Example: Consider the knapsack instance n = 5, (w1,...,w5) = (1,2,5,6,7), (p1,...,p5) = (1,6,18,22,28) and m = 11.
Q2. value = [20, 5, 10, 40, 15, 25], weight = [1, 2, 3, 8, 7, 4], W = 10
  • 118. i = 2: S2^1 = S1 + (p2, w2) = {(0,0), (1,2)} + (2,3) = {(2,3), (3,5)}
S2 = S1 merged with S2^1 = {(0,0), (1,2), (2,3), (3,5)}
  • 119. i = 3: S3^1 = S2 + (p3, w3) = {(0,0), (1,2), (2,3), (3,5)} + (5,4) = {(5,4), (6,6), (7,7), (8,9)}
S3 = S2 merged with S3^1 = {(0,0), (1,2), (2,3), (3,5), (5,4), (6,6), (7,7), (8,9)}
  • 120. • Purging rule: if one of Si-1 and Si^1 has a pair (pj, wj) and the other has a pair (pk, wk) with pj ≤ pk while wj ≥ wk, then the pair (pj, wj) is discarded.
  • After applying the purging rule, we check the following condition in order to trace the solution:
  • if (p, w) ∈ Sn and (p, w) ∉ Sn-1 then xn = 1, otherwise xn = 0.
  • 121. • In the above example we have m = 6, so the value of f3(6) is given by the tuple (6,6) in S3.
  • Since (6,6) ∈ S3 whereas (6,6) ∉ S2, x3 = 1.
  • The pair (6,6) came from the pair (6 - p3, 6 - w3) = (6-5, 6-4) = (1,2). Now (1,2) ∈ S2 and also (1,2) ∈ S1, so x2 = 0.
  • 122. • Now (1,2) ∈ S1 but (1,2) ∉ S0, so x1 = 1.
  • The solution is (x1, x2, x3) = (1, 0, 1).
  • Maximum profit: Σ pi xi = p1 x1 + p2 x2 + p3 x3 = 1·1 + 2·0 + 5·1 = 6.
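The pair-merging trace above can be sketched in code (`knapsack_pairs` is a hypothetical helper name; besides the dominance rule, the sketch also drops pairs whose weight exceeds the capacity m):

```python
# A sketch of the pair-merging method: each S^i is a list of (profit, weight)
# pairs kept sorted by weight, with dominated pairs purged after each merge.
def knapsack_pairs(p, w, m):
    S = [(0, 0)]                                  # S^0 = {(0,0)}
    history = [S]
    for pi, wi in zip(p, w):
        # S^1_i: add the item (pi, wi) to every feasible state.
        S1 = [(P + pi, W + wi) for (P, W) in S if W + wi <= m]
        # Merge, sorting by weight (higher profit first on ties).
        merged = sorted(S + S1, key=lambda t: (t[1], -t[0]))
        # Purging rule: drop (pj, wj) if a kept lighter-or-equal pair
        # already achieves profit >= pj.
        S, best = [], -1
        for P, W in merged:
            if P > best:
                S.append((P, W))
                best = P
        history.append(S)
    return history

hist = knapsack_pairs([1, 2, 5], [2, 3, 4], 6)
print(hist[-1])   # purged S^3; its maximum-profit pair is (6, 6)
```

After purging, S^3 keeps {(0,0), (1,2), (2,3), (5,4), (6,6)}: the pair (3,5) is dominated by (5,4), and (7,7), (8,9) exceed m = 6, matching the traceback above.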
  • 128. 5. OBST (OPTIMAL BINARY SEARCH TREES)
  • A Binary Search Tree (BST) is a tree where the key values are stored in the internal nodes.
  • The external nodes are null nodes.
  • The keys are ordered lexicographically, i.e. for each internal node all the keys in the left sub-tree are less than the key in the node, and all the keys in the right sub-tree are greater.
  • 129. • When we know the frequency of searching each one of the keys, it is quite easy to compute the expected cost of accessing each node in the tree.
  • An optimal binary search tree is a BST which has the minimal expected cost of locating each node.
  • Search time of an element in a BST is O(n) in the worst case, whereas in a balanced BST it is O(log n).
  • The search time can be improved further in an optimal binary search tree by placing the most frequently used data at the root and close to it, while placing the least frequently used data near and in the leaves.
  • 130. • Given a sorted array keys[0..n-1] of search keys and an array freq[0..n-1] of frequency counts, where freq[i] is the number of searches for keys[i], construct a binary search tree of all keys such that the total cost of all the searches is as small as possible.
  • Let us first define the cost of a BST: the cost of a BST node is the level of that node multiplied by its frequency. The level of the root is 1.
  • Examples:
  • Input: keys[] = {10, 12}, freq[] = {34, 50}
  • There can be the following two possible BSTs:

      10          12
        \        /
        12     10
       I         II

  • Frequencies of searches of 10 and 12 are 34 and 50 respectively.
  • The cost of tree I is 34*1 + 50*2 = 134
  • The cost of tree II is 50*1 + 34*2 = 118
  • 132. • Input: keys[] = {10, 12, 20}, freq[] = {34, 8, 50}
  • There can be the following possible BSTs:

    10        12         20       10            20
      \      /  \       /           \          /
      12   10    20   12            20       10
        \            /             /           \
        20         10            12             12
     I        II        III        IV            V
  • 133. • Among all possible BSTs, cost of the fifth BST is minimum. • Cost of the fifth BST is 1*50 + 2*34 + 3*8 = 142
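As a quick check of these hand enumerations, the minimum cost over all BSTs can be computed recursively (frequencies only, successful searches): the cost of a subtree equals its frequency sum plus the optimal costs of its left and right parts.

```python
# A recursive sketch of the minimum BST search cost over a key range.
from functools import lru_cache

def optimal_cost(freq):
    @lru_cache(maxsize=None)
    def best(i, j):
        if i > j:
            return 0
        total = sum(freq[i:j + 1])   # every key in the range sits one level deeper
        return total + min(best(i, r - 1) + best(r + 1, j)
                           for r in range(i, j + 1))
    return best(0, len(freq) - 1)

print(optimal_cost((34, 50)))      # 118 (tree II above)
print(optimal_cost((34, 8, 50)))   # 142 (tree V above)
```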
  • 134. Fig (a) & (b)Two possible binary search trees
  • 135. • Given a fixed set of identifiers, we wish to create a binary search tree organization.
  • We may expect different binary search trees for the same identifier set to have different performance characteristics.
  • The tree of Figure (a), in the worst case, requires four comparisons to find an identifier, whereas the tree of Figure (b) requires only three.
  • On average, the two trees need 12/5 and 11/5 comparisons respectively.
  • 136. • For example, in the case of tree (a), it takes 1, 2, 2, 3 and 4 comparisons, respectively, to find the identifiers for, do, while, int and if.
  • Thus the average number of comparisons is (1+2+2+3+4)/5 = 12/5.
  • This calculation assumes that each identifier is searched for with equal probability and that no unsuccessful searches (i.e. searches for identifiers not in the tree) are made.
  • 137. • In a general situation, we can expect different identifiers to be searched for with different frequencies (or probabilities). • In addition, we can expect unsuccessful searches also to be made. • Let us assume that the given set of identifiers is {a1,a2,……,an} with a1 < a2 < ……< an. • Let p(i) be the probability with which we search for ai.
  • 138. • Let q(i) be the probability that the identifier x being searched for is such that ai < x < ai+1, where 0 ≤ i ≤ n.
  • Then Σ0≤i≤n q(i) is the probability of an unsuccessful search.
  • Clearly, Σ1≤i≤n p(i) + Σ0≤i≤n q(i) = 1.
  • Given this data, we wish to construct an optimal binary search tree for {a1, a2, ..., an}.
  • 139. • In obtaining a cost function for binary search trees, it is useful to add a ‘’fictitious’’ node in place of every empty sub tree in the search tree. Such nodes, called “External nodes”. All other nodes are internal nodes. • In Fig. Square boxes indicates External Nodes.
  • 140. • If a binary search tree represents n identifiers, then there will be exactly n internal nodes and n+1 (fictitious) external nodes.
  • Every internal node represents a point where a successful search may terminate.
  • Every external node represents a point where an unsuccessful search may terminate.
  • The expected cost contribution from the internal node for ai is p(i) * level(ai).
  • The expected cost contribution from the external node Ei is q(i) * (level(Ei) - 1).
  • 141. • The preceding discussion leads to the following formula for the expected cost of a binary search tree:
cost = Σ1≤i≤n p(i) * level(ai) + Σ0≤i≤n q(i) * (level(Ei) - 1)
  • Example: The possible binary search trees for the identifier set (a1, a2, a3) = (do, if, while) are given in Figure 5.1.
  • 142. • With p(1) = 0.5, p(2) = 0.1, p(3) = 0.05, q(0) = 0.15, q(1) = 0.1, q(2) = 0.05 and q(3) = 0.05, cost(tree a) can be computed as follows.
  • The contribution from successful searches (internal nodes) is 3*0.5 + 2*0.1 + 1*0.05 = 1.75.
  • The contribution from unsuccessful searches (external nodes) is 3*0.15 + 3*0.1 + 2*0.05 + 1*0.05 = 0.90.
  • Cost(tree a) = 1.75 + 0.90 = 2.65.
  • 143. • All the other costs can also be calculated in a similar manner Cost (tree ‘a’)=1.75+0.90=2.65. Cost (tree ‘b’)=1.9 Cost (tree ‘c’)=1.5. Cost (tree ‘d’)=2.05. Cost (tree ‘e’)=1.6. • Tree ‘c’ is optimal with this assignment of p's and q's.
  • 144. • To apply dynamic programming to the problem of obtaining an optimal binary search tree, we need to view the construction of such a tree as the result of a sequence of decisions and then observe that the principle of optimality holds when applied to the problem state resulting from a decision.
  • A possible approach to this would be to make a decision as to which of the ai's should be assigned to the root node of the tree.
  • 145. • If we choose ak, then it is clear that the internal nodes for a1, a2, ..., ak-1, as well as the external nodes for the classes E0, E1, ..., Ek-1, will lie in the left subtree l of the root.
  • The remaining nodes will be in the right subtree r.
  • In both cases the level is measured by regarding the root of the respective subtree to be at level 1.
cost(l) = Σ1≤i<k p(i) * level(ai) + Σ0≤i<k q(i) * (level(Ei) - 1)
cost(r) = Σk<i≤n p(i) * level(ai) + Σk≤i≤n q(i) * (level(Ei) - 1)
  • 146. Figure: An optimal binary search tree with root ak, left subtree l and right subtree r.
  - Using w(i, j) to represent the sum q(i) + Σl=i+1..j (q(l) + p(l)), we obtain the following as the expected cost of the search tree of the figure:
p(k) + cost(l) + cost(r) + w(0, k-1) + w(k, n)
  • 147. • If the tree is optimal, then the above expression must be minimum.
  • Hence, cost(l) must be minimum over all binary search trees containing a1, a2, ..., ak-1 and E0, E1, ..., Ek-1. Similarly, cost(r) must be minimum.
  • If we use c(i, j) to represent the cost of an optimal binary search tree tij containing ai+1, ..., aj and Ei, ..., Ej, then for the tree to be optimal we must have cost(l) = c(0, k-1) and cost(r) = c(k, n).
  • In addition, k must be chosen such that p(k) + c(0, k-1) + c(k, n) + w(0, k-1) + w(k, n) is minimum.
  • 148. • Hence, for c(0, n) we obtain
c(0, n) = min 1≤k≤n { c(0, k-1) + c(k, n) + p(k) + w(0, k-1) + w(k, n) }
  • We can generalize the above to obtain, for any c(i, j),
c(i, j) = min i<k≤j { c(i, k-1) + c(k, j) + p(k) + w(i, k-1) + w(k, j) }
c(i, j) = min i<k≤j { c(i, k-1) + c(k, j) } + w(i, j),  where i ≠ j
w(i, j) = p(j) + q(j) + w(i, j-1),  where i ≠ j
c(i, i) = 0,  if i = j
w(i, i) = q(i),  if i = j
  • 149. • The above equation can be solved for c(0, n) by first computing all c(i, j) such that j-i = 1.
  • Next we can compute all c(i, j) such that j-i = 2, then all c(i, j) with j-i = 3, and so on.
  • If during this computation we record the root r(i, j) of each tree tij, then an optimal binary search tree can be constructed from these r(i, j).
Note: r(i, j) is the value of k that minimizes c(i, k-1) + c(k, j), for i ≠ j.
  • 150. • Example: Let n = 4 and (a1,a2,a3,a4)=(do, if, int, while). Let p(1:4) = (3,3,1,1) and q(0:4) = (2,3,1,1,1) using the rij’s construct OBST.
  • 156. • Example: Let n = 4 and (a1, a2, a3, a4) = (do, if, int, while). Let p(1:4) = (3,3,1,1) and q(0:4) = (2,3,1,1,1).
Initially, we have w(i, i) = q(i), c(i, i) = 0 and r(i, i) = 0 for 0 ≤ i ≤ 4; when i ≠ j, w(i, j) = p(j) + q(j) + w(i, j-1).
Sol: compute w, c, r for j-i = 1, j-i = 2, j-i = 3 and j-i = 4, with
p(1) = 3, p(2) = 3, p(3) = 1, p(4) = 1 and q(0) = 2, q(1) = 3, q(2) = 1, q(3) = 1, q(4) = 1.
  • 158. • Knowing w(i, i+1) and c(i, i+1), 0 ≤ i < 4, we can again use the equations to compute w(i, i+2), c(i, i+2) and r(i, i+2), 0 ≤ i < 3.
  • This process can be repeated until w(0,4), c(0,4) and r(0,4) are obtained.
  • 159. Computation of c(0,4), w(0,4) and r(0,4)
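The w, c, r tables for this example can be generated with a short sketch of the recurrence (0-indexed Python lists; p[0] is an unused placeholder so that p(1..4) and q(0..4) keep their textbook indices):

```python
# A sketch of the w/c/r table computation for the n = 4 OBST example.
def obst(p, q):
    n = len(p) - 1                        # keys a1..an
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                    # w(i,i) = q(i), c(i,i) = r(i,i) = 0
    for d in range(1, n + 1):             # solve for j - i = 1, 2, ..., n
        for i in range(n - d + 1):
            j = i + d
            w[i][j] = p[j] + q[j] + w[i][j - 1]
            # c(i,j) = min over i < k <= j of {c(i,k-1) + c(k,j)} + w(i,j)
            best_k = min(range(i + 1, j + 1),
                         key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = c[i][best_k - 1] + c[best_k][j] + w[i][j]
            r[i][j] = best_k              # record the root of t_ij
    return w, c, r

p = [0, 3, 3, 1, 1]                       # p(1..4); p[0] unused
q = [2, 3, 1, 1, 1]                       # q(0..4)
w, c, r = obst(p, q)
print(c[0][4], w[0][4], r[0][4])          # 32 16 2  -> a2 = "if" is the root
```

Reading the roots back from r (r(0,4) = 2, r(0,1) = 1, r(2,4) = 3, r(3,4) = 4) yields the tree shown at the end of this example.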
  • 164. w(i, j) = p(j) + q(j) + w(i, j-1),  i ≠ j
c(i, j) = min i<k≤j { c(i, k-1) + c(k, j) } + w(i, j),  i ≠ j
r(i, j) = the minimizing k
  • 174. • Optimal binary search tree of the above example:

        if
       /  \
     do    int
              \
             while
  • 179. - The computation of each of these c(i, j)'s requires us to find the minimum of m quantities.
- Hence, each such c(i, j) can be computed in time O(m).
- The total time for all c(i, j) with j-i = m is therefore O(nm - m^2).
- The total time to evaluate all the c(i, j)'s and r(i, j)'s is therefore Σ1≤m≤n (nm - m^2) = O(n^3).
  • 182. 6. RELIABILITY DESIGN
  • In this section we look at an example of how to use dynamic programming to solve a problem with a multiplicative optimization function.
  • The problem is to design a system that is composed of several devices connected in series.
  • Let ri be the reliability of device Di (that is, ri is the probability that device i will function properly).
  • Then the reliability of the entire system is ∏ ri.
  • 186. • Even if the individual devices are very reliable (the ri's are very close to one), the reliability of the system may not be very good.
  • Multiple copies of the same device type are connected in parallel (Figure 5.20) through the use of switching circuits.
  • The switching circuits determine which devices in any given group are functioning properly.
  • They then make use of one such device at each stage.
  • 187. • If stage i contains mi copies of device Di, then the probability that all mi have a malfunction is (1 - ri)^mi.
  • Hence the reliability of stage i becomes 1 - (1 - ri)^mi.
  • In any practical situation, the stage reliability is a little less than 1 - (1 - ri)^mi because the switching circuits themselves are not fully reliable.
  • Also, failures of copies of the same device may not be fully independent (e.g. if failure is due to a design defect).
  • 188. • Let us assume that the reliability of stage i is given by a function Φi(mi).
  • The reliability of the system of n stages is ∏1≤i≤n Φi(mi).
  • Our problem is to use device duplication to maximize reliability.
  • This maximization is to be carried out under a cost constraint.
  • Let ci be the cost of each unit of device Di and let C be the maximum allowable cost of the system being designed.
  • 189. • We wish to solve the following maximization problem:
maximize ∏1≤i≤n Φi(mi)
subject to Σ1≤i≤n ci mi ≤ C,  mi ≥ 1 and integer, 1 ≤ i ≤ n
  • A dynamic programming solution can be obtained in a manner similar to that used for the knapsack problem.
  • 190. • Since we can assume each ci > 0, each mi must be in the range 1 ≤ mi ≤ ui, where
ui = ⌊ (C + ci - Σ1≤j≤n cj) / ci ⌋
  • Here ui is the upper bound on the number of copies of device Di.
  • 191. • Problem: We are to design a three-stage system with device types D1, D2 and D3. The costs are $30, $15 and $20 respectively. The cost of the system is to be no more than $105. The reliability of each device type is 0.9, 0.8 and 0.5 respectively.
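This instance is small enough to check by brute force within the bounds ui (a sketch, not the dynamic programming method itself; `reliability_design` is a hypothetical helper name):

```python
# A brute-force sketch of the stage-duplication problem:
# maximize the product of phi_i(m_i) = 1 - (1 - r_i)^m_i under the cost bound.
from itertools import product

def reliability_design(cost, r, C):
    n = len(cost)
    total = sum(cost)
    # u_i = floor((C + c_i - sum_j c_j) / c_i) bounds the copies of D_i.
    u = [(C + cost[i] - total) // cost[i] for i in range(n)]
    best, best_m = 0.0, None
    for m in product(*(range(1, ui + 1) for ui in u)):
        if sum(ci * mi for ci, mi in zip(cost, m)) <= C:
            rel = 1.0
            for ri, mi in zip(r, m):
                rel *= 1 - (1 - ri) ** mi     # phi_i(m_i)
            if rel > best:
                best, best_m = rel, m
    return best_m, best

m, rel = reliability_design([30, 15, 20], [0.9, 0.8, 0.5], 105)
print(m, round(rel, 3))   # (1, 2, 2) 0.648
```

The bounds give u = (2, 3, 3); the best assignment is one copy of D1 and two copies each of D2 and D3, with system reliability 0.9 × 0.96 × 0.75 = 0.648 at cost $100.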