Dynamic Programming
Alexander S. Kulikov
Competitive Programmer’s Core Skills
https://www.coursera.org/learn/competitive-programming-core-skills
Outline

1 Longest Increasing Subsequence
  1.1 Warm-up
  1.2 Subproblems and Recurrence Relation
  1.3 Reconstructing a Solution
  1.4 Subproblems Revisited
2 Edit Distance
  2.1 Algorithm
  2.2 Reconstructing a Solution
  2.3 Final Remarks
3 Knapsack
  3.1 Knapsack with Repetitions
  3.2 Knapsack without Repetitions
  3.3 Final Remarks
4 Chain Matrix Multiplication
  4.1 Chain Matrix Multiplication
  4.2 Summary
Dynamic Programming

Extremely powerful algorithmic technique with
applications in optimization, scheduling,
planning, economics, bioinformatics, etc.

At contests, probably the most popular type of
problems.

A solution is usually not so easy to find but,
once found, is easy to implement.

Need a lot of practice!
Fibonacci numbers

Fn = { 0,            if n = 0,
       1,            if n = 1,
       Fn−1 + Fn−2,  if n > 1.

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, . . .
Computing Fibonacci Numbers

Computing Fn
Input: An integer n ≥ 0.
Output: The n-th Fibonacci number Fn.

def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
Recursion Tree

[Tree: the root Fn has children Fn−1 and Fn−2; Fn−1 in turn has
children Fn−2 and Fn−3, and so on. The same subproblems
Fn−2, Fn−3, Fn−4, . . . appear in the tree over and over.]
Running Time

Essentially, the algorithm computes Fn as a sum
of Fn 1’s.

Hence its running time is O(Fn).

But Fibonacci numbers grow exponentially fast:
Fn ≈ 𝜑^n, where 𝜑 = 1.618 . . . is the golden ratio.

E.g., F150 is already 31 decimal digits long.

The Sun may die before your computer returns
F150.
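The digit count is easy to check empirically with the iterative computation that appears later in this lecture (a minimal sketch; fib here is a helper name, not code from the slides):

```python
def fib(n):
    # Iterative computation: n additions instead of ~F(n) recursive calls.
    previous, current = 0, 1
    for _ in range(n):
        previous, current = current, previous + current
    return previous

# F150 is indeed 31 decimal digits long.
print(len(str(fib(150))))  # 31
```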
Reason
Many computations are repeated
“Those who cannot remember the past are
condemned to repeat it.” (George Santayana)
A simple, but crucial idea: instead of
recomputing the intermediate results, let’s store
them once they are computed
Memoization

def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

T = dict()

def fib(n):
    if n not in T:
        if n <= 1:
            T[n] = n
        else:
            T[n] = fib(n - 1) + fib(n - 2)
    return T[n]
Hm...
But do we really need all this fancy stuff
(recursion, memoization, dictionaries) to solve
this simple problem?
After all, this is how you would compute F5 by
hand:
1 F0 = 0, F1 = 1
2 F2 = 0 + 1 = 1
3 F3 = 1 + 1 = 2
4 F4 = 1 + 2 = 3
5 F5 = 2 + 3 = 5
Iterative Algorithm

def fib(n):
    if n <= 1:  # guard: for n <= 1 the table below would be too short
        return n
    T = [None] * (n + 1)
    T[0], T[1] = 0, 1

    for i in range(2, n + 1):
        T[i] = T[i - 1] + T[i - 2]

    return T[n]
Hm Again...

But do we really need to waste so much space?

def fib(n):
    if n <= 1:
        return n

    previous, current = 0, 1
    for _ in range(n - 1):
        new_current = previous + current
        previous, current = current, new_current

    return current
Running Time

O(n) additions.

On the other hand, recall that Fibonacci
numbers grow exponentially fast: the binary
length of Fn is O(n).

In theory: we should not treat such additions as
basic operations.

In practice: already F100 does not fit into a
64-bit integer type, hence we need bignum
arithmetic.
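In Python specifically, this is taken care of automatically: built-in integers are arbitrary-precision. A quick check, using an iterative fib helper (a name introduced here, not code from the slides):

```python
def fib(n):
    # Iterative computation of the n-th Fibonacci number.
    previous, current = 0, 1
    for _ in range(n):
        previous, current = current, previous + current
    return previous

# F100 already exceeds the largest signed 64-bit integer, so fixed-width
# arithmetic would overflow; Python's integers simply grow as needed.
print(fib(100) > 2**63 - 1)  # True
```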
Summary
The key idea of dynamic programming: avoid
recomputing the same thing again!
At the same time, the case of Fibonacci
numbers is a slightly artificial example of
dynamic programming since it is clear from the
very beginning what intermediate results we
need to compute the final result
Longest Increasing Subsequence

Longest increasing subsequence
Input: An array A = [a0, a1, . . . , an−1].
Output: A longest increasing subsequence (LIS),
i.e., a_{i1}, a_{i2}, . . . , a_{ik} such that
i1 < i2 < · · · < ik, a_{i1} < a_{i2} < · · · < a_{ik},
and k is maximal.
Example

7 2 1 3 8 4 9 1 2 6 5 9 3 8 1

(One longest increasing subsequence, of length 5: 2 3 4 6 9.)
Analyzing an Optimal Solution

Consider the last element x of an optimal
increasing subsequence and its previous
element z.

First of all, z < x.

Moreover, the prefix of the IS ending at z must
be an optimal IS ending at z, as otherwise the
initial IS would not be optimal.

Optimal substructure by the “cut-and-paste” trick.
Subproblems and Recurrence Relation

Let LIS(i) be the length of a longest increasing
subsequence ending at A[i].

Then
LIS(i) = 1 + max{LIS(j) : j < i and A[j] < A[i]}

Convention: the maximum of an empty set is
equal to zero.

Base case: LIS(0) = 1.
Algorithm
When we have a recurrence relation at hand,
converting it to a recursive algorithm with
memoization is just a technicality
We will use a table T to store the results:
T[i] = LIS(i)
Initially, T is empty. When LIS(i) is computed,
we store its value at T[i] (so that we will never
recompute LIS(i) again)
The exact data structure behind T is not that
important at this point: it could be an array or
a hash table
Memoization

T = dict()

def lis(A, i):
    if i not in T:
        T[i] = 1
        for j in range(i):
            if A[j] < A[i]:
                T[i] = max(T[i], lis(A, j) + 1)
    return T[i]

A = [7, 2, 1, 3, 8, 4, 9, 1, 2, 6, 5, 9, 3]
print(max(lis(A, i) for i in range(len(A))))
Running Time

The running time is quadratic (O(n²)): there are n
“serious” recursive calls (that are not just table
look-ups), each of them needs time O(n) (not
counting the inner recursive calls).
Table and Recursion
We need to store in the table T the value of
LIS(i) for all i from 0 to n − 1
Reasonable choice of a data structure for T:
an array of size n
Moreover, one can fill in this array iteratively
instead of recursively
Iterative Algorithm

def lis(A):
    T = [None] * len(A)

    for i in range(len(A)):
        T[i] = 1
        for j in range(i):
            if A[j] < A[i] and T[i] < T[j] + 1:
                T[i] = T[j] + 1

    return max(T[i] for i in range(len(A)))

Crucial property: when computing T[i], the values
T[j] for all j < i have already been computed.

Running time: O(n²).
Reconstructing a Solution
How to reconstruct an optimal IS?
In order to reconstruct it, for each subproblem
we will keep its optimal value and a choice
leading to this value
Adjusting the Algorithm

def lis(A):
    T = [None] * len(A)
    prev = [None] * len(A)

    for i in range(len(A)):
        T[i] = 1
        prev[i] = -1
        for j in range(i):
            if A[j] < A[i] and T[i] < T[j] + 1:
                T[i] = T[j] + 1
                prev[i] = j
Example

i:     0  1  2  3  4  5  6  7  8  9 10 11 12 13 14
A:     7  2  1  3  8  4  9  1  2  6  5  9  3  8  1
T:     1  1  1  2  3  3  4  1  2  4  4  5  3  5  1
prev: -1 -1 -1  1  3  3  4 -1  2  5  5  9  8  9 -1
Unwinding Solution

# (continues lis(A): T and prev have already been filled)
last = 0
for i in range(1, len(A)):
    if T[i] > T[last]:
        last = i

lis = []
current = last
while current >= 0:
    lis.append(current)
    current = prev[current]
lis.reverse()
return [A[i] for i in lis]
Reconstructing Again

Reconstructing without prev: walk back from an
index with the maximum T value; a predecessor of
an index i with T[i] = k is any j < i with
A[j] < A[i] and T[j] = k − 1.

i: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14
A: 7  2  1  3  8  4  9  1  2  6  5  9  3  8  1
T: 1  1  1  2  3  3  4  1  2  4  4  5  3  5  1
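The backward walk over the T table can be coded directly. A sketch, assuming T was filled by the iterative algorithm (lis_table and reconstruct are helper names introduced here, not from the slides):

```python
def lis_table(A):
    # T[i] = length of a longest increasing subsequence ending at A[i].
    T = [1] * len(A)
    for i in range(len(A)):
        for j in range(i):
            if A[j] < A[i] and T[i] < T[j] + 1:
                T[i] = T[j] + 1
    return T

def reconstruct(A, T):
    # Walk backwards: a predecessor of an index with T value k is any
    # earlier, smaller element with T value k - 1 (one always exists).
    current = max(range(len(A)), key=lambda i: T[i])
    result = [current]
    for j in range(current - 1, -1, -1):
        if A[j] < A[current] and T[j] == T[current] - 1:
            result.append(j)
            current = j
    result.reverse()
    return [A[i] for i in result]

A = [7, 2, 1, 3, 8, 4, 9, 1, 2, 6, 5, 9, 3, 8, 1]
print(reconstruct(A, lis_table(A)))  # [1, 3, 4, 5, 9]
```

Note that any earlier, smaller element with T value k − 1 works as a predecessor, so this may recover a different (but equally long) subsequence than the prev-based unwinding.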
Summary

Optimal substructure property: any prefix of an
optimal increasing subsequence must be a
longest increasing subsequence ending at this
particular element.

Subproblem: the length of an optimal increasing
subsequence ending at the i-th element.

A recurrence relation for subproblems can be
immediately converted into a recursive algorithm
with memoization.

A recursive algorithm, in turn, can be converted
into an iterative one.

An optimal solution can be recovered either by
using additional bookkeeping information or by
using the computed solutions to all subproblems.
The Most Creative Part
In most DP algorithms, the most creative part is
coming up with the right notion of a subproblem
and a recurrence relation
When a recurrence relation is written down, it
can be wrapped with memoization to get
a recursive algorithm
In the previous video, we arrived at a reasonable
subproblem by analyzing the structure of an
optimal solution
In this video, we’ll provide an alternative way of
arriving at subproblems: implement a naive
brute force solution, then optimize it
Brute Force: Plan
Need the longest increasing subsequence? No
problem! Just iterate over all subsequences and
select the longest one:
Start with an empty sequence
Extend it element by element recursively
Keep track of the length of the sequence
This is going to be slow, but not to worry: we
will optimize it later
Brute Force: Code

def lis(A, seq):
    result = len(seq)

    if len(seq) == 0:
        last_index = -1
        last_element = float("-inf")
    else:
        last_index = seq[-1]
        last_element = A[last_index]

    for i in range(last_index + 1, len(A)):
        if A[i] > last_element:
            result = max(result, lis(A, seq + [i]))

    return result

print(lis(A=[7, 2, 1, 3, 8, 4, 9], seq=[]))
Optimizing
At each step, we are trying to extend the
current sequence
For this, we pass the current sequence to each
recursive call
At the same time, code inspection reveals that
we are not using all of the sequence: we are only
interested in its last element and its length
Let’s optimize!
Optimized Code

def lis(A, seq_len, last_index):
    if last_index == -1:
        last_element = float("-inf")
    else:
        last_element = A[last_index]

    result = seq_len

    for i in range(last_index + 1, len(A)):
        if A[i] > last_element:
            result = max(result,
                         lis(A, seq_len + 1, i))

    return result

print(lis([3, 2, 7, 8, 9, 5, 8], 0, -1))
Optimizing Further

Inspecting the code further, we realize that
seq_len is not used for extending the current
sequence (we don’t even need to know the
length of the initial part of the sequence to
optimally extend it).

More formally, for any x, lis(A, seq_len, i) is
equal to lis(A, seq_len - x, i) + x.

Hence, we can optimize the code as follows:
max(result, 1 + seq_len + lis(A, 0, i))

This excludes seq_len from the list of parameters!
Resulting Code

def lis(A, last_index):
    if last_index == -1:
        last_element = float("-inf")
    else:
        last_element = A[last_index]

    result = 0

    for i in range(last_index + 1, len(A)):
        if A[i] > last_element:
            result = max(result, 1 + lis(A, i))

    return result

print(lis([8, 2, 3, 4, 5, 6, 7], -1))
It remains to add memoization!
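Adding memoization is a one-table change; a sketch (with a caveat of this global-table style: T must be cleared before running on a different array):

```python
T = dict()

def lis(A, last_index=-1):
    # Memoize on last_index only: by the argument above, nothing else
    # about the already-built sequence affects the best extension.
    if last_index not in T:
        if last_index == -1:
            last_element = float("-inf")
        else:
            last_element = A[last_index]

        result = 0
        for i in range(last_index + 1, len(A)):
            if A[i] > last_element:
                result = max(result, 1 + lis(A, i))
        T[last_index] = result

    return T[last_index]

print(lis([7, 2, 1, 3, 8, 4, 9]))  # 4
```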
Summary

Subproblems (and a recurrence relation on
them) are the most important ingredient of a
dynamic programming algorithm.

Two common ways of arriving at the right
subproblem:

Analyze the structure of an optimal solution.

Implement a brute force solution and optimize it.
Statement

Edit distance
Input: Two strings A[0 . . . n − 1] and
B[0 . . . m − 1].
Output: The minimal number of insertions,
deletions, and substitutions needed to
transform A into B. This number is known
as the edit distance or Levenshtein distance.
Example: EDITING → DISTANCE

EDITING
DITING    (remove E)
DISTING   (insert S)
DISTANG   (replace I with A)
DISTANC   (replace G with C)
DISTANCE  (insert E)
Example: alignment

E D I − T I N G −
− D I S T A N C E

cost: 5

(The first column is a deletion; the two columns
with − on top are insertions; D, I, T, N are
matches; I→A and G→C are
substitutions/mismatches.)
Analyzing an Optimal Alignment

Consider the last column of an optimal alignment
of A[0 . . . n − 1] and B[0 . . . m − 1]. It is one of:

insertion: A[0 . . . n − 1] against B[0 . . . m − 2],
last column (−, B[m − 1])

deletion: A[0 . . . n − 2] against B[0 . . . m − 1],
last column (A[n − 1], −)

match/mismatch: A[0 . . . n − 2] against
B[0 . . . m − 2], last column (A[n − 1], B[m − 1])
Subproblems
Let ED(i, j) be the edit distance of
A[0 . . . i − 1] and B[0 . . . j − 1].
We know for sure that the last column of an
optimal alignment is either an insertion, a
deletion, or a match/mismatch.
What is left is an optimal alignment of the
corresponding two prefixes (by cut-and-paste).
Recurrence Relation

ED(i, j) = min { ED(i, j − 1) + 1,
                 ED(i − 1, j) + 1,
                 ED(i − 1, j − 1) + diff(A[i − 1], B[j − 1]) }

where diff(x, y) = 0 if x = y and 1 otherwise.

Base case: ED(i, 0) = i, ED(0, j) = j.
Recursive Algorithm

T = dict()

def edit_distance(a, b, i, j):
    if not (i, j) in T:
        if i == 0:
            T[i, j] = j
        elif j == 0:
            T[i, j] = i
        else:
            diff = 0 if a[i - 1] == b[j - 1] else 1
            T[i, j] = min(
                edit_distance(a, b, i - 1, j) + 1,
                edit_distance(a, b, i, j - 1) + 1,
                edit_distance(a, b, i - 1, j - 1) + diff)

    return T[i, j]

print(edit_distance(a="editing", b="distance", i=7, j=8))
Converting to an Iterative Algorithm

Use a 2D table to store the intermediate results.

ED(i, j) depends on ED(i − 1, j − 1)
(match/mismatch), ED(i − 1, j) (deletion), and
ED(i, j − 1) (insertion).
Filling the Table

Fill in the table row by row or column by column:
in either order, the cells ED(i − 1, j − 1),
ED(i − 1, j), and ED(i, j − 1) are ready before
ED(i, j) is computed.
Iterative Algorithm

def edit_distance(a, b):
    T = [[float("inf")] * (len(b) + 1)
         for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        T[i][0] = i
    for j in range(len(b) + 1):
        T[0][j] = j

    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            T[i][j] = min(T[i - 1][j] + 1,
                          T[i][j - 1] + 1,
                          T[i - 1][j - 1] + diff)

    return T[len(a)][len(b)]

print(edit_distance(a="distance", b="editing"))
Example

      E  D  I  T  I  N  G
   0  1  2  3  4  5  6  7
D  1  1  1  2  3  4  5  6
I  2  2  2  1  2  3  4  5
S  3  3  3  2  2  3  4  5
T  4  4  4  3  2  3  4  5
A  5  5  5  4  3  3  4  5
N  6  6  6  5  4  4  3  4
C  7  7  7  6  5  5  4  4
E  8  7  8  7  6  6  5  5
Brute Force

Recursively construct an alignment column by
column.

Then note that to extend a partially constructed
alignment optimally, one only needs to know the
length of the used prefix of A and the length of
the used prefix of B.
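A sketch of that brute force with only the two prefix lengths as parameters (align is a name introduced here, not from the lecture; exponential time, but the parameters (i, j) are exactly the subproblems that memoization would cache):

```python
def align(a, b, i, j):
    # Extend an alignment of a[:i] and b[:j]; only i and j matter,
    # not the columns built so far.
    if i == len(a) and j == len(b):
        return 0
    best = float("inf")
    if j < len(b):                  # insertion: column (-, b[j])
        best = min(best, align(a, b, i, j + 1) + 1)
    if i < len(a):                  # deletion: column (a[i], -)
        best = min(best, align(a, b, i + 1, j) + 1)
    if i < len(a) and j < len(b):   # match/mismatch
        best = min(best, align(a, b, i + 1, j + 1)
                   + (0 if a[i] == b[j] else 1))
    return best

print(align("editing", "distance", 0, 0))  # 5
```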
Reconstructing a Solution
To reconstruct a solution, we go back from the
cell (n, m) to the cell (0, 0)
If ED(i, j) = ED(i − 1, j) + 1, then there exists
an optimal alignment whose last column is a
deletion
If ED(i, j) = ED(i, j − 1) + 1, then there exists
an optimal alignment whose last column is an
insertion
If ED(i, j) = ED(i − 1, j − 1) + diff(A[i], B[j]),
then match (if A[i] = B[j]) or mismatch (if
A[i] ̸= B[j])
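These three cases translate directly into code: fill the table, then walk back from (n, m), emitting one alignment column per step. A minimal Python sketch (function and variable names are illustrative, not from the slides):

```python
def edit_distance_table(a, b):
    # d[i][j] = edit distance between a[:i] and b[:j]
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + diff)  # (mis)match
    return d

def alignment(a, b):
    # Trace back from (n, m) to (0, 0); each step prepends one column.
    d = edit_distance_table(a, b)
    i, j = len(a), len(b)
    top, bottom = [], []
    while i > 0 or j > 0:
        if i > 0 and d[i][j] == d[i - 1][j] + 1:    # deletion column
            top.append(a[i - 1]); bottom.append('-'); i -= 1
        elif j > 0 and d[i][j] == d[i][j - 1] + 1:  # insertion column
            top.append('-'); bottom.append(b[j - 1]); j -= 1
        else:                                       # match or mismatch
            top.append(a[i - 1]); bottom.append(b[j - 1]); i -= 1; j -= 1
    return ''.join(reversed(top)), ''.join(reversed(bottom))
```

Different tie-breaking orders in the traceback yield different, equally optimal alignments.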
Example
(in the original figure, arrows along the traceback path mark
(mis)match, insertion, and deletion steps)
     E  D  I  T  I  N  G
  0  1  2  3  4  5  6  7
D 1  1  1  2  3  4  5  6
I 2  2  2  1  2  3  4  5
S 3  3  3  2  2  3  4  5
T 4  4  4  3  2  3  4  5
A 5  5  5  4  3  3  4  5
N 6  6  6  5  4  4  3  4
C 7  7  7  6  5  5  4  4
E 8  7  8  7  6  6  5  5
- D I S T A N C E
E D I - T I N - G
Saving Space
When filling in the matrix, it is enough to keep
only the current column and the previous
column:
[figure: the (n + 1) × (m + 1) table with only columns j − 1 and j stored]
Thus, one can compute the edit distance of two
given strings A[1 . . . n] and B[1 . . . m] in time
O(nm) and space O(min{n, m}).
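A sketch of this space-saving computation; here the shorter string indexes the inner dimension, so only two arrays of its length are ever alive:

```python
def edit_distance(a, b):
    # Keep only two rows of the DP table; make b the shorter string
    # so the rows have length min(n, m) + 1.
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + diff)   # (mis)match
        prev = cur
    return prev[len(b)]
```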
Reconstructing a Solution
However, we need the whole table to find an
actual alignment (we trace an alignment from
the bottom right corner to the top left corner)
There exists an algorithm constructing an
optimal alignment in time O(nm) and space
O(n + m) (Hirschberg’s algorithm)
Weighted Edit Distance
The cost of insertions, deletions, and
substitutions is not necessarily identical
Spell checking: some substitutions are more
likely than others
Biology: some mutations are more likely than
others
Generalized Recurrence Relation
ED(i, j) is the minimum of:
ED(i, j − 1) + inscost(B[j]),
ED(i − 1, j) + delcost(A[i]),
ED(i − 1, j − 1) + substcost(A[i], B[j])
Knapsack Problem
Goal
Maximize
value ($) while
limiting total
weight (kg)
Applications
Classical problem in combinatorial optimization
with applications in resource allocation,
cryptography, planning
Weights and values may mean various resources
(to be maximized or limited):
Select a set of TV commercials (each commercial
has duration and cost) so that the total revenue is
maximal while the total length does not exceed the
length of the available time slot
Purchase computers for a data center to achieve
the maximal performance under limited budget
Problem Variations
knapsack:
fractional knapsack (can take fractions of items):
solved by a greedy algorithm
discrete knapsack (each item is either taken or not):
with repetitions: unlimited quantities
without repetitions: one of each item
greedy does not work for the discrete knapsack!
we will design a dynamic programming solution
Example
Items: (6 kg, $30), (3 kg, $14), (4 kg, $16), (2 kg, $9); capacity 10 kg
w/o repeats: 6 ($30) + 4 ($16), total: $46
w repeats: 6 ($30) + 2 ($9) + 2 ($9), total: $48
fractional: 6 ($30) + 3 ($14) + 1 ($4.5), total: $48.5
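For the fractional variant, the greedy strategy can be sketched as follows: take items in order of decreasing value per unit of weight, cutting the last item if it does not fully fit. On the example above it reproduces $48.5:

```python
def fractional_knapsack(W, w, v):
    # Sort item indices by value density, highest first.
    order = sorted(range(len(w)), key=lambda i: v[i] / w[i], reverse=True)
    total = 0.0
    for i in order:
        if W <= 0:
            break
        take = min(w[i], W)          # whole item, or whatever still fits
        total += v[i] * take / w[i]  # proportional value of the taken part
        W -= take
    return total

print(fractional_knapsack(W=10, w=[6, 3, 4, 2], v=[30, 14, 16, 9]))  # → 48.5
```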
Without repetitions:
one of each item
With repetitions:
unlimited quantities
Knapsack with repetitions problem
Input: Weights w0, . . . , wn−1 and values
v0, . . . , vn−1 of n items; total weight W
(vi’s, wi’s, and W are non-negative
integers).
Output: The maximum value of items whose
weight does not exceed W . Each item
can be used any number of times.
Analyzing an Optimal Solution
Consider an optimal solution and an item in it:
[figure: a knapsack of total weight W containing an item of weight wi]
If we take this item out, then we get an optimal
solution for a knapsack of total weight W − wi.
Subproblems
Let value(u) be the maximum value of a knapsack
of total weight u
value(u) = max over i with wi ≤ u of {value(u − wi) + vi}
Base case: value(0) = 0
This recurrence relation is transformed into
a recursive algorithm in a straightforward way
Recursive Algorithm
T = dict()

def knapsack(w, v, u):
    if u not in T:
        T[u] = 0
        for i in range(len(w)):
            if w[i] <= u:
                T[u] = max(T[u],
                           knapsack(w, v, u - w[i]) + v[i])
    return T[u]

print(knapsack(w=[6, 3, 4, 2],
               v=[30, 14, 16, 9], u=10))
Recursive into Iterative
As usual, one can transform a recursive
algorithm into an iterative one
For this, we gradually fill in an array T:
T[u] = value(u)
Iterative Algorithm
def knapsack(W, w, v):
    T = [0] * (W + 1)

    for u in range(1, W + 1):
        for i in range(len(w)):
            if w[i] <= u:
                T[u] = max(T[u], T[u - w[i]] + v[i])

    return T[W]

print(knapsack(W=10, w=[6, 3, 4, 2],
               v=[30, 14, 16, 9]))
Example: W = 10
Items: (6, $30), (3, $14), (4, $16), (2, $9)
u:    0  1  2  3   4   5   6   7   8   9  10
T[u]: 0  0  9  14  18  23  30  32  39  44  48
(the arrows in the original figure show, e.g., T[10] = 48
obtained as T[4] + 30 or as T[8] + 9)
Subproblems Revisited
Another way of arriving at subproblems:
optimizing brute force solution
Populate a list of used items one by one
Brute Force: Knapsack with Repetitions
def knapsack(W, w, v, items):
    weight = sum(w[i] for i in items)
    value = sum(v[i] for i in items)

    for i in range(len(w)):
        if weight + w[i] <= W:
            value = max(value,
                        knapsack(W, w, v, items + [i]))

    return value

print(knapsack(W=10, w=[6, 3, 4, 2],
               v=[30, 14, 16, 9], items=[]))
Subproblems
It remains to notice that the only important
thing for extending the current set of items is
the weight of this set
One then replaces items by their weight in the
list of parameters
Without repetitions:
one of each item
With repetitions:
unlimited quantities
Knapsack without repetitions problem
Input: Weights w0, . . . , wn−1 and values
v0, . . . , vn−1 of n items; total weight W
(vi’s, wi’s, and W are non-negative
integers).
Output: The maximum value of items whose
weight does not exceed W . Each item
can be used at most once.
Same Subproblems?
[figure: taking the last item of weight wn−1 out of a knapsack of
weight W leaves a knapsack of weight W − wn−1, but this item
must not be used again, so the subproblems from the previous
section do not suffice]
Subproblems
If the last item is taken into an optimal solution,
then what is left is an optimal solution for a
knapsack of total weight W − wn−1 using items
0, 1, . . . , n − 2.
If the last item is not used, then the whole
knapsack must be filled in optimally with items
0, 1, . . . , n − 2.
Subproblems
For 0 ≤ u ≤ W and 0 ≤ i ≤ n, value(u, i) is
the maximum value achievable using a knapsack
of weight u and the first i items.
Base case: value(u, 0) = 0, value(0, i) = 0
For i > 0, the item i − 1 is either used or not:
value(u, i) is equal to
max{value(u − wi−1, i − 1) + vi−1, value(u, i − 1)}
(the first option is available only when wi−1 ≤ u)
Recursive Algorithm
T = dict()

def knapsack(w, v, u, i):
    if (u, i) not in T:
        if i == 0:
            T[u, i] = 0
        else:
            T[u, i] = knapsack(w, v, u, i - 1)
            if u >= w[i - 1]:
                T[u, i] = max(T[u, i],
                              knapsack(w, v, u - w[i - 1], i - 1) + v[i - 1])

    return T[u, i]

print(knapsack(w=[6, 3, 4, 2],
               v=[30, 14, 16, 9], u=10, i=4))
Iterative Algorithm
def knapsack(W, w, v):
    T = [[None] * (len(w) + 1) for _ in range(W + 1)]

    for u in range(W + 1):
        T[u][0] = 0

    for i in range(1, len(w) + 1):
        for u in range(W + 1):
            T[u][i] = T[u][i - 1]
            if u >= w[i - 1]:
                T[u][i] = max(T[u][i],
                              T[u - w[i - 1]][i - 1] + v[i - 1])

    return T[W][len(w)]

print(knapsack(W=10, w=[6, 3, 4, 2],
               v=[30, 14, 16, 9]))
Analysis
Running time: O(nW )
Space: O(nW )
Space can be improved to O(W ) in the iterative
version: instead of storing the whole table, store
the current column and the previous one
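In fact, a single array of size W + 1 suffices if the items form the outer loop and u is scanned downwards, so that T[u − wi] still refers to the previous item's column. A sketch:

```python
def knapsack(W, w, v):
    # One-dimensional table over capacities; scanning u downwards
    # guarantees each item is used at most once.
    T = [0] * (W + 1)
    for i in range(len(w)):
        for u in range(W, w[i] - 1, -1):
            T[u] = max(T[u], T[u - w[i]] + v[i])
    return T[W]

print(knapsack(W=10, w=[6, 3, 4, 2], v=[30, 14, 16, 9]))  # → 46
```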
Reconstructing a Solution
As it usually happens, an optimal solution can
be unwound by analyzing the computed
solutions to subproblems
Start with u = W , i = n
If value(u, i) = value(u, i − 1), then item i − 1
is not taken. Update i to i − 1
Otherwise
value(u, i) = value(u − wi−1, i − 1) + vi−1 and
the item i − 1 is taken. Update i to i − 1 and u
to u − wi−1
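Put together, a sketch of filling the table and then unwinding the set of taken items (the function name is illustrative):

```python
def knapsack_with_items(W, w, v):
    # Fill T[u][i] = best value using a knapsack of weight u and items 0..i-1.
    n = len(w)
    T = [[0] * (n + 1) for _ in range(W + 1)]
    for i in range(1, n + 1):
        for u in range(W + 1):
            T[u][i] = T[u][i - 1]
            if u >= w[i - 1]:
                T[u][i] = max(T[u][i], T[u - w[i - 1]][i - 1] + v[i - 1])

    # Unwind: walk from (W, n) down to i = 0, recording taken items.
    items, u = [], W
    for i in range(n, 0, -1):
        if T[u][i] != T[u][i - 1]:   # item i - 1 must have been taken
            items.append(i - 1)
            u -= w[i - 1]
    return T[W][n], items
```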
Subproblems Revisited
How to implement a brute force solution for the
knapsack without repetitions problem?
Process items one by one. For each item, either
take into a bag or not
def knapsack(W, w, v, items, last):
    weight = sum(w[i] for i in items)

    if last == len(w) - 1:
        return sum(v[i] for i in items)

    value = knapsack(W, w, v, items, last + 1)
    if weight + w[last + 1] <= W:
        items.append(last + 1)
        value = max(value,
                    knapsack(W, w, v, items, last + 1))
        items.pop()

    return value

print(knapsack(W=10, w=[6, 3, 4, 2],
               v=[30, 14, 16, 9],
               items=[], last=-1))
Recursive vs Iterative
If all subproblems must be solved then an
iterative algorithm is usually faster since it has
no recursion overhead
There are cases however when one does not
need to solve all subproblems and the knapsack
problem is a good example: assume that W and
all wi’s are multiples of 100; then value(w) is
not needed if w is not divisible by 100
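A small experiment illustrating this: with weights that are all multiples of 100, the memoized recursive version touches only capacities divisible by 100, a tiny fraction of the W + 1 subproblems the iterative version would fill. The weights and values below are made up for the demonstration:

```python
T = dict()

def knapsack(w, v, u):
    # Memoized knapsack with repetitions; only capacities actually
    # reachable from u ever enter the table T.
    if u not in T:
        best = 0
        for i in range(len(w)):
            if w[i] <= u:
                best = max(best, knapsack(w, v, u - w[i]) + v[i])
        T[u] = best
    return T[u]

knapsack(w=[300, 500], v=[4, 7], u=10_000)
print(len(T), all(u % 100 == 0 for u in T))
```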
Polynomial Time?
The running time O(nW ) is not polynomial
since the input size is proportional to log W ,
not to W
In other words, the running time is O(n · 2^(log W))
E.g., for
W = 10 345 970 345 617 824 751
(only twenty digits!) the algorithm needs
roughly 10^20 basic operations
Solving the knapsack problem in truly
polynomial time is the essence of the P vs NP
problem, the most important open problem in
Computer Science (with a bounty of $1M)
Chain matrix multiplication
Input: Chain of n matrices A0, . . . , An−1 to be
multiplied.
Output: An order of multiplication minimizing the
total cost of multiplication.
Clarifications
Denote the sizes of matrices A0, . . . , An−1 by
m0 × m1, m1 × m2, . . . , mn−1 × mn
respectively. I.e., the size of Ai is mi × mi+1
Matrix multiplication is not commutative (in
general, A × B ̸= B × A), but it is associative:
A × (B × C) = (A × B) × C
Thus A × B × C × D can be computed, e.g., as
(A × B) × (C × D) or (A × (B × C)) × D
The cost of multiplying two matrices of size
p × q and q × r is pqr
Example: A × ((B × C) × D)
A: 50 × 20, B: 20 × 1, C: 1 × 10, D: 10 × 100
B × C: 20 × 10, cost: 20 · 1 · 10
(B × C) × D: 20 × 100, cost so far: 20 · 1 · 10 + 20 · 10 · 100
A × ((B × C) × D): 50 × 100,
total cost: 20 · 1 · 10 + 20 · 10 · 100 + 50 · 20 · 100 = 120 200
Example: (A × B) × (C × D)
A: 50 × 20, B: 20 × 1, C: 1 × 10, D: 10 × 100
A × B: 50 × 1, cost: 50 · 20 · 1
C × D: 1 × 100, cost so far: 50 · 20 · 1 + 1 · 10 · 100
(A × B) × (C × D): 50 × 100,
total cost: 50 · 20 · 1 + 1 · 10 · 100 + 50 · 1 · 100 = 7 000
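The two totals are easy to re-check programmatically; a sketch:

```python
def mult_cost(p, q, r):
    # Cost of multiplying a p×q matrix by a q×r matrix.
    return p * q * r

# Sizes: A: 50×20, B: 20×1, C: 1×10, D: 10×100.
# Order A × ((B × C) × D):
cost1 = (mult_cost(20, 1, 10)       # B × C          -> 20×10
         + mult_cost(20, 10, 100)   # (B×C) × D      -> 20×100
         + mult_cost(50, 20, 100))  # A × ((B×C)×D)  -> 50×100
# Order (A × B) × (C × D):
cost2 = (mult_cost(50, 20, 1)       # A × B          -> 50×1
         + mult_cost(1, 10, 100)    # C × D          -> 1×100
         + mult_cost(50, 1, 100))   # (A×B) × (C×D)  -> 50×100
print(cost1, cost2)  # → 120200 7000
```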
Order as a Full Binary Tree
Each order of multiplication corresponds to a full binary tree
whose leaves are the matrices, e.g.:
((A × B) × C) × D
A × ((B × C) × D)
(A × (B × C)) × D
Analyzing an Optimal Tree
[figure: an optimal tree whose subtrees cover the consecutive groups
A0, . . . , Ai−1; Ai, . . . , Aj−1; Aj, . . . , Ak−1; Ak, . . . , An−1]
Each subtree computes
the product of Ap, . . . , Aq for some p ≤ q
Subproblems
Let M(i, j) be the minimum cost of computing
Ai × · · · × Aj−1
Then
M(i, j) = min over i < k < j of {M(i, k) + M(k, j) + mi · mk · mj}
Base case: M(i, i + 1) = 0
Recursive Algorithm
T = dict()

def matrix_mult(m, i, j):
    if (i, j) not in T:
        if j == i + 1:
            T[i, j] = 0
        else:
            T[i, j] = float("inf")
            for k in range(i + 1, j):
                T[i, j] = min(T[i, j],
                              matrix_mult(m, i, k) +
                              matrix_mult(m, k, j) +
                              m[i] * m[j] * m[k])

    return T[i, j]

print(matrix_mult(m=[50, 20, 1, 10, 100], i=0, j=4))
Converting to an Iterative Algorithm
We want to solve subproblems going from
smaller size subproblems to larger size ones
The size is the number of matrices needed to be
multiplied: j − i
A possible order: process subproblems in order of
increasing size s = j − i, i.e., diagonal by diagonal
Iterative Algorithm
def matrix_mult(m):
    n = len(m) - 1
    T = [[float("inf")] * (n + 1) for _ in range(n + 1)]

    for i in range(n):
        T[i][i + 1] = 0

    for s in range(2, n + 1):
        for i in range(n - s + 1):
            j = i + s
            for k in range(i + 1, j):
                T[i][j] = min(T[i][j],
                              T[i][k] + T[k][j] +
                              m[i] * m[j] * m[k])

    return T[0][n]

print(matrix_mult(m=[50, 20, 1, 10, 100]))
Final Remarks
Running time: O(n^3)
To unwind a solution, go from the cell (0, n) to
a cell (i, i + 1)
Brute force search: recursively enumerate all
possible trees
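A sketch combining the iterative algorithm with the unwinding step: remembering the best split point k for every subproblem makes reconstructing a parenthesization a simple recursion (the function names are illustrative):

```python
def optimal_order(m):
    # T[i][j] = minimum cost of computing A_i x ... x A_{j-1};
    # best_k[i][j] remembers the split point that achieves it.
    n = len(m) - 1
    T = [[float("inf")] * (n + 1) for _ in range(n + 1)]
    best_k = [[None] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        T[i][i + 1] = 0
    for s in range(2, n + 1):
        for i in range(n - s + 1):
            j = i + s
            for k in range(i + 1, j):
                cost = T[i][k] + T[k][j] + m[i] * m[k] * m[j]
                if cost < T[i][j]:
                    T[i][j] = cost
                    best_k[i][j] = k

    def paren(i, j):
        # Unwind the stored split points into a parenthesized expression.
        if j == i + 1:
            return chr(ord('A') + i)
        k = best_k[i][j]
        return '(' + paren(i, k) + ' x ' + paren(k, j) + ')'

    return T[0][n], paren(0, n)

print(optimal_order([50, 20, 1, 10, 100]))
```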
Step 1 (the most important step)
Define subproblems and write down a
recurrence relation (with a base case)
either by analyzing the structure
of an optimal solution, or
by optimizing a brute force
solution
Subproblems: Review
1 Longest increasing subsequence: LIS(i) is the
length of the longest increasing subsequence ending
at element A[i]
2 Edit distance: ED(i, j) is the edit distance
between prefixes of length i and j
3 Knapsack: K(w) is the optimal value of
a knapsack of total weight w
4 Chain matrix multiplication: M(i, j) is the
optimal cost of multiplying the matrices
Ai through Aj−1
Step 2
Convert a recurrence relation into a
recursive algorithm:
store a solution to each
subproblem in a table
before solving a subproblem check
whether its solution is already
stored in the table
Step 3
Convert a recursive algorithm into an
iterative algorithm:
initialize the table
go from smaller subproblems to
larger ones
specify an order of subproblems
Step 4
Prove an upper bound on the running
time. Usually the product of the
number of subproblems and the time
needed to solve a subproblem is a
reasonable estimate.
Step 5
Uncover a solution
Step 6
Exploit the regular structure of the
table to check whether space can be
saved
Recursive vs Iterative
Advantages of iterative approach:
No recursion overhead
May allow saving space by exploiting a regular
structure of the table
Advantages of recursive approach:
May be faster if not all the subproblems need to be
solved
An order on subproblems is implicit
 
Daa
DaaDaa
Daa
 
Approximation algorithms
Approximation algorithmsApproximation algorithms
Approximation algorithms
 
Python.pptx
Python.pptxPython.pptx
Python.pptx
 
Exploring Tailrec Through Time Until Kotlin.pptx
Exploring Tailrec Through Time Until Kotlin.pptxExploring Tailrec Through Time Until Kotlin.pptx
Exploring Tailrec Through Time Until Kotlin.pptx
 
Dynamic pgmming
Dynamic pgmmingDynamic pgmming
Dynamic pgmming
 

Recently uploaded

Exception Handling notes in java exception
Exception Handling notes in java exceptionException Handling notes in java exception
Exception Handling notes in java exception
Ratnakar Mikkili
 
ACRP 4-09 Risk Assessment Method to Support Modification of Airfield Separat...
ACRP 4-09 Risk Assessment Method to Support Modification of Airfield Separat...ACRP 4-09 Risk Assessment Method to Support Modification of Airfield Separat...
ACRP 4-09 Risk Assessment Method to Support Modification of Airfield Separat...
Mukeshwaran Balu
 
Literature Review Basics and Understanding Reference Management.pptx
Literature Review Basics and Understanding Reference Management.pptxLiterature Review Basics and Understanding Reference Management.pptx
Literature Review Basics and Understanding Reference Management.pptx
Dr Ramhari Poudyal
 
spirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptxspirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptx
Madan Karki
 
Series of visio cisco devices Cisco_Icons.ppt
Series of visio cisco devices Cisco_Icons.pptSeries of visio cisco devices Cisco_Icons.ppt
Series of visio cisco devices Cisco_Icons.ppt
PauloRodrigues104553
 
Unbalanced Three Phase Systems and circuits.pptx
Unbalanced Three Phase Systems and circuits.pptxUnbalanced Three Phase Systems and circuits.pptx
Unbalanced Three Phase Systems and circuits.pptx
ChristineTorrepenida1
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
zwunae
 
This is my Environmental physics presentation
This is my Environmental physics presentationThis is my Environmental physics presentation
This is my Environmental physics presentation
ZainabHashmi17
 
14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application
SyedAbiiAzazi1
 
Recycled Concrete Aggregate in Construction Part III
Recycled Concrete Aggregate in Construction Part IIIRecycled Concrete Aggregate in Construction Part III
Recycled Concrete Aggregate in Construction Part III
Aditya Rajan Patra
 
bank management system in java and mysql report1.pdf
bank management system in java and mysql report1.pdfbank management system in java and mysql report1.pdf
bank management system in java and mysql report1.pdf
Divyam548318
 
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTCHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
jpsjournal1
 
Low power architecture of logic gates using adiabatic techniques
Low power architecture of logic gates using adiabatic techniquesLow power architecture of logic gates using adiabatic techniques
Low power architecture of logic gates using adiabatic techniques
nooriasukmaningtyas
 
Understanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine LearningUnderstanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine Learning
SUTEJAS
 
Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...
IJECEIAES
 
A review on techniques and modelling methodologies used for checking electrom...
A review on techniques and modelling methodologies used for checking electrom...A review on techniques and modelling methodologies used for checking electrom...
A review on techniques and modelling methodologies used for checking electrom...
nooriasukmaningtyas
 
一比一原版(UC Berkeley毕业证)加利福尼亚大学|伯克利分校毕业证成绩单专业办理
一比一原版(UC Berkeley毕业证)加利福尼亚大学|伯克利分校毕业证成绩单专业办理一比一原版(UC Berkeley毕业证)加利福尼亚大学|伯克利分校毕业证成绩单专业办理
一比一原版(UC Berkeley毕业证)加利福尼亚大学|伯克利分校毕业证成绩单专业办理
skuxot
 
ACEP Magazine edition 4th launched on 05.06.2024
ACEP Magazine edition 4th launched on 05.06.2024ACEP Magazine edition 4th launched on 05.06.2024
ACEP Magazine edition 4th launched on 05.06.2024
Rahul
 
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
ihlasbinance2003
 
Swimming pool mechanical components design.pptx
Swimming pool  mechanical components design.pptxSwimming pool  mechanical components design.pptx
Swimming pool mechanical components design.pptx
yokeleetan1
 

Recently uploaded (20)

Exception Handling notes in java exception
Exception Handling notes in java exceptionException Handling notes in java exception
Exception Handling notes in java exception
 
ACRP 4-09 Risk Assessment Method to Support Modification of Airfield Separat...
ACRP 4-09 Risk Assessment Method to Support Modification of Airfield Separat...ACRP 4-09 Risk Assessment Method to Support Modification of Airfield Separat...
ACRP 4-09 Risk Assessment Method to Support Modification of Airfield Separat...
 
Literature Review Basics and Understanding Reference Management.pptx
Literature Review Basics and Understanding Reference Management.pptxLiterature Review Basics and Understanding Reference Management.pptx
Literature Review Basics and Understanding Reference Management.pptx
 
spirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptxspirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptx
 
Series of visio cisco devices Cisco_Icons.ppt
Series of visio cisco devices Cisco_Icons.pptSeries of visio cisco devices Cisco_Icons.ppt
Series of visio cisco devices Cisco_Icons.ppt
 
Unbalanced Three Phase Systems and circuits.pptx
Unbalanced Three Phase Systems and circuits.pptxUnbalanced Three Phase Systems and circuits.pptx
Unbalanced Three Phase Systems and circuits.pptx
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
 
This is my Environmental physics presentation
This is my Environmental physics presentationThis is my Environmental physics presentation
This is my Environmental physics presentation
 
14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application
 
Recycled Concrete Aggregate in Construction Part III
Recycled Concrete Aggregate in Construction Part IIIRecycled Concrete Aggregate in Construction Part III
Recycled Concrete Aggregate in Construction Part III
 
bank management system in java and mysql report1.pdf
bank management system in java and mysql report1.pdfbank management system in java and mysql report1.pdf
bank management system in java and mysql report1.pdf
 
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTCHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
 
Low power architecture of logic gates using adiabatic techniques
Low power architecture of logic gates using adiabatic techniquesLow power architecture of logic gates using adiabatic techniques
Low power architecture of logic gates using adiabatic techniques
 
Understanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine LearningUnderstanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine Learning
 
Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...
 
A review on techniques and modelling methodologies used for checking electrom...
A review on techniques and modelling methodologies used for checking electrom...A review on techniques and modelling methodologies used for checking electrom...
A review on techniques and modelling methodologies used for checking electrom...
 
一比一原版(UC Berkeley毕业证)加利福尼亚大学|伯克利分校毕业证成绩单专业办理
一比一原版(UC Berkeley毕业证)加利福尼亚大学|伯克利分校毕业证成绩单专业办理一比一原版(UC Berkeley毕业证)加利福尼亚大学|伯克利分校毕业证成绩单专业办理
一比一原版(UC Berkeley毕业证)加利福尼亚大学|伯克利分校毕业证成绩单专业办理
 
ACEP Magazine edition 4th launched on 05.06.2024
ACEP Magazine edition 4th launched on 05.06.2024ACEP Magazine edition 4th launched on 05.06.2024
ACEP Magazine edition 4th launched on 05.06.2024
 
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
 
Swimming pool mechanical components design.pptx
Swimming pool  mechanical components design.pptxSwimming pool  mechanical components design.pptx
Swimming pool mechanical components design.pptx
 

Dynamic Programming

  • 1. Dynamic Programming Alexander S. Kulikov Competitive Programmer’s Core Skills https://www.coursera.org/learn/competitive-programming-core-skills
  • 2. Outline 1 1: Longest Increasing Subsequence 1.1: Warm-up 1.2: Subproblems and Recurrence Relation 1.3: Reconstructing a Solution 1.4: Subproblems Revisited 2 2: Edit Distance 2.1: Algorithm 2.2: Reconstructing a Solution 2.3: Final Remarks 3 3: Knapsack 3.1: Knapsack with Repetitions 3.2: Knapsack without Repetitions 3.3: Final Remarks 4 4: Chain Matrix Multiplication 4.1: Chain Matrix Multiplication 4.2: Summary
  • 3. Dynamic Programming Extremely powerful algorithmic technique with applications in optimization, scheduling, planning, economics, bioinformatics, etc
  • 4. Dynamic Programming Extremely powerful algorithmic technique with applications in optimization, scheduling, planning, economics, bioinformatics, etc At contests, probably the most popular type of problems
  • 5. Dynamic Programming Extremely powerful algorithmic technique with applications in optimization, scheduling, planning, economics, bioinformatics, etc At contests, probably the most popular type of problems A solution is usually not so easy to find, but when found, is easily implementable
  • 6. Dynamic Programming Extremely powerful algorithmic technique with applications in optimization, scheduling, planning, economics, bioinformatics, etc At contests, probably the most popular type of problems A solution is usually not so easy to find, but when found, is easily implementable Need a lot of practice!
  • 7. Fibonacci numbers  Fibonacci numbers are defined by the recurrence

        Fn = 0,             if n = 0,
             1,             if n = 1,
             Fn−1 + Fn−2,   if n > 1.

        0, 1, 1, 2, 3, 5, 8, 13, 21, 34, . . .
  • 8. Computing Fibonacci Numbers Computing Fn Input: An integer n ≥ 0. Output: The n-th Fibonacci number Fn.
  • 9. Computing Fibonacci Numbers Computing Fn Input: An integer n ≥ 0. Output: The n-th Fibonacci number Fn. 1 def f i b (n ) : 2 i f n <= 1: 3 return n 4 return f i b (n − 1) + f i b (n − 2)
  • 10. Recursion Tree

        Fn
        ├── Fn−1
        │   ├── Fn−2
        │   │   ├── Fn−3
        │   │   └── Fn−4
        │   └── Fn−3
        │       ├── Fn−4
        │       └── Fn−5
        └── Fn−2
            ├── Fn−3
            │   ├── Fn−4
            │   └── Fn−5
            └── Fn−4
                ├── Fn−5
                └── Fn−6
        . . .
  • 12. Running Time Essentially, the algorithm computes Fn as the sum of Fn 1’s
  • 13. Running Time Essentially, the algorithm computes Fn as the sum of Fn 1’s Hence its running time is O(Fn)
  • 14. Running Time Essentially, the algorithm computes Fn as the sum of Fn 1’s Hence its running time is O(Fn) But Fibonacci numbers grow exponentially fast: Fn ≈ 𝜑^n, where 𝜑 = 1.618 . . . is the golden ratio
  • 15. Running Time Essentially, the algorithm computes Fn as the sum of Fn 1’s Hence its running time is O(Fn) But Fibonacci numbers grow exponentially fast: Fn ≈ 𝜑^n, where 𝜑 = 1.618 . . . is the golden ratio E.g., F150 is already 31 decimal digits long
  • 16. Running Time Essentially, the algorithm computes Fn as the sum of Fn 1’s Hence its running time is O(Fn) But Fibonacci numbers grow exponentially fast: Fn ≈ 𝜑^n, where 𝜑 = 1.618 . . . is the golden ratio E.g., F150 is already 31 decimal digits long The Sun may die before your computer returns F150
  • 18. Reason Many computations are repeated “Those who cannot remember the past are condemned to repeat it.” (George Santayana)
  • 19. Reason Many computations are repeated “Those who cannot remember the past are condemned to repeat it.” (George Santayana) A simple, but crucial idea: instead of recomputing the intermediate results, let’s store them once they are computed
  • 20. Memoization

        def fib(n):
            if n <= 1:
                return n
            return fib(n - 1) + fib(n - 2)
  • 21. Memoization

        def fib(n):
            if n <= 1:
                return n
            return fib(n - 1) + fib(n - 2)

        T = dict()

        def fib(n):
            if n not in T:
                if n <= 1:
                    T[n] = n
                else:
                    T[n] = fib(n - 1) + fib(n - 2)
            return T[n]
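In Python, the same memoization is also available off the shelf; this lru_cache variant is a sketch equivalent to the dictionary-based code on the slide, not part of the original deck:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # remember every fib(n) once it has been computed
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075, returned instantly
```

With the cache in place, each value fib(0), …, fib(n) is computed exactly once, so the running time drops from exponential to O(n) additions.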
  • 22. Hm... But do we really need all this fancy stuff (recursion, memoization, dictionaries) to solve this simple problem?
  • 23. Hm... But do we really need all this fancy stuff (recursion, memoization, dictionaries) to solve this simple problem? After all, this is how you would compute F5 by hand:
  • 24. Hm... But do we really need all this fancy stuff (recursion, memoization, dictionaries) to solve this simple problem? After all, this is how you would compute F5 by hand: 1 F0 = 0, F1 = 1
  • 25. Hm... But do we really need all this fancy stuff (recursion, memoization, dictionaries) to solve this simple problem? After all, this is how you would compute F5 by hand: 1 F0 = 0, F1 = 1 2 F2 = 0 + 1 = 1
  • 26. Hm... But do we really need all this fancy stuff (recursion, memoization, dictionaries) to solve this simple problem? After all, this is how you would compute F5 by hand: 1 F0 = 0, F1 = 1 2 F2 = 0 + 1 = 1 3 F3 = 1 + 1 = 2
  • 27. Hm... But do we really need all this fancy stuff (recursion, memoization, dictionaries) to solve this simple problem? After all, this is how you would compute F5 by hand: 1 F0 = 0, F1 = 1 2 F2 = 0 + 1 = 1 3 F3 = 1 + 1 = 2 4 F4 = 1 + 2 = 3
  • 28. Hm... But do we really need all this fancy stuff (recursion, memoization, dictionaries) to solve this simple problem? After all, this is how you would compute F5 by hand: 1 F0 = 0, F1 = 1 2 F2 = 0 + 1 = 1 3 F3 = 1 + 1 = 2 4 F4 = 1 + 2 = 3 5 F5 = 2 + 3 = 5
  • 29. Iterative Algorithm

        def fib(n):
            if n <= 1:  # handle n = 0 and n = 1 directly, so the table below always has two cells
                return n
            T = [None] * (n + 1)
            T[0], T[1] = 0, 1

            for i in range(2, n + 1):
                T[i] = T[i - 1] + T[i - 2]

            return T[n]
  • 30. Hm Again... But do we really need to waste so much space?
  • 31. Hm Again... But do we really need to waste so much space?

        def fib(n):
            if n <= 1:
                return n

            previous, current = 0, 1
            for _ in range(n - 1):
                new_current = previous + current
                previous, current = current, new_current

            return current
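As a quick sanity check, added here rather than taken from the slides, the constant-space version reproduces the sequence from the definition and confirms the earlier claim that F150 has 31 decimal digits:

```python
def fib(n):
    # Constant extra space: keep only the last two Fibonacci numbers.
    if n <= 1:
        return n
    previous, current = 0, 1
    for _ in range(n - 1):
        previous, current = current, previous + current
    return current

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(len(str(fib(150))))          # 31
```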
  • 33. Running Time O(n) additions On the other hand, recall that Fibonacci numbers grow exponentially fast: the binary length of Fn is O(n)
  • 34. Running Time O(n) additions On the other hand, recall that Fibonacci numbers grow exponentially fast: the binary length of Fn is O(n) In theory: we should not treat such additions as basic operations
  • 35. Running Time O(n) additions On the other hand, recall that Fibonacci numbers grow exponentially fast: the binary length of Fn is O(n) In theory: we should not treat such additions as basic operations In practice: just F100 does not fit into a 64-bit integer type anymore, hence we need bignum arithmetic
  • 36. Summary The key idea of dynamic programming: avoid recomputing the same thing again!
  • 37. Summary The key idea of dynamic programming: avoid recomputing the same thing again! At the same time, the case of Fibonacci numbers is a slightly artificial example of dynamic programming since it is clear from the very beginning what intermediate results we need to compute the final result
  • 38. Outline 1 1: Longest Increasing Subsequence 1.1: Warm-up 1.2: Subproblems and Recurrence Relation 1.3: Reconstructing a Solution 1.4: Subproblems Revisited 2 2: Edit Distance 2.1: Algorithm 2.2: Reconstructing a Solution 2.3: Final Remarks 3 3: Knapsack 3.1: Knapsack with Repetitions 3.2: Knapsack without Repetitions 3.3: Final Remarks 4 4: Chain Matrix Multiplication 4.1: Chain Matrix Multiplication 4.2: Summary
  • 39. Longest Increasing Subsequence Longest increasing subsequence Input: An array A = [a_0, a_1, . . . , a_{n−1}]. Output: A longest increasing subsequence (LIS), i.e., a_{i_1}, a_{i_2}, . . . , a_{i_k} such that i_1 < i_2 < · · · < i_k, a_{i_1} < a_{i_2} < · · · < a_{i_k}, and k is maximal.
  • 40. Example  A = [7, 2, 1, 3, 8, 4, 9, 1, 2, 6, 5, 9, 3, 8, 1]
  • 42. Analyzing an Optimal Solution Consider the last element x of an optimal increasing subsequence and its previous element z: z x
  • 43. Analyzing an Optimal Solution Consider the last element x of an optimal increasing subsequence and its previous element z: z x First of all, z < x
  • 44. Analyzing an Optimal Solution Consider the last element x of an optimal increasing subsequence and its previous element z: z x First of all, z < x Moreover, the prefix of the IS ending at z must be an optimal IS ending at z as otherwise the initial IS would not be optimal: z x
  • 45. Analyzing an Optimal Solution Consider the last element x of an optimal increasing subsequence and its previous element z: z x First of all, z < x Moreover, the prefix of the IS ending at z must be an optimal IS ending at z as otherwise the initial IS would not be optimal: z x
  • 46. Analyzing an Optimal Solution Consider the last element x of an optimal increasing subsequence and its previous element z: z x First of all, z < x Moreover, the prefix of the IS ending at z must be an optimal IS ending at z as otherwise the initial IS would not be optimal: z x Optimal substructure by “cut-and-paste” trick
  • 47. Subproblems and Recurrence Relation Let LIS(i) be the length of a longest increasing subsequence ending at A[i]
  • 48. Subproblems and Recurrence Relation Let LIS(i) be the length of a longest increasing subsequence ending at A[i] Then LIS(i) = 1 + max{LIS(j): j < i and A[j] < A[i]}
  • 49. Subproblems and Recurrence Relation Let LIS(i) be the length of a longest increasing subsequence ending at A[i] Then LIS(i) = 1 + max{LIS(j): j < i and A[j] < A[i]} Convention: the maximum of an empty set is equal to zero
  • 50. Subproblems and Recurrence Relation Let LIS(i) be the length of a longest increasing subsequence ending at A[i] Then LIS(i) = 1 + max{LIS(j): j < i and A[j] < A[i]} Convention: the maximum of an empty set is equal to zero Base case: LIS(0) = 1
  • 51. Algorithm When we have a recurrence relation at hand, converting it to a recursive algorithm with memoization is just a technicality We will use a table T to store the results: T[i] = LIS(i)
  • 52. Algorithm When we have a recurrence relation at hand, converting it to a recursive algorithm with memoization is just a technicality We will use a table T to store the results: T[i] = LIS(i) Initially, T is empty. When LIS(i) is computed, we store its value at T[i] (so that we will never recompute LIS(i) again)
  • 53. Algorithm When we have a recurrence relation at hand, converting it to a recursive algorithm with memoization is just a technicality We will use a table T to store the results: T[i] = LIS(i) Initially, T is empty. When LIS(i) is computed, we store its value at T[i] (so that we will never recompute LIS(i) again) The exact data structure behind T is not that important at this point: it could be an array or a hash table
  • 54. Memoization

        T = dict()

        def lis(A, i):
            if i not in T:
                T[i] = 1
                for j in range(i):
                    if A[j] < A[i]:
                        T[i] = max(T[i], lis(A, j) + 1)
            return T[i]

        A = [7, 2, 1, 3, 8, 4, 9, 1, 2, 6, 5, 9, 3]
        print(max(lis(A, i) for i in range(len(A))))
  • 55. Running Time The running time is quadratic, O(n^2): there are n “serious” recursive calls (that are not just table look-ups), each of them needs time O(n) (not counting the inner recursive calls)
  • 56. Table and Recursion We need to store in the table T the value of LIS(i) for all i from 0 to n − 1
  • 57. Table and Recursion We need to store in the table T the value of LIS(i) for all i from 0 to n − 1 Reasonable choice of a data structure for T: an array of size n
  • 58. Table and Recursion We need to store in the table T the value of LIS(i) for all i from 0 to n − 1 Reasonable choice of a data structure for T: an array of size n Moreover, one can fill in this array iteratively instead of recursively
  • 59. Iterative Algorithm

        def lis(A):
            T = [None] * len(A)

            for i in range(len(A)):
                T[i] = 1
                for j in range(i):
                    if A[j] < A[i] and T[i] < T[j] + 1:
                        T[i] = T[j] + 1

            return max(T[i] for i in range(len(A)))
  • 60. Iterative Algorithm

        def lis(A):
            T = [None] * len(A)

            for i in range(len(A)):
                T[i] = 1
                for j in range(i):
                    if A[j] < A[i] and T[i] < T[j] + 1:
                        T[i] = T[j] + 1

            return max(T[i] for i in range(len(A)))

    Crucial property: when computing T[i], T[j] for all j < i have already been computed
  • 61. Iterative Algorithm

        def lis(A):
            T = [None] * len(A)

            for i in range(len(A)):
                T[i] = 1
                for j in range(i):
                    if A[j] < A[i] and T[i] < T[j] + 1:
                        T[i] = T[j] + 1

            return max(T[i] for i in range(len(A)))

    Crucial property: when computing T[i], T[j] for all j < i have already been computed Running time: O(n^2)
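Run on the fifteen-element array from the example slides (a check added here, not part of the deck), the iterative algorithm reports length 5:

```python
def lis(A):
    # T[i] = length of a longest increasing subsequence ending at A[i]
    T = [1] * len(A)
    for i in range(len(A)):
        for j in range(i):
            if A[j] < A[i] and T[i] < T[j] + 1:
                T[i] = T[j] + 1
    return max(T)

print(lis([7, 2, 1, 3, 8, 4, 9, 1, 2, 6, 5, 9, 3, 8, 1]))  # 5 (e.g. 2, 3, 4, 6, 9)
```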
  • 62. Outline 1 1: Longest Increasing Subsequence 1.1: Warm-up 1.2: Subproblems and Recurrence Relation 1.3: Reconstructing a Solution 1.4: Subproblems Revisited 2 2: Edit Distance 2.1: Algorithm 2.2: Reconstructing a Solution 2.3: Final Remarks 3 3: Knapsack 3.1: Knapsack with Repetitions 3.2: Knapsack without Repetitions 3.3: Final Remarks 4 4: Chain Matrix Multiplication 4.1: Chain Matrix Multiplication 4.2: Summary
  • 63. Reconstructing a Solution How to reconstruct an optimal IS?
  • 64. Reconstructing a Solution How to reconstruct an optimal IS? In order to reconstruct it, for each subproblem we will keep its optimal value and a choice leading to this value
  • 65. Adjusting the Algorithm

        def lis(A):
            T = [None] * len(A)
            prev = [None] * len(A)

            for i in range(len(A)):
                T[i] = 1
                prev[i] = -1
                for j in range(i):
                    if A[j] < A[i] and T[i] < T[j] + 1:
                        T[i] = T[j] + 1
                        prev[i] = j
  • 66. Example

        i:     0   1   2   3   4   5   6   7   8   9  10  11  12  13  14
        A:     7   2   1   3   8   4   9   1   2   6   5   9   3   8   1
        T:     1   1   1   2   3   3   4   1   2   4   4   5   3   5   1
        prev: -1  -1  -1   1   3   3   4  -1   2   5   5   9   8   9  -1
  • 77. Unwinding Solution

        last = 0
        for i in range(1, len(A)):
            if T[i] > T[last]:
                last = i

        lis = []
        current = last
        while current >= 0:
            lis.append(current)
            current = prev[current]
        lis.reverse()
        return [A[i] for i in lis]
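Putting the bookkeeping and the unwinding together gives one self-contained function; this assembly (with the hypothetical name reconstruct_lis) is a sketch based on the two fragments above:

```python
def reconstruct_lis(A):
    # Fill T (lengths) and prev (back-pointers), as in the adjusted algorithm.
    T = [1] * len(A)
    prev = [-1] * len(A)
    for i in range(len(A)):
        for j in range(i):
            if A[j] < A[i] and T[i] < T[j] + 1:
                T[i] = T[j] + 1
                prev[i] = j

    # Find an index where an optimal increasing subsequence ends.
    last = 0
    for i in range(1, len(A)):
        if T[i] > T[last]:
            last = i

    # Walk the back-pointers, then reverse to restore left-to-right order.
    indices = []
    current = last
    while current >= 0:
        indices.append(current)
        current = prev[current]
    indices.reverse()
    return [A[i] for i in indices]

print(reconstruct_lis([7, 2, 1, 3, 8, 4, 9, 1, 2, 6, 5, 9, 3, 8, 1]))  # [2, 3, 4, 6, 9]
```

On the example array this follows prev from index 11 back through 9, 5, 3, 1, recovering the subsequence 2, 3, 4, 6, 9 of length 5.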
  • 78. Reconstructing Again  Reconstructing without prev

        i:  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14
        A:  7  2  1  3  8  4  9  1  2  6  5  9  3  8  1
        T:  1  1  1  2  3  3  4  1  2  4  4  5  3  5  1
  • 83. Summary Optimal substructure property: any prefix of an optimal increasing subsequence must be a longest increasing subsequence ending at this particular element
  • 84. Summary Optimal substructure property: any prefix of an optimal increasing subsequence must be a longest increasing subsequence ending at this particular element Subproblem: the length of an optimal increasing subsequence ending at i-th element
  • 85. Summary Optimal substructure property: any prefix of an optimal increasing subsequence must be a longest increasing subsequence ending at this particular element Subproblem: the length of an optimal increasing subsequence ending at i-th element A recurrence relation for subproblems can be immediately converted into a recursive algorithm with memoization
  • 86. Summary Optimal substructure property: any prefix of an optimal increasing subsequence must be a longest increasing subsequence ending at this particular element Subproblem: the length of an optimal increasing subsequence ending at i-th element A recurrence relation for subproblems can be immediately converted into a recursive algorithm with memoization A recursive algorithm, in turn, can be converted into an iterative one
  • 87. Summary Optimal substructure property: any prefix of an optimal increasing subsequence must be a longest increasing subsequence ending at this particular element Subproblem: the length of an optimal increasing subsequence ending at i-th element A recurrence relation for subproblems can be immediately converted into a recursive algorithm with memoization A recursive algorithm, in turn, can be converted into an iterative one An optimal solution can be recovered either by using an additional bookkeeping info or by using the computed solutions to all subproblems
  • 88. Outline 1 1: Longest Increasing Subsequence 1.1: Warm-up 1.2: Subproblems and Recurrence Relation 1.3: Reconstructing a Solution 1.4: Subproblems Revisited 2 2: Edit Distance 2.1: Algorithm 2.2: Reconstructing a Solution 2.3: Final Remarks 3 3: Knapsack 3.1: Knapsack with Repetitions 3.2: Knapsack without Repetitions 3.3: Final Remarks 4 4: Chain Matrix Multiplication 4.1: Chain Matrix Multiplication 4.2: Summary
  • 89. The Most Creative Part In most DP algorithms, the most creative part is coming up with the right notion of a subproblem and a recurrence relation
  • 90. The Most Creative Part In most DP algorithms, the most creative part is coming up with the right notion of a subproblem and a recurrence relation When a recurrence relation is written down, it can be wrapped with memoization to get a recursive algorithm
  • 91. The Most Creative Part In most DP algorithms, the most creative part is coming up with the right notion of a subproblem and a recurrence relation When a recurrence relation is written down, it can be wrapped with memoization to get a recursive algorithm In the previous video, we arrived at a reasonable subproblem by analyzing the structure of an optimal solution
  • 92. The Most Creative Part In most DP algorithms, the most creative part is coming up with the right notion of a subproblem and a recurrence relation When a recurrence relation is written down, it can be wrapped with memoization to get a recursive algorithm In the previous video, we arrived at a reasonable subproblem by analyzing the structure of an optimal solution In this video, we’ll provide an alternative way of arriving at subproblems: implement a naive brute force solution, then optimize it
  • 97. Brute Force: Plan Need the longest increasing subsequence? No problem! Just iterate over all subsequences and select the longest one: Start with an empty sequence Extend it element by element recursively Keep track of the length of the sequence This is going to be slow, but not to worry: we will optimize it later
  • 98. Brute Force: Code

def lis(A, seq):
    result = len(seq)

    if len(seq) == 0:
        last_index = -1
        last_element = float("-inf")
    else:
        last_index = seq[-1]
        last_element = A[last_index]

    for i in range(last_index + 1, len(A)):
        if A[i] > last_element:
            result = max(result, lis(A, seq + [i]))

    return result

print(lis(A=[7, 2, 1, 3, 8, 4, 9], seq=[]))
  • 102. Optimizing At each step, we are trying to extend the current sequence For this, we pass the current sequence to each recursive call At the same time, code inspection reveals that we are not using all of the sequence: we are only interested in its last element and its length Let’s optimize!
  • 103. Optimized Code

def lis(A, seq_len, last_index):
    if last_index == -1:
        last_element = float("-inf")
    else:
        last_element = A[last_index]

    result = seq_len

    for i in range(last_index + 1, len(A)):
        if A[i] > last_element:
            result = max(result, lis(A, seq_len + 1, i))

    return result

print(lis([3, 2, 7, 8, 9, 5, 8], 0, -1))
  • 107. Optimizing Further Inspecting the code further, we realize that seq_len is not used for extending the current sequence (we don’t need to know even the length of the initial part of the sequence to optimally extend it) More formally, for any x, extend(A, seq_len, i) is equal to extend(A, seq_len - x, i) + x Hence, can optimize the code as follows: max(result, 1 + seq_len + extend(A, 0, i)) Excludes seq_len from the list of parameters!
  • 109. Resulting Code

def lis(A, last_index):
    if last_index == -1:
        last_element = float("-inf")
    else:
        last_element = A[last_index]

    result = 0

    for i in range(last_index + 1, len(A)):
        if A[i] > last_element:
            result = max(result, 1 + lis(A, i))

    return result

print(lis([8, 2, 3, 4, 5, 6, 7], -1))

It remains to add memoization!
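The memoization step the slide leaves to the reader can be sketched as follows. The cache T keyed by last_index is my choice (mirroring the deck's other memoized solutions), and it assumes a single fixed array A per run:

```python
T = dict()  # cache: last_index -> best extension length (valid for one fixed A)

def lis(A, last_index):
    if last_index not in T:
        if last_index == -1:
            last_element = float("-inf")
        else:
            last_element = A[last_index]

        result = 0
        for i in range(last_index + 1, len(A)):
            if A[i] > last_element:
                result = max(result, 1 + lis(A, i))
        T[last_index] = result

    return T[last_index]

print(lis([8, 2, 3, 4, 5, 6, 7], -1))  # prints 6 (the subsequence 2, 3, 4, 5, 6, 7)
```

Each of the n + 1 possible values of last_index is now computed once, so the running time drops from exponential to O(n^2).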
  • 113. Summary
Subproblems (and a recurrence relation on them) are the most important ingredient of a dynamic programming algorithm.
Two common ways of arriving at the right subproblem:
Analyze the structure of an optimal solution.
Implement a brute force solution and optimize it.
  • 115. Statement Edit distance Input: Two strings A[0 . . . n − 1] and B[0 . . . m − 1]. Output: The minimal number of insertions, deletions, and substitutions needed to transform A to B. This number is known as edit distance or Levenshtein distance.
  • 121. Example: EDITING → DISTANCE
EDITING → DITING (remove E) → DISTING (insert S) → DISTANG (replace I with A) → DISTANC (replace G with C) → DISTANCE (insert E)
  • 123. Example: alignment
E D I - T I N G -
- D I S T A N C E
cost: 5 (columns are matches, substitutions/mismatches, insertions, and deletions)
  • 127. Analyzing an Optimal Alignment
The last column of an alignment of A[0 . . . n − 1] and B[0 . . . m − 1] is one of:
insertion: B[m − 1] against a gap; the rest aligns A[0 . . . n − 1] with B[0 . . . m − 2]
deletion: A[n − 1] against a gap; the rest aligns A[0 . . . n − 2] with B[0 . . . m − 1]
match/mismatch: A[n − 1] against B[m − 1]; the rest aligns A[0 . . . n − 2] with B[0 . . . m − 2]
  • 130. Subproblems Let ED(i, j) be the edit distance of A[0 . . . i − 1] and B[0 . . . j − 1]. We know for sure that the last column of an optimal alignment is either an insertion, a deletion, or a match/mismatch. What is left is an optimal alignment of the corresponding two prefixes (by cut-and-paste).
  • 132. Recurrence Relation
ED(i, j) = min { ED(i, j − 1) + 1, ED(i − 1, j) + 1, ED(i − 1, j − 1) + diff(A[i − 1], B[j − 1]) }
Base case: ED(i, 0) = i, ED(0, j) = j
  • 133. Recursive Algorithm

T = dict()

def edit_distance(a, b, i, j):
    if (i, j) not in T:
        if i == 0:
            T[i, j] = j
        elif j == 0:
            T[i, j] = i
        else:
            diff = 0 if a[i - 1] == b[j - 1] else 1
            T[i, j] = min(edit_distance(a, b, i - 1, j) + 1,
                          edit_distance(a, b, i, j - 1) + 1,
                          edit_distance(a, b, i - 1, j - 1) + diff)

    return T[i, j]

print(edit_distance(a="editing", b="distance", i=7, j=8))
  • 135. Converting to an Iterative Algorithm
Use a 2D table to store the intermediate results.
ED(i, j) depends on ED(i − 1, j − 1) (match/mismatch), ED(i − 1, j) (deletion), and ED(i, j − 1) (insertion).
  • 136. Filling the Table
Fill in the table row by row or column by column.
  • 137. Iterative Algorithm

def edit_distance(a, b):
    T = [[float("inf")] * (len(b) + 1)
         for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        T[i][0] = i
    for j in range(len(b) + 1):
        T[0][j] = j

    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            T[i][j] = min(T[i - 1][j] + 1,
                          T[i][j - 1] + 1,
                          T[i - 1][j - 1] + diff)

    return T[len(a)][len(b)]

print(edit_distance(a="distance", b="editing"))
  • 145. Example (rows: prefixes of DISTANCE; columns: prefixes of EDITING)
       E  D  I  T  I  N  G
    0  1  2  3  4  5  6  7
 D  1  1  1  2  3  4  5  6
 I  2  2  2  1  2  3  4  5
 S  3  3  3  2  2  3  4  5
 T  4  4  4  3  2  3  4  5
 A  5  5  5  4  3  3  4  5
 N  6  6  6  5  4  4  3  4
 C  7  7  7  6  5  5  4  4
 E  8  7  8  7  6  6  5  5
  • 147. Brute Force
Recursively construct an alignment column by column.
Then note that, to extend the partially constructed alignment optimally, one only needs to know the lengths of the already used prefixes of A and B.
  • 152. Reconstructing a Solution
To reconstruct a solution, we go back from the cell (n, m) to the cell (0, 0).
If ED(i, j) = ED(i − 1, j) + 1, then there exists an optimal alignment whose last column is a deletion.
If ED(i, j) = ED(i, j − 1) + 1, then there exists an optimal alignment whose last column is an insertion.
If ED(i, j) = ED(i − 1, j − 1) + diff(A[i − 1], B[j − 1]), then the last column is a match (if A[i − 1] = B[j − 1]) or a mismatch (if A[i − 1] ̸= B[j − 1]).
  • 162. Example (tracing back through the table; arrows for (mis)match, insertion, deletion omitted)
       E  D  I  T  I  N  G
    0  1  2  3  4  5  6  7
 D  1  1  1  2  3  4  5  6
 I  2  2  2  1  2  3  4  5
 S  3  3  3  2  2  3  4  5
 T  4  4  4  3  2  3  4  5
 A  5  5  5  4  3  3  4  5
 N  6  6  6  5  4  4  3  4
 C  7  7  7  6  5  5  4  4
 E  8  7  8  7  6  6  5  5
Reconstructed alignment:
 -  D  I  S  T  A  N  C  E
 E  D  I  -  T  I  N  -  G
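The traceback described above can be sketched in code. This is a sketch, not from the slides: the helper rebuilds the table exactly as in the iterative algorithm, and all function and variable names are mine:

```python
def edit_distance_table(a, b):
    # Same table as in the iterative algorithm: T[i][j] = ED(i, j).
    T = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        T[i][0] = i
    for j in range(len(b) + 1):
        T[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            T[i][j] = min(T[i - 1][j] + 1, T[i][j - 1] + 1,
                          T[i - 1][j - 1] + diff)
    return T

def reconstruct_alignment(a, b):
    T = edit_distance_table(a, b)
    top, bottom = [], []          # top row: a with gaps; bottom row: b with gaps
    i, j = len(a), len(b)
    while i > 0 or j > 0:
        diff = 1
        if i > 0 and j > 0:
            diff = 0 if a[i - 1] == b[j - 1] else 1
        if i > 0 and j > 0 and T[i][j] == T[i - 1][j - 1] + diff:
            top.append(a[i - 1]); bottom.append(b[j - 1])   # match/mismatch
            i, j = i - 1, j - 1
        elif i > 0 and T[i][j] == T[i - 1][j] + 1:          # deletion
            top.append(a[i - 1]); bottom.append("-")
            i -= 1
        else:                                               # insertion
            top.append("-"); bottom.append(b[j - 1])
            j -= 1
    return "".join(reversed(top)), "".join(reversed(bottom))

print(reconstruct_alignment("distance", "editing"))
```

When several recurrence branches tie, any of them may be followed; this sketch prefers the diagonal move, so it may print a different (equally optimal) alignment than the one on the slide.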
  • 165. Saving Space
When filling in the matrix it is enough to keep only the current column and the previous column.
Thus, one can compute the edit distance of two given strings A[1 . . . n] and B[1 . . . m] in time O(nm) and space O(min{n, m}).
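A sketch of this space saving (names are mine): keep two rows instead of the whole table, and swap the strings first so the kept rows have length min{n, m} + 1:

```python
def edit_distance(a, b):
    if len(a) < len(b):
        a, b = b, a                 # make b the shorter string: O(min(n, m)) space
    prev = list(range(len(b) + 1))  # row 0 of the table
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)    # row i; cur[0] = ED(i, 0) = i
        for j in range(1, len(b) + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + diff)
        prev = cur
    return prev[len(b)]

print(edit_distance("editing", "distance"))  # prints 5
```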
  • 167. Reconstructing a Solution However we need the whole table to find an actual alignment (we trace an alignment from the bottom right corner to the top left corner) There exists an algorithm constructing an optimal alignment in time O(nm) and space O(n + m) (Hirschberg’s algorithm)
  • 168. Weighted Edit Distance The cost of insertions, deletions, and substitutions is not necessarily identical Spell checking: some substitutions are more likely than others Biology: some mutations are more likely than others
  • 169. Generalized Recurrence Relation
ED(i, j) = min { ED(i, j − 1) + inscost(B[j − 1]), ED(i − 1, j) + delcost(A[i − 1]), ED(i − 1, j − 1) + substcost(A[i − 1], B[j − 1]) }
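A sketch of this generalization: the three cost functions are parameters supplied by the caller (their names follow the recurrence; the function itself is not from the slides). With unit costs it reduces to plain edit distance:

```python
def weighted_edit_distance(a, b, inscost, delcost, substcost):
    T = [[0.0] * (len(b) + 1) for _ in range(len(a) + 1)]
    # Base cases now accumulate per-character costs instead of counts.
    for i in range(1, len(a) + 1):
        T[i][0] = T[i - 1][0] + delcost(a[i - 1])
    for j in range(1, len(b) + 1):
        T[0][j] = T[0][j - 1] + inscost(b[j - 1])

    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            T[i][j] = min(T[i][j - 1] + inscost(b[j - 1]),
                          T[i - 1][j] + delcost(a[i - 1]),
                          T[i - 1][j - 1] + substcost(a[i - 1], b[j - 1]))
    return T[len(a)][len(b)]

# With unit costs this is plain edit distance.
print(weighted_edit_distance("editing", "distance",
                             inscost=lambda c: 1,
                             delcost=lambda c: 1,
                             substcost=lambda x, y: 0 if x == y else 1))
```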
  • 171. Knapsack Problem Goal Maximize value ($) while limiting total weight (kg)
  • 175. Applications Classical problem in combinatorial optimization with applications in resource allocation, cryptography, planning Weights and values may mean various resources (to be maximized or limited): Select a set of TV commercials (each commercial has duration and cost) so that the total revenue is maximal while the total length does not exceed the length of the available time slot Purchase computers for a data center to achieve the maximal performance under limited budget
  • 181. Problem Variations
fractional knapsack: can take fractions of items; solved by a greedy algorithm
discrete knapsack: each item is either taken or not; greedy does not work for discrete knapsack! We will design a dynamic programming solution
discrete knapsack, with repetitions: unlimited quantities of each item
discrete knapsack, without repetitions: one of each item
  • 185. Example (items of weight/value 6/$30, 3/$14, 4/$16, 2/$9; W = 10)
without repetitions: 6 ($30) + 4 ($16), total $46
with repetitions: 6 ($30) + 2 ($9) + 2 ($9), total $48
fractional: 6 ($30) + 3 ($14) + 1 unit of the 2/$9 item ($4.5), total $48.5
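For contrast, the greedy algorithm that solves the fractional variant can be sketched as follows (a sketch, not from the slides): take items in decreasing order of value per unit of weight, splitting the last one if needed:

```python
def fractional_knapsack(W, w, v):
    # Sort item indices by value-per-weight ratio, best first.
    order = sorted(range(len(w)), key=lambda i: v[i] / w[i], reverse=True)
    total = 0.0
    for i in order:
        take = min(w[i], W)          # take as much of item i as still fits
        total += take * v[i] / w[i]
        W -= take
        if W == 0:
            break
    return total

print(fractional_knapsack(W=10, w=[6, 3, 4, 2], v=[30, 14, 16, 9]))  # prints 48.5
```

This matches the $48.5 in the example above; the same greedy choice fails on the discrete variants, which is why they need dynamic programming.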
  • 186. Without repetitions: one of each item With repetitions: unlimited quantities
  • 187. Knapsack with repetitions problem Input: Weights w0, . . . , wn−1 and values v0, . . . , vn−1 of n items; total weight W (vi’s, wi’s, and W are non-negative integers). Output: The maximum value of items whose weight does not exceed W . Each item can be used any number of times.
  • 189. Analyzing an Optimal Solution
Consider an optimal solution and an item of weight wi in it.
If we take this item out, then we get an optimal solution for a knapsack of total weight W − wi.
  • 193. Subproblems
Let value(u) be the maximum value of a knapsack of weight u.
value(u) = max over i with wi ≤ u of { value(u − wi) + vi }
Base case: value(0) = 0
This recurrence relation is transformed into a recursive algorithm in a straightforward way.
  • 194. Recursive Algorithm

T = dict()

def knapsack(w, v, u):
    if u not in T:
        T[u] = 0
        for i in range(len(w)):
            if w[i] <= u:
                T[u] = max(T[u], knapsack(w, v, u - w[i]) + v[i])

    return T[u]

print(knapsack(w=[6, 3, 4, 2], v=[30, 14, 16, 9], u=10))
  • 196. Recursive into Iterative As usual, one can transform a recursive algorithm into an iterative one For this, we gradually fill in an array T: T[u] = value(u)
  • 197. Iterative Algorithm

def knapsack(W, w, v):
    T = [0] * (W + 1)

    for u in range(1, W + 1):
        for i in range(len(w)):
            if w[i] <= u:
                T[u] = max(T[u], T[u - w[i]] + v[i])

    return T[W]

print(knapsack(W=10, w=[6, 3, 4, 2], v=[30, 14, 16, 9]))
  • 200. Example: W = 10, items 6/$30, 3/$14, 4/$16, 2/$9
u:         0  1  2   3   4   5   6   7   8   9  10
value(u):  0  0  9  14  18  23  30  32  39  44  48
  • 202. Subproblems Revisited Another way of arriving at subproblems: optimizing brute force solution Populate a list of used items one by one
  • 203. Brute Force: Knapsack with Repetitions

def knapsack(W, w, v, items):
    weight = sum(w[i] for i in items)
    value = sum(v[i] for i in items)

    for i in range(len(w)):
        if weight + w[i] <= W:
            value = max(value, knapsack(W, w, v, items + [i]))

    return value

print(knapsack(W=10, w=[6, 3, 4, 2], v=[30, 14, 16, 9], items=[]))
  • 205. Subproblems It remains to notice that the only important thing for extending the current set of items is the weight of this set One then replaces items by their weight in the list of parameters
  • 207. Without repetitions: one of each item With repetitions: unlimited quantities
  • 208. Knapsack without repetitions problem Input: Weights w0, . . . , wn−1 and values v0, . . . , vn−1 of n items; total weight W (vi’s, wi’s, and W are non-negative integers). Output: The maximum value of items whose weight does not exceed W . Each item can be used at most once.
  • 214. Subproblems
If the last item is taken into an optimal solution, then what is left is an optimal solution for a knapsack of total weight W − wn−1 using items 0, 1, . . . , n − 2.
If the last item is not used, then the whole knapsack must be filled in optimally with items 0, 1, . . . , n − 2.
  • 217. Subproblems For 0 ≤ u ≤ W and 0 ≤ i ≤ n, value(u, i) is the maximum value achievable using a knapsack of weight u and the first i items. Base case: value(u, 0) = 0, value(0, i) = 0 For i > 0, the item i − 1 is either used or not: value(u, i) is equal to max{value(u−wi−1, i−1)+vi−1, value(u, i−1)}
  • 218. Recursive Algorithm

T = dict()

def knapsack(w, v, u, i):
    if (u, i) not in T:
        if i == 0:
            T[u, i] = 0
        else:
            T[u, i] = knapsack(w, v, u, i - 1)
            if u >= w[i - 1]:
                T[u, i] = max(T[u, i],
                              knapsack(w, v, u - w[i - 1], i - 1) + v[i - 1])

    return T[u, i]

print(knapsack(w=[6, 3, 4, 2], v=[30, 14, 16, 9], u=10, i=4))
  • 219. Iterative Algorithm

def knapsack(W, w, v):
    T = [[None] * (len(w) + 1) for _ in range(W + 1)]

    for u in range(W + 1):
        T[u][0] = 0

    for i in range(1, len(w) + 1):
        for u in range(W + 1):
            T[u][i] = T[u][i - 1]
            if u >= w[i - 1]:
                T[u][i] = max(T[u][i],
                              T[u - w[i - 1]][i - 1] + v[i - 1])

    return T[W][len(w)]

print(knapsack(W=10, w=[6, 3, 4, 2], v=[30, 14, 16, 9]))
  • 222. Analysis Running time: O(nW ) Space: O(nW ) Space can be improved to O(W ) in the iterative version: instead of storing the whole table, store the current column and the previous one
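One common way to realize this O(W) space bound is a sketch along the following lines (not from the slides): a single array plays the role of the current column, and iterating u downwards ensures the values read for item i still belong to the previous column, so each item is used at most once:

```python
def knapsack(W, w, v):
    T = [0] * (W + 1)                      # T[u] = best value for weight u
    for i in range(len(w)):
        # Go from high u to low u: T[u - w[i]] has not yet been
        # updated for item i, so item i is taken at most once.
        for u in range(W, w[i] - 1, -1):
            T[u] = max(T[u], T[u - w[i]] + v[i])
    return T[W]

print(knapsack(W=10, w=[6, 3, 4, 2], v=[30, 14, 16, 9]))  # prints 46
```

(Iterating u upwards instead would turn this into the with-repetitions algorithm.)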
  • 226. Reconstructing a Solution
As it usually happens, an optimal solution can be unwound by analyzing the computed solutions to subproblems.
Start with u = W, i = n.
If value(u, i) = value(u, i − 1), then item i − 1 is not taken. Update i to i − 1.
Otherwise value(u, i) = value(u − wi−1, i − 1) + vi−1 and item i − 1 is taken. Update i to i − 1 and u to u − wi−1.
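This unwinding can be sketched as follows (the table is built exactly as in the iterative algorithm; the function name and the returned item list are my additions):

```python
def knapsack_with_items(W, w, v):
    n = len(w)
    # T[u][i] = value(u, i), filled as in the iterative algorithm.
    T = [[0] * (n + 1) for _ in range(W + 1)]
    for i in range(1, n + 1):
        for u in range(W + 1):
            T[u][i] = T[u][i - 1]
            if u >= w[i - 1]:
                T[u][i] = max(T[u][i], T[u - w[i - 1]][i - 1] + v[i - 1])

    # Unwind: start at (W, n) and walk back to i = 0.
    items = []
    u, i = W, n
    while i > 0:
        if T[u][i] == T[u][i - 1]:   # item i - 1 is not taken
            i -= 1
        else:                        # item i - 1 is taken
            items.append(i - 1)
            u -= w[i - 1]
            i -= 1
    return T[W][n], sorted(items)

print(knapsack_with_items(W=10, w=[6, 3, 4, 2], v=[30, 14, 16, 9]))
```

On the running example this reports value 46 achieved by the items of weights 6 and 4.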
  • 228. Subproblems Revisited How to implement a brute force solution for the knapsack without repetitions problem? Process items one by one. For each item, either take into a bag or not
  • 229. Brute Force: Knapsack without Repetitions

def knapsack(W, w, v, items, last):
    weight = sum(w[i] for i in items)

    if last == len(w) - 1:
        return sum(v[i] for i in items)

    value = knapsack(W, w, v, items, last + 1)
    if weight + w[last + 1] <= W:
        items.append(last + 1)
        value = max(value, knapsack(W, w, v, items, last + 1))
        items.pop()

    return value

print(knapsack(W=10, w=[6, 3, 4, 2],
               v=[30, 14, 16, 9],
               items=[], last=-1))
  • 232. Recursive vs Iterative If all subproblems must be solved then an iterative algorithm is usually faster since it has no recursion overhead There are cases however when one does not need to solve all subproblems and the knapsack problem is a good example: assume that W and all wi’s are multiples of 100; then value(w) is not needed if w is not divisible by 100
  • 236. Polynomial Time?
The running time O(nW) is not polynomial since the input size is proportional to log W, but not W.
In other words, the running time is O(n · 2^(log W)).
E.g., for W = 10 345 970 345 617 824 751 (twenty digits only!) the algorithm needs roughly 10^20 basic operations.
Solving the knapsack problem in truly polynomial time is the essence of the P vs NP problem, the most important open problem in Computer Science (with a bounty of $1M).
  • 238. Chain matrix multiplication Input: Chain of n matrices A0, . . . , An−1 to be multiplied. Output: An order of multiplication minimizing the total cost of multiplication.
  • 242. Clarifications Denote the sizes of matrices A0, . . . , An−1 by m0 × m1, m1 × m2, . . . , mn−1 × mn respectively. I.e., the size of Ai is mi × mi+1 Matrix multiplication is not commutative (in general, A × B ̸= B × A), but it is associative: A × (B × C) = (A × B) × C Thus A × B × C × D can be computed, e.g., as (A × B) × (C × D) or (A × (B × C)) × D The cost of multiplying two matrices of size p × q and q × r is pqr
  • 243. Example: A × ((B × C) × D), where A is 50 × 20, B is 20 × 1, C is 1 × 10, D is 10 × 100
  • 246. Example: A × ((B × C) × D) A × B × C × D 50 × 100 cost: 20 · 1 · 10 + 20 · 10 · 100 + 50 · 20 · 100 = 120 200
  • 247. Example: (A × B) × (C × D) A 50 × 20 B 20 × 1 C 1 × 10 D 10 × 100 × × × cost:
  • 248. Example: (A × B) × (C × D) A × B 50 × 1 C 1 × 10 D 10 × 100 × × cost: 50 · 20 · 1
  • 249. Example: (A × B) × (C × D) A × B 50 × 1 C × D 1 × 100 × cost: 50 · 20 · 1 + 1 · 10 · 100
  • 250. Example: (A × B) × (C × D) A × B × C × D 50 × 100 cost: 50 · 20 · 1 + 1 · 10 · 100 + 50 · 1 · 100 = 7 000
  • 251. Order as a Full Binary Tree: each order of multiplication corresponds to a full binary tree whose leaves are the matrices A, B, C, D; e.g. ((A × B) × C) × D, A × ((B × C) × D), (A × (B × C)) × D
  • 252. Analyzing an Optimal Tree: in an optimal tree for A0, . . . , An−1, each subtree computes the product of a contiguous segment Ap, . . . , Aq for some p ≤ q
  • 255. Subproblems Let M(i, j) be the minimum cost of computing Ai × · · · × Aj−1 Then M(i, j) = min i<k<j {M(i, k)+M(k, j)+mi ·mk ·mj} Base case: M(i, i + 1) = 0
  • 256. Recursive Algorithm
T = dict()

def matrix_mult(m, i, j):
    if (i, j) not in T:
        if j == i + 1:
            T[i, j] = 0
        else:
            T[i, j] = float("inf")
            for k in range(i + 1, j):
                T[i, j] = min(T[i, j],
                              matrix_mult(m, i, k) +
                              matrix_mult(m, k, j) +
                              m[i] * m[k] * m[j])
    return T[i, j]

print(matrix_mult(m=[50, 20, 1, 10, 100], i=0, j=4))
  • 257. Converting to an Iterative Algorithm We want to solve subproblems going from smaller size subproblems to larger size ones The size of a subproblem is the number of matrices to be multiplied: j − i A possible order: process subproblems by increasing size s = j − i, i.e., fill the table diagonal by diagonal
  • 258. Iterative Algorithm
def matrix_mult(m):
    n = len(m) - 1
    T = [[float("inf")] * (n + 1) for _ in range(n + 1)]

    for i in range(n):
        T[i][i + 1] = 0

    for s in range(2, n + 1):
        for i in range(n - s + 1):
            j = i + s
            for k in range(i + 1, j):
                T[i][j] = min(T[i][j],
                              T[i][k] + T[k][j] +
                              m[i] * m[k] * m[j])

    return T[0][n]

print(matrix_mult(m=[50, 20, 1, 10, 100]))
  • 261. Final Remarks Running time: O(n³) To unwind a solution, go from the cell (0, n) down to a cell (i, i + 1) Brute force search: recursively enumerate all possible trees
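One way to unwind a solution, sketched here as a hypothetical extension of the iterative algorithm (the `best_k` table and the function names are not from the slides): store, for each cell (i, j), the split point k that achieved the minimum, then recurse from (0, n) down to the base cells (i, i + 1):

```python
def matrix_mult_order(m):
    """Returns (minimal cost, fully parenthesized product as a string).

    Matrix i has size m[i] x m[i+1] and is printed as A0, A1, ...
    """
    n = len(m) - 1
    T = [[float("inf")] * (n + 1) for _ in range(n + 1)]
    best_k = [[None] * (n + 1) for _ in range(n + 1)]

    for i in range(n):
        T[i][i + 1] = 0

    for s in range(2, n + 1):
        for i in range(n - s + 1):
            j = i + s
            for k in range(i + 1, j):
                c = T[i][k] + T[k][j] + m[i] * m[k] * m[j]
                if c < T[i][j]:
                    T[i][j] = c
                    best_k[i][j] = k  # remember the winning split

    def unwind(i, j):
        if j == i + 1:
            return "A{}".format(i)
        k = best_k[i][j]
        return "({} x {})".format(unwind(i, k), unwind(k, j))

    return T[0][n], unwind(0, n)

cost, order = matrix_mult_order([50, 20, 1, 10, 100])
print(cost, order)  # 7000 ((A0 x A1) x (A2 x A3))
```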
  • 263. Step 1 (the most important step) Define subproblems and write down a recurrence relation (with a base case) either by analyzing the structure of an optimal solution, or by optimizing a brute force solution
  • 264. Subproblems: Review 1 Longest increasing subsequence: LIS(i) is the length of the longest increasing subsequence ending at element A[i] 2 Edit distance: ED(i, j) is the edit distance between prefixes of length i and j 3 Knapsack: K(w) is the optimal value of a knapsack of total weight w 4 Chain matrix multiplication: M(i, j) is the optimal cost of multiplying matrices Ai, . . . , Aj−1
  • 265. Step 2 Convert a recurrence relation into a recursive algorithm: store a solution to each subproblem in a table before solving a subproblem check whether its solution is already stored in the table
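Applied to the Fibonacci example from the beginning of the lecture, Step 2 turns the exponential-time recursion into a linear-time one (a sketch; here the table is a plain dictionary):

```python
T = dict()  # table: n -> F(n)

def fib(n):
    if n not in T:                        # check the table first
        if n <= 1:
            T[n] = n                      # base cases F(0)=0, F(1)=1
        else:
            T[n] = fib(n - 1) + fib(n - 2)
    return T[n]

print(fib(50))  # 12586269025, computed with O(n) recursive calls
```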
  • 266. Step 3 Convert a recursive algorithm into an iterative algorithm: initialize the table go from smaller subproblems to larger ones specify an order of subproblems
  • 267. Step 4 Prove an upper bound on the running time. Usually the product of the number of subproblems and the time needed to solve a subproblem is a reasonable estimate.
  • 268. Step 5 Uncover a solution
  • 269. Step 6 Exploit the regular structure of the table to check whether space can be saved
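For instance, the iterative Fibonacci table only ever reads its last two cells, so the whole table can be replaced by two variables (a sketch of the Step 6 idea on the lecture's warm-up example):

```python
def fib(n):
    prev, cur = 0, 1                      # F(0), F(1)
    for _ in range(n):
        prev, cur = cur, prev + cur       # slide the two-cell window
    return prev                            # prev is now F(n)

print(fib(10))  # 55
```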
  • 270. Recursive vs Iterative Advantages of iterative approach: No recursion overhead May allow saving space by exploiting a regular structure of the table Advantages of recursive approach: May be faster if not all the subproblems need to be solved An order on subproblems is implicit