270-1/02-divide-and-conquer_handout.pdf
CS 270
Algorithms
Oliver
Kullmann
Growth of
Functions
Divide-and-
Conquer
Min-Max-
Problem
Tutorial
Week 2
Divide and Conquer
1 Growth of Functions
2 Divide-and-Conquer
Min-Max-Problem
3 Tutorial
General remarks
First we consider an important tool for the analysis of
algorithms: Big-Oh.
Then we introduce an important algorithmic paradigm:
Divide-and-Conquer.
We conclude by presenting and analysing a simple example.
Reading from CLRS for week 2
Chapter 2, Section 3
Chapter 3
Growth of Functions
A way to describe behaviour of functions in the limit. We
are studying asymptotic efficiency.
Describe growth of functions.
Focus on what’s important by abstracting away low-order
terms and constant factors.
How we indicate running times of algorithms.
A way to compare “sizes” of functions:
O corresponds to ≤
Ω corresponds to ≥
Θ corresponds to =
We consider only functions f , g : N → R≥0.
O-Notation
O(g(n)) is the set of all functions f(n) for which there are
positive constants c and n0 such that
f(n) ≤ c·g(n) for all n ≥ n0.
[Figure: f(n) lies below c·g(n) for all n ≥ n0]
g(n) is an asymptotic upper bound for f(n).
If f(n) ∈ O(g(n)), we write f(n) = O(g(n)) (this will be made
precise soon).
O-Notation Examples
2n^2 = O(n^3), with c = 1 and n0 = 2.
Examples of functions in O(n^2):
n^2
n^2 + n
n^2 + 1000n
1000n^2 + 1000n
Also:
n
n/1000
n^1.999999
n^2 / lg lg lg n
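The witnesses c and n0 can be checked numerically over a finite range. A minimal sketch (not a proof, and the helper bounded_above is ours, not from the notes):

```python
# Sanity check: 2n^2 = O(n^3) with c = 1 and n0 = 2
# means 2n^2 <= 1 * n^3 for all n >= 2.
def bounded_above(f, g, c, n0, limit=10_000):
    """Check f(n) <= c*g(n) for every n with n0 <= n <= limit."""
    return all(f(n) <= c * g(n) for n in range(n0, limit + 1))

print(bounded_above(lambda n: 2 * n * n, lambda n: n ** 3, c=1, n0=2))  # True
# With n0 = 1 the bound fails, since 2*1^2 = 2 > 1 = 1^3:
print(bounded_above(lambda n: 2 * n * n, lambda n: n ** 3, c=1, n0=1))  # False
```

Of course a finite check cannot establish the bound for all n; it only helps to spot wrong witness constants quickly.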
Ω-Notation
Ω(g(n)) is the set of all functions f(n) for which there are
positive constants c and n0 such that
f(n) ≥ c·g(n) for all n ≥ n0.
[Figure: f(n) lies above c·g(n) for all n ≥ n0]
g(n) is an asymptotic lower bound for f(n).
Ω-Notation Examples
√n = Ω(lg n), with c = 1 and n0 = 16.
Examples of functions in Ω(n^2):
n^2
n^2 + n
n^2 − n
1000n^2 + 1000n
1000n^2 − 1000n
Also:
n^3
n^2.0000001
n^2 lg lg lg n
2^(2^n)
Θ-Notation
Θ(g(n)) is the set of all functions f(n) for which there are
positive constants c1, c2 and n0 such that
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
[Figure: f(n) lies between c1·g(n) and c2·g(n) for all n ≥ n0]
g(n) is an asymptotically tight bound for f(n).
Asymptotic notation in equations
When on the right-hand side:
Θ(n^2) stands for some anonymous function in the set Θ(n^2).
2n^2 + 3n + 1 = 2n^2 + Θ(n) means 2n^2 + 3n + 1 = 2n^2 + f(n)
for some f(n) ∈ Θ(n). In particular, f(n) = 3n + 1.
When on the left-hand side:
No matter how the anonymous functions are chosen on the
left-hand side, there is a way to choose the anonymous functions
on the right-hand side to make the equation valid.
Interpret 2n^2 + Θ(n) = Θ(n^2) as meaning: for all functions
f(n) ∈ Θ(n), there exists a function g(n) ∈ Θ(n^2) such that
2n^2 + f(n) = g(n).
Asymptotic notation chained together
2n^2 + 3n + 1 = 2n^2 + Θ(n) = Θ(n^2)
Interpretation:
First equation: There exists f(n) ∈ Θ(n) such that
2n^2 + 3n + 1 = 2n^2 + f(n).
Second equation: For all g(n) ∈ Θ(n) (such as the f(n)
used to make the first equation hold), there exists
h(n) ∈ Θ(n^2) such that 2n^2 + g(n) = h(n).
Note
What has been said of “Θ” on this and the previous slide also
applies to “O” and “Ω”.
Example Analysis
Insertion-Sort(A)
1 for j = 2 to A.length
2 key = A[j]
3 // Insert A[j] into sorted sequence A[1 . . j−1].
4 i = j−1
5 while i > 0 and A[i] > key
6 A[i+1] = A[i]
7 i = i−1
8 A[i+1] = key
The for -loop on line 1 is executed O(n) times; and each
statement costs constant time, except for the while -loop on
lines 5-7 which costs O(n).
Thus the overall runtime is: O(n) · O(n) = O(n^2).
Note: In fact, as seen last week, the worst-case runtime is Θ(n^2).
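The pseudocode above translates directly into Python; a sketch (0-based indexing, so j starts at 1 rather than 2):

```python
def insertion_sort(a):
    """In-place insertion sort, mirroring the pseudocode above."""
    for j in range(1, len(a)):        # pseudocode line 1: j = 2 to A.length
        key = a[j]                    # line 2
        i = j - 1                     # line 4
        while i >= 0 and a[i] > key:  # lines 5-7: shift larger elements right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                # line 8
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

The inner while-loop runs at most j times, which is where the O(n) per-iteration bound, and hence the O(n^2) total, comes from.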
Divide-and-Conquer Approach
There are many ways to design algorithms.
For example, insertion sort is incremental: having sorted
A[1 . . j−1], place A[j] correctly, so that A[1 . . j] is sorted.
Divide-and-Conquer is another common approach:
Divide the problem into a number of subproblems that are
smaller instances of the same problem.
Conquer the subproblems by solving them recursively.
Base case: If the subproblems are small enough, just solve
them by brute force.
Combine the subproblem solutions to give a solution to the
original problem.
Naive Min-Max
Find minimum and maximum of a list A of n>0 numbers.
Naive-Min-Max(A)
1 least = A[1]
2 for i = 2 to A.length
3 if A[i] < least
4 least = A[i]
5 greatest = A[1]
6 for i = 2 to A.length
7 if A[i] > greatest
8 greatest = A[i]
9 return (least, greatest)
The for-loop on line 2 makes n−1 comparisons, as does the
for-loop on line 6, making a total of 2n−2 comparisons.
Can we do better? Yes!
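A Python transcription of Naive-Min-Max, instrumented with a comparison counter to confirm the 2n − 2 count (the counter is our addition, not part of the pseudocode):

```python
def naive_min_max(a):
    """Return (min, max, comparisons); makes exactly 2n - 2 element comparisons."""
    comparisons = 0
    least = a[0]
    for x in a[1:]:                 # n - 1 comparisons (lines 2-4)
        comparisons += 1
        if x < least:
            least = x
    greatest = a[0]
    for x in a[1:]:                 # another n - 1 comparisons (lines 6-8)
        comparisons += 1
        if x > greatest:
            greatest = x
    return least, greatest, comparisons

print(naive_min_max([3, 1, 4, 1, 5, 9, 2, 6]))  # (1, 9, 14), since 2*8 - 2 = 14
```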
Divide-and-Conquer Min-Max
As we are dealing with subproblems, we state each subproblem
as computing minimum and maximum of a subarray A[p . . q].
Initially, p = 1 and q = A.length, but these values change as we
recurse through subproblems.
To compute minimum and maximum of A[p . . q]:
Divide by splitting into two subarrays A[p . . r] and A[r+1 . . q],
where r is the halfway point of A[p . . q].
Conquer by recursively computing minimum and maximum of
the two subarrays A[p . . r] and A[r+1 . . q].
Combine by computing the overall minimum as the min of the
two recursively computed minima, similar for the overall
maximum.
Min-Max(A, p, q)
1 if p == q
2     return (A[p], A[p])
3 if q == p+1
4     if A[p] < A[q]
5         return (A[p], A[q])
6     else return (A[q], A[p])
7 r = ⌊(p+q)/2⌋
8 (min1, max1) = Min-Max(A, p, r)
9 (min2, max2) = Min-Max(A, r+1, q)
10 return (min(min1, min2), max(max1, max2))
Note
In line 7, r is set to the halfway point of A[p . . q].
n = q − p + 1 is the number of elements from which we compute
the min and max.
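The divide-and-conquer scheme can be sketched in Python (0-based, inclusive indices; the comparison counter, our addition, lets one check T(n) empirically):

```python
def min_max(a, p, q):
    """Min and max of a[p..q] (inclusive, 0-based); also returns the
    number of element comparisons, i.e. T(n) for n = q - p + 1."""
    if p == q:                        # n = 1: no comparison needed
        return a[p], a[p], 0
    if q == p + 1:                    # n = 2: one comparison
        return (a[p], a[q], 1) if a[p] < a[q] else (a[q], a[p], 1)
    r = (p + q) // 2                  # halfway point
    min1, max1, c1 = min_max(a, p, r)
    min2, max2, c2 = min_max(a, r + 1, q)
    # Combine with 2 further comparisons.
    return min(min1, min2), max(max1, max2), c1 + c2 + 2

a = [7, 2, 9, 4, 1, 8, 3, 6]
print(min_max(a, 0, len(a) - 1))      # (1, 9, 10), matching (3/2)*8 - 2
```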
Solving the Min-Max Recurrence
Let T(n) be the number of comparisons made by
Min-Max(A, p, q), where n = q−p+1 is the number of
elements from which we compute the min and max.
Then T(1) = 0, T(2) = 1, and for n > 2:
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2.
Claim
T(n) = (3/2)n − 2 for n = 2^k ≥ 2, i.e., for powers of 2.
Proof.
The proof is by induction on k (using n = 2^k).
Base case: true for k = 1, as T(2^1) = 1 = (3/2)·2^1 − 2.
Induction step: assuming T(2^k) = (3/2)·2^k − 2, we get
T(2^(k+1)) = 2T(2^k) + 2 = 2((3/2)·2^k − 2) + 2 = (3/2)·2^(k+1) − 2.
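The recurrence can also be tabulated directly, confirming the closed form at powers of 2 (a quick numerical check, complementing the induction proof):

```python
from functools import lru_cache
from math import ceil, floor

@lru_cache(maxsize=None)
def T(n):
    """Comparison count of divide-and-conquer Min-Max, via its recurrence."""
    if n == 1:
        return 0
    if n == 2:
        return 1
    return T(ceil(n / 2)) + T(floor(n / 2)) + 2

# The closed form (3/2)n - 2 holds at powers of two:
assert all(T(2 ** k) == 3 * 2 ** k // 2 - 2 for k in range(1, 11))
print(T(6))   # 8, whereas ceil(3*6/2) - 2 = 7
```

The value T(6) = 8 is the one discussed in the tutorial questions on this algorithm.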
Solving the Min-Max Recurrence (cont’d)
Some remarks:
1 If we replace line 7 of the algorithm by r = p+1, then the
resulting number of comparisons T′(n) satisfies
T′(n) = ⌈3n/2⌉ − 2 for all n > 0.
2 For example, T′(6) = 7 whereas T(6) = 8.
3 It can be shown that at least ⌈3n/2⌉ − 2 comparisons are
necessary in the worst case to find the maximum and
minimum of n numbers for any comparison-based
algorithm: this is thus a lower bound on the problem.
4 Hence this (last) algorithm is provably optimal.
10 120n^2 + √n + 99n = Θ(n^2) ? YES
11 sin(n) = O(1) ? YES
Can we improve the Min-Max algorithm?
Determine T(6), the number of comparisons the Min-Max
algorithm performs for 6 elements.
As usual, the argumentation is important (why is this
correct?).
Perhaps the best approach is to run the algorithm (on paper)
on 6 elements.
Notice that the basic parameters of a run do not depend on
the values of the elements, but only on their total number.
Once you have found T(6): is this really optimal?
Unfolding the recursion for Min-Max
We have:
T(n) = 0 if n = 1,
T(n) = 1 if n = 2,
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2 otherwise.
Finding the best min-max algorithm
1 As you can see in the section on the min-max problem, for
some input sizes we can validate the guess T(n) ≈ (3/2)n.
2 One can now try to find a precise general formula for T(n).
3 However, we have T(6) = 8, while this case can be
handled with 7 comparisons. So perhaps we can
find a better algorithm?
4 And that is the case:
1 If n is even, find the min-max for the first two elements
using 1 comparison; if n is odd, find the min-max for the
first element using 0 comparisons.
2 Now iteratively find the min-max of the next two elements
using 1 comparison, and compute the new current min-max
using 2 further comparisons. And so on ....
This yields an algorithm using precisely ⌈3n/2⌉ − 2
comparisons. And this is precisely optimal for all n.
We learn: here divide-and-conquer provided a good stepping
stone to finding a really good algorithm.
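The pairwise scheme in steps 1-2 above can be written out as follows (a sketch; the comparison counter, our addition, confirms the ⌈3n/2⌉ − 2 count):

```python
def pairwise_min_max(a):
    """(min, max) of a non-empty list, with exactly ceil(3n/2) - 2 comparisons."""
    n = len(a)
    comparisons = 0
    if n % 2 == 0:                    # even n: min-max of the first pair, 1 comparison
        lo, hi = (a[0], a[1]) if a[0] < a[1] else (a[1], a[0])
        comparisons, start = 1, 2
    else:                             # odd n: first element alone, 0 comparisons
        lo = hi = a[0]
        start = 1
    for i in range(start, n, 2):      # 3 comparisons per remaining pair:
        # 1 within the pair, then 1 against the current min and 1 against the max
        small, big = (a[i], a[i + 1]) if a[i] < a[i + 1] else (a[i + 1], a[i])
        if small < lo:
            lo = small
        if big > hi:
            hi = big
        comparisons += 3
    return lo, hi, comparisons

print(pairwise_min_max([3, 1, 4, 1, 5, 9]))   # (1, 9, 7), and ceil(3*6/2) - 2 = 7
```

For n = 6 this uses 7 comparisons, beating the divide-and-conquer algorithm's T(6) = 8.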
Designing an algorithm: Median
The median of a sequence of numbers is the “middle value” —
the value in the middle position of the list after sorting.
Can we do better than the obvious algorithm (by sorting),
using divide-and-conquer?
(Here some judgement is needed, that the precise details about
what actually is the “middle value” won’t make a fundamental
difference, and so it’s best to ignore them for the initial phase,
where developing the ideas is of utmost importance.)
First strategy: divide in half
1 Divide the array A of numbers into two parts, B and C, of
equal size (more precisely, nearly equal size, but such details
are best ignored in the beginning).
2 In principle B and C could be anything, but easiest is to let
them be the first half and the second half of A.
3 Compute (that is now the conquer-phase) the medians
mB, mC of the arrays B, C.
4 If mC < mB, then swap B and C.
5 Re-order B and C internally, so that the elements smaller
than the median are on the left, and the larger elements on
the right.
6 Now consider the array of elements from mB to mC (note
that this is again half of the size of A): The median of this
array (computed again in the conquer-phase, recursively) is
the median of A.
Example run
1, 20, 5, 18, 7, 10, 10 | 4, 20, 7, 7, 17, 14, 1, 12
of length 15, partitioned into 7 and 8 elements.
The left median is 10. The right median is 7 or 12; let’s take 7.
Swap, and partition the two parts according to their medians:
4, 7, 1, 7, 20, 17, 14, 12 | 1, 5, 7, 10, 20, 18, 10.
Compute the median of the new middle part
20, 17, 14, 12, 1, 5, 7, 10
which is 10 (the right answer) or 12.
So, it could work (or not).
The recurrence
We are in this phase only interested in the recurrence:
T(n) = 3 · T(n/2) + O(n).
We ignore here all issues about the precise partitioning (and
whether we can get it to work at all!); all that counts here is
the insight into whether we could get a good algorithm!
Next week we develop tools to see immediately
how good this approach could be.
270-1/03-solving-recurrences_handout.pdf
Merge Sort
As we are dealing with subproblems, we state each subproblem
as sorting a subarray A[p . . q]. Initially, p = 1 and q = A.length,
but these values change again as we recurse through subproblems.
To sort A[p . . q]:
Divide by splitting into two subarrays A[p . . r] and
A[r+1 . . q], where r is the halfway point of A[p . . q].
Conquer by recursively sorting the two subarrays A[p . . r] and
A[r+1 . . q].
Combine by merging the two sorted subarrays A[p . . r] and
A[r+1 . . q] to produce a single sorted subarray A[p . . q].
The recursion bottoms out when the subarray has just 1
element, so that it is trivially sorted.
CS 270
Algorithms
Oliver
Kullmann
Divide-and-
Conquer
Merge Sort
Solving
Recurrences
Merge(A, p, r, q) merges the sorted subarrays A[p . . r] and
A[r+1 . . q] into a single sorted subarray in A[p . . q].
We implement it so that it takes Θ(n) time, with
n = q − p + 1 = the number of elements being merged.
Merge(A, p, r, q)
1 n1 = r − p + 1
2 n2 = q − r
3 let L[1 . . n1+1] and R[1 . . n2+1] be new arrays
4 for i = 1 to n1
5 L[i] = A[p+i−1]
6 for j = 1 to n2
7 R[j] = A[r+j]
8 L[n1+1] = R[n2+1] = ∞
9 i = j = 1
10 for k = p to q
11 if L[i] ≤ R[j]
12 A[k] = L[i]
13 i = i+1
14 else A[k] = R[j]
15 j = j+1
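The sentinel-based Merge pseudocode carries over to Python almost line by line, with float('inf') playing the role of ∞ and 0-based, inclusive indices:

```python
def merge(a, p, r, q):
    """Merge sorted a[p..r] and a[r+1..q] (inclusive, 0-based) in Theta(n) time."""
    left = a[p:r + 1] + [float('inf')]       # L with sentinel (lines 3-5, 8)
    right = a[r + 1:q + 1] + [float('inf')]  # R with sentinel (lines 6-8)
    i = j = 0                                # line 9
    for k in range(p, q + 1):                # lines 10-15
        if left[i] <= right[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1

a = [2, 4, 5, 7, 1, 2, 3, 6]
merge(a, 0, 3, 7)
print(a)   # [1, 2, 2, 3, 4, 5, 6, 7]
```

The sentinels mean neither sublist can be exhausted before the loop ends, so no bounds checks are needed inside the loop.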
Analysis of Merge-Sort
The runtime T(n), where n = q−p+1 > 1, satisfies:
T(n) = 2T(n/2) + Θ(n).
We will show that T(n) = Θ(n lg n).
It can be shown (see tutorial-section) that Ω(n lg n)
comparisons are necessary in the worst case to sort n
numbers for any comparison-based algorithm: this is thus
an (asymptotic) lower bound on the problem.
Hence Merge-Sort is provably (asymptotically) optimal.
Analysing divide-and-conquer algorithms
Recall the divide-and-conquer paradigm:
Divide the problem into a number of subproblems that are
smaller instances of the same problem.
Conquer the subproblems by solving them recursively.
Base case: If the subproblems are small enough, just solve
them by brute force.
Combine the subproblem solutions to give a solution to the
original problem.
We use recurrences to characterise the running time of a
divide-and-conquer algorithm. Solving the recurrence gives us
the asymptotic running time.
A recurrence is a function defined in terms of
one or more base cases, and
itself, with smaller arguments.