2. Big-O and Big-Ω
• Many students think Big-O means “worst case” and Big-Ω
means “best case.” They are WRONG! Both O(x) and Ω(x)
describe the running time on the worst possible input!
• What do O(x) and Ω(x) mean?
• How do we show an algorithm is O(x) or Ω(x)?
Algorithm is O(x): the algorithm takes at most c*x steps
to run on the worst possible input.
Algorithm is Ω(x): the algorithm takes at least c*x steps
to run on the worst possible input.
To show an algorithm is O(x): show that for every input,
the algorithm takes at most c*x steps.
To show an algorithm is Ω(x): show that there is an input
that makes the algorithm take at least c*x steps.
3. Analyzing algorithms using O, Ω, ϴ
• First, we analyze some easy algorithms.
• We can easily find upper and lower bounds
on the running times of these algorithms by
using basic arithmetic.
4. Example 1
for i = 1..100
print *
• Running time: 100*c, for some constant c per iteration, which is ϴ(1) (why?)
• O(1), Ω(1) => ϴ(1)
5. Example 2
for i = 1..x
print *
• Running time: x*c
• O(x), Ω(x) => ϴ(x)
• In fact, this is also Ω(1) and O(x^2), but these
are very weak statements.
6. Example 3
for i = 1..x
for j = 1..x
print *
• O(x^2), Ω(x^2) => ϴ(x^2)
7. Example 4a
for i = 1..x
for j = 1..i
print *
• Big-O: i is always ≤ x, so j always iterates up to at most
x, so this takes at most x*x steps, which is O(x^2).
• Big-Ω:
• When i=1, the loop over j performs “print *” once.
• When i=2, the loop over j performs “print *” twice. […]
• So, “print *” is performed 1+2+3+…+x times.
• Easy summation formula: 1+2+3+…+x = x(x+1)/2
• x(x+1)/2 = x^2/2+x/2 ≥ (1/2) x^2, which is Ω(x^2)
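The step count and the summation formula can be checked empirically; here is a minimal Python sketch (the helper name `count_prints` is ours, standing in for the loop nest):

```python
def count_prints(x):
    """Count how many times 'print *' runs in the Example 4a loop nest."""
    steps = 0
    for i in range(1, x + 1):        # for i = 1..x
        for j in range(1, i + 1):    # for j = 1..i
            steps += 1               # stands in for "print *"
    return steps

# Matches the formula 1 + 2 + ... + x = x(x+1)/2 for every x tried:
for x in (1, 10, 100):
    assert count_prints(x) == x * (x + 1) // 2
```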
8. Example 4b
for i = 1..x
for j = 1..i
print *
• Big-Ω:
– Useful trick: consider the iterations of the
first loop for which i >= x/2.
– In these iterations, the second loop iterates
from j=1 to at least j=x/2.
– Therefore, the number of steps performed
in these iterations is at least x/2*x/2 = (1/4)
x^2, which is Ω(x^2).
– The time to perform ALL iterations is even
more than this so, of course, the whole
algorithm must take Ω(x^2) time.
– Therefore, this function is ϴ(x^2).
The loop nest performs at least as many steps as:
for i = x/2..x
  for j = 1..x/2
    print *
(here i ≥ x/2 and j ≤ x/2, giving at least x/2 * x/2 steps)
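The halving trick can be sanity-checked in Python; this sketch (helper names are ours) confirms that the restricted nest never exceeds the original one and is itself at least (x/2)^2:

```python
def full_steps(x):
    """Steps of the original nest: for i = 1..x, for j = 1..i."""
    return sum(i for i in range(1, x + 1))

def restricted_steps(x):
    """Steps of the restricted nest: for i = x/2..x, for j = 1..x/2."""
    return sum(x // 2 for _ in range(x // 2, x + 1))

# original >= restricted >= (x/2)^2, so the original is Omega(x^2):
for x in (10, 100, 1000):
    assert full_steps(x) >= restricted_steps(x) >= (x // 2) ** 2
```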
9. Example 5
for i = 1..x
for j = 1..i
for k = 1..j
print *
• Big-O: i, j, k ≤ x, so we know total number of steps is ≤ x*x*x = O(x^3).
• Big-Ω:
– Consider only the iterations of the first loop from x/2 up to x.
– For these values of i, the second loop always iterates up to at least x/2.
– Consider only the iterations of the second loop from x/4 up to x/2.
– For these values of j, the third loop always iterates up to at least x/4.
– For these values of i, j and k, the function performs “print *” at least:
x/2 * x/4 * x/4 = x^3/32 times, which is Ω(x^3).
– Therefore, this function is ϴ(x^3).
Example 5: restricting a loop range only removes iterations, so
the original nest performs at least as many “print *” steps as
each nest in this chain:
for i = x/2..x
  for j = 1..i
    for k = 1..j
      print *
≥
for i = x/2..x
  for j = 1..x/2
    for k = 1..j
      print *
≥
for i = x/2..x
  for j = x/4..x/2
    for k = 1..j
      print *
≥
for i = x/2..x
  for j = x/4..x/2
    for k = 1..x/4
      print *
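The chain of bounds can be verified numerically; a small Python sketch (the exact-count helper name `triple_steps` is ours):

```python
def triple_steps(x):
    """Exact number of 'print *' executions in the Example 5 loop nest."""
    steps = 0
    for i in range(1, x + 1):
        for j in range(1, i + 1):
            for k in range(1, j + 1):
                steps += 1
    return steps

# The count sits between the lower bound x^3/32 and the upper bound x^3:
for x in (8, 32, 64):
    s = triple_steps(x)
    assert x**3 / 32 <= s <= x**3
```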
10. Analyzing algorithms using O, Ω, ϴ
• Now, we are going to analyze algorithms
whose running times depend on their inputs.
• For some inputs, they terminate very quickly.
• For other inputs, they can be slow.
11. Example 1
LinearSearch(A[1..n], key)
for i = 1..n
if A[i] = key then return true
return false
• Might take 1 iteration… might take many...
• Big-O: at most n iterations, constant work per
iteration, so O(n)
• Big-Ω: can you find a bad input that makes the
algorithm take Ω(n) time?
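One bad-enough input is an array that does not contain the key at all, so every element gets examined. Here is a minimal Python version of LinearSearch with an iteration counter (the counter is our addition):

```python
def linear_search(A, key):
    """LinearSearch from the slide; also returns the number of iterations."""
    steps = 0
    for value in A:
        steps += 1
        if value == key:
            return True, steps
    return False, steps

n = 1000
A = list(range(n))
found, steps = linear_search(A, -1)   # key absent => all n elements examined
assert not found and steps == n      # Omega(n) on this input
```

Note that any input placing the key in the last position would do just as well; the bound only needs some input that is bad enough.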
12. Example 2
int A[n]
for i = 1..n
binarySearch(A, i)
• Remember: binarySearch is O(lg n).
• O(n lg n)
• How about Ω?
• Maybe it’s Ω(n lg n), but it’s hard to tell.
• Not enough to know binary search is Ω(lg n), because the
worst-case input might make only one invocation of binary
search take c*lg n steps (and the rest might finish in 1 step).
• Would need to find a particular input that causes the
algorithm to take a total of c*n*lg n steps.
• In fact, binarySearch(A, i) takes c*lg n steps when i is not in A,
so this function takes c*n*lg n steps if A[1..n] doesn’t contain
any number in {1,2,…,n}.
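To see this concretely, here is a Python sketch: a standard iterative binary search that counts its loop iterations, run on a sorted array of negative numbers, so every search for i in {1..n} fails and performs about lg n iterations. The counting and names are ours.

```python
import math

def binary_search(A, key):
    """Standard iterative binary search; returns (found, loop iterations)."""
    lo, hi, steps = 0, len(A) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if A[mid] == key:
            return True, steps
        elif A[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, steps

n = 1024
A = list(range(-n, 0))                 # sorted; contains nothing in {1..n}
total = sum(binary_search(A, i)[1] for i in range(1, n + 1))
assert total >= n * math.log2(n)       # the whole loop really is Omega(n lg n)
```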
13. Example 3
• Your boss tells you that your group needs to
develop a sorting algorithm for the company's
web server that runs in O(n lg n) time.
• If you can't prove that it can sort any input of
length n in O(n lg n) time, it's no good to him. He
doesn't want to lose orders because of a slow
server.
• Your colleague tells you that he's been working
on this piece of code, which he believes should fit
the bill. He says he thinks it can sort any input of
length n in O(n lg n) time, but he doesn't know
how to prove this.
14. Example 3 continued
WeirdSort(A[1..n])
  last := n
  sorted := false
  while not sorted do
    sorted := true
    for j := 1 to last-1 do
      if A[j] > A[j+1] then
        swap A[j] and A[j+1]
        sorted := false
    last := last-1
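A runnable Python translation of WeirdSort with a comparison counter (the counter is our addition), checked on both the fast already-sorted input and the reverse-sorted Ω(n^2) input:

```python
def weird_sort(A):
    """WeirdSort from the slide (a bubble sort); returns comparisons made."""
    last = len(A) - 1           # 0-based index of the last unsorted slot
    comparisons = 0
    is_sorted = False
    while not is_sorted:
        is_sorted = True
        for j in range(last):   # j = 0 .. last-1
            comparisons += 1
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
                is_sorted = False
        last -= 1
    return comparisons

n = 200
worst = list(range(n, 0, -1))   # reverse sorted: the Omega(n^2) input
best = list(range(n))           # already sorted: one pass suffices
c_worst = weird_sort(worst)
c_best = weird_sort(best)
assert worst == sorted(worst)
assert c_worst == n * (n - 1) // 2   # 1 + 2 + ... + (n-1) comparisons
assert c_best == n - 1               # a single scan with no swaps
```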
Editor's Notes
What do you think about this function? Big-O of what? Big-Omega of what? Any guesses?
What do you think about this function? Big-O of what? Big-Omega of what? Any guesses? So what does this computation look like? If we draw a circle for each time the “print *” line is executed, we get…
What do you think about this function? Big-O of what? Big-Omega of what? Any guesses? What does this computation look like? Let’s again draw a circle for each time the “print *” line is executed. In the first iteration of the first loop, print * is executed x times, so we get a full row. In the second iteration, we get another row, and so on…
What do you think about this function? Big-O of what? Big-Omega of what? Any guesses?
Let’s look at this function from another angle. Of course, since it’s so simple, there is an easy summation formula, which we saw on the last slide. However, the technique I’m about to show you applies to more complicated examples that we don’t know summation formulae for. I’ll show you a more complex example on the next slide. So, let’s think about what this computation looks like. In the first iteration of the first loop, i=1, so j goes from 1 to 1, and print * is executed only once. In the second iteration of the first loop, i=2, so j goes from 1 to 2, and print * is executed twice. In the third iteration, print * is executed three times, and so on. So, we get a triangle. *narrate the points on the slide*
Now, we look at a more complicated version of the example we just saw. What do you think about this function? Big-O of what? Big-Omega of what? Any guesses?
What do you guys think about this function? Big-O of what? Big-Omega of what? *You don’t need to find the worst input. Any input that is bad enough to give omega(n) will do. For instance, any array where key is in position n, n/2 or n/7. All will yield omega(n).*
*First spend a small amount of time understanding the control flow through the loops, and what the inner loop is doing.*
Loop invariant for the WHILE loop:
At the end of iteration i, A[last+1..n] contains the i largest elements of A in sorted order.
Show T(n) is O(n^2):
• For every n, and every input of size n:
– the while loop executes at most n-1 times;
– each iteration of the while loop takes at most cn time;
– total: cn(n-1)+d time.
Show T(n) is Omega(n):
• There are fast inputs, e.g., an already-sorted array => the while loop executes only once.
• But this is a weak statement.
Show T(n) is Omega(n^2):
• Identify an input BAD ENOUGH to get Omega(n^2) time, e.g., an array in reverse sorted order.
• In each iteration of the while loop, the largest item in A[1..last] “bubbles” to A[last+1], which takes time c*last, since the largest item is in A[1].