Introduction to Algorithms
6.046J/18.401J

LECTURE 2
Asymptotic Notation
• O-, Ω-, and Θ-notation
Recurrences
• Substitution method
• Iterating the recurrence
• Recursion tree
• Master method

Prof. Erik Demaine
September 12, 2005   Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson
Asymptotic notation

O-notation (upper bounds):
We write f(n) = O(g(n)) if there exist constants
c > 0, n₀ > 0 such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n₀.

EXAMPLE: 2n² = O(n³)   (c = 1, n₀ = 2)

(The notation relates functions, not values, and the “=” here
is a funny, “one-way” equality.)
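The witnesses in the example can be spot-checked numerically; a minimal sketch (the helper `is_O_witness` is our own, not from the lecture):

```python
def is_O_witness(f, g, c, n0, n_max=10_000):
    """Empirically check 0 <= f(n) <= c*g(n) for n0 <= n <= n_max."""
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))

# The slide's witnesses c = 1, n0 = 2 for 2n^2 = O(n^3):
print(is_O_witness(lambda n: 2 * n**2, lambda n: n**3, c=1, n0=2))  # True
# n0 = 1 would fail, since 2*1^2 = 2 > 1 = 1^3:
print(is_O_witness(lambda n: 2 * n**2, lambda n: n**3, c=1, n0=1))  # False
```

Of course a finite check is not a proof; here the inequality 2n² ≤ n³ is simply n ≥ 2.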
Set definition of O-notation

O(g(n)) = { f(n) : there exist constants c > 0, n₀ > 0
            such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n₀ }

EXAMPLE: 2n² ∈ O(n³)
(Logicians: λn.2n² ∈ O(λn.n³), but it’s convenient to be
sloppy, as long as we understand what’s really going on.)
Macro substitution

Convention: A set in a formula represents an anonymous
function in the set.

EXAMPLE: f(n) = n³ + O(n²)
means f(n) = n³ + h(n) for some h(n) ∈ O(n²).

EXAMPLE: n² + O(n) = O(n²)
means for any f(n) ∈ O(n):
n² + f(n) = h(n) for some h(n) ∈ O(n²).
Ω-notation (lower bounds)

O-notation is an upper-bound notation. It makes no sense
to say f(n) is at least O(n²).

Ω(g(n)) = { f(n) : there exist constants c > 0, n₀ > 0
            such that 0 ≤ c g(n) ≤ f(n) for all n ≥ n₀ }

EXAMPLE: √n = Ω(lg n)   (c = 1, n₀ = 16)
Θ-notation (tight bounds)

Θ(g(n)) = O(g(n)) ∩ Ω(g(n))

EXAMPLE: ½n² − 2n = Θ(n²)
ο-notation and ω-notation

O-notation and Ω-notation are like ≤ and ≥.
o-notation and ω-notation are like < and >.

o(g(n)) = { f(n) : for any constant c > 0, there is a
            constant n₀ > 0 such that 0 ≤ f(n) < c g(n)
            for all n ≥ n₀ }

EXAMPLE: 2n² = o(n³)   (n₀ = 2/c)

ω(g(n)) = { f(n) : for any constant c > 0, there is a
            constant n₀ > 0 such that 0 ≤ c g(n) < f(n)
            for all n ≥ n₀ }

EXAMPLE: √n = ω(lg n)   (n₀ = 1+1/c)
Solving recurrences

• The analysis of merge sort from Lecture 1
  required us to solve a recurrence.
• Solving recurrences is like solving integrals,
  differential equations, etc.
    o Learn a few tricks.
• Lecture 3: Applications of recurrences to
  divide-and-conquer algorithms.
Substitution method

The most general method:
1. Guess the form of the solution.
2. Verify by induction.
3. Solve for constants.

EXAMPLE: T(n) = 4T(n/2) + n
• [Assume that T(1) = Θ(1).]
• Guess O(n³). (Prove O and Ω separately.)
• Assume that T(k) ≤ ck³ for k < n.
• Prove T(n) ≤ cn³ by induction.
Example of substitution

T(n) = 4T(n/2) + n
     ≤ 4c(n/2)³ + n
     = (c/2)n³ + n
     = cn³ − ((c/2)n³ − n)   [ desired − residual ]
     ≤ cn³
whenever the residual (c/2)n³ − n ≥ 0, for example,
if c ≥ 2 and n ≥ 1.
Example (continued)

• We must also handle the initial conditions,
  that is, ground the induction with base cases.
• Base: T(n) = Θ(1) for all n < n₀, where n₀
  is a suitable constant.
• For 1 ≤ n < n₀, we have “Θ(1)” ≤ cn³, if we
  pick c big enough.

This bound is not tight!
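A quick numeric sketch of why the cubic bound is loose, assuming a base case T(1) = 1 (the slides leave the base constant unspecified): the inductive bound T(n) ≤ 2n³ holds, but the ratio T(n)/n³ shrinks toward 0, so the true growth is slower.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 4T(n/2) + n with T(1) = 1, for n a power of 2."""
    return 1 if n == 1 else 4 * T(n // 2) + n

# The inductive bound T(n) <= c*n^3 with c = 2 holds ...
assert all(T(2**k) <= 2 * (2**k) ** 3 for k in range(1, 15))
# ... but it is far from tight: T(n)/n^3 -> 0 as n grows.
print([round(T(2**k) / (2**k) ** 3, 4) for k in (4, 8, 12)])
```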
A tighter upper bound?

We shall prove that T(n) = O(n²).
Assume that T(k) ≤ ck² for k < n:
T(n) = 4T(n/2) + n
     ≤ 4c(n/2)² + n
     = cn² + n
     = O(n²)   Wrong! We must prove the I.H.
     = cn² − (−n)   [ desired − residual ]
     ≤ cn² for no choice of c > 0. Lose!
A tighter upper bound!

IDEA: Strengthen the inductive hypothesis.
• Subtract a low-order term.
Inductive hypothesis: T(k) ≤ c₁k² − c₂k for k < n.
T(n) = 4T(n/2) + n
     = 4(c₁(n/2)² − c₂(n/2)) + n
     = c₁n² − 2c₂n + n
     = c₁n² − c₂n − (c₂n − n)
     ≤ c₁n² − c₂n if c₂ ≥ 1.
Pick c₁ big enough to handle the initial conditions.
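For the assumed base case T(1) = 1, the strengthened hypothesis is in fact exact with c₁ = 2, c₂ = 1, i.e. T(n) = 2n² − n on powers of 2, which a short check confirms:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 4T(n/2) + n with T(1) = 1, for n a power of 2."""
    return 1 if n == 1 else 4 * T(n // 2) + n

# With c1 = 2, c2 = 1 the hypothesis T(n) <= c1*n^2 - c2*n holds
# with equality for this base case: T(n) = 2n^2 - n exactly.
for k in range(15):
    n = 2**k
    assert T(n) == 2 * n * n - n
```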
Recursion-tree method

• A recursion tree models the costs (time) of a
  recursive execution of an algorithm.
• The recursion-tree method can be unreliable,
  just like any method that uses ellipses (…).
• The recursion-tree method promotes intuition,
  however.
• The recursion-tree method is good for
  generating guesses for the substitution method.
Example of recursion tree

Solve T(n) = T(n/4) + T(n/2) + n²:

                      n²
            (n/4)²            (n/2)²
        (n/16)²  (n/8)²   (n/8)²  (n/4)²
           …                         …
                     Θ(1)

Level sums: n², (5/16)n², (5/16)²n² = (25/256)n², …

Total = n² (1 + 5/16 + (5/16)² + (5/16)³ + ⋯)
      = Θ(n²)   [geometric series]
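The geometric-series bound n²/(1 − 5/16) = (16/11)n² can be sanity-checked by evaluating the recurrence directly; rounding the subproblem sizes down and taking T = 1 below 2 are our assumptions, not the lecture's:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = T(n/4) + T(n/2) + n^2 with T(n) = 1 for n < 2,
    rounding subproblem sizes down."""
    return 1 if n < 2 else T(n // 4) + T(n // 2) + n * n

# The series gives T(n) <= (16/11) n^2 up to lower-order terms
# from the base cases, so T(n)/n^2 stays comfortably below 1.5.
for n in (1 << 8, 1 << 12, 1 << 16):
    assert T(n) <= 1.5 * n * n
```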
The master method

The master method applies to recurrences of the form
        T(n) = a T(n/b) + f(n) ,
where a ≥ 1, b > 1, and f is asymptotically positive.
Three common cases

Compare f(n) with n^(log_b a):
1. f(n) = O(n^(log_b a − ε)) for some constant ε > 0.
   • f(n) grows polynomially slower than n^(log_b a)
     (by an n^ε factor).
   Solution: T(n) = Θ(n^(log_b a)).
2. f(n) = Θ(n^(log_b a) lg^k n) for some constant k ≥ 0.
   • f(n) and n^(log_b a) grow at similar rates.
   Solution: T(n) = Θ(n^(log_b a) lg^(k+1) n).
Three common cases (cont.)

Compare f(n) with n^(log_b a):
3. f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0.
   • f(n) grows polynomially faster than n^(log_b a)
     (by an n^ε factor),
   and f(n) satisfies the regularity condition that
   a f(n/b) ≤ c f(n) for some constant c < 1.
   Solution: T(n) = Θ(f(n)).
Examples

EX. T(n) = 4T(n/2) + n
    a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n.
    CASE 1: f(n) = O(n^(2 − ε)) for ε = 1.
    ∴ T(n) = Θ(n²).

EX. T(n) = 4T(n/2) + n²
    a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n².
    CASE 2: f(n) = Θ(n² lg⁰ n), that is, k = 0.
    ∴ T(n) = Θ(n² lg n).
Examples

EX. T(n) = 4T(n/2) + n³
    a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n³.
    CASE 3: f(n) = Ω(n^(2 + ε)) for ε = 1,
    and 4(n/2)³ ≤ cn³ (reg. cond.) for c = 1/2.
    ∴ T(n) = Θ(n³).

EX. T(n) = 4T(n/2) + n²/lg n
    a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n²/lg n.
    Master method does not apply. In particular,
    for every constant ε > 0, we have n^ε = ω(lg n).
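The case analysis above can be sketched as a small dispatcher. `master_case` is a hypothetical helper of our own, restricted to driving functions of the form f(n) = n^d lg^k n; it mirrors the three cases exactly as stated on the slides:

```python
import math

def master_case(a, b, d, k=0.0):
    """Classify T(n) = a*T(n/b) + f(n) for f(n) = n^d * lg^k(n),
    following the three cases above. A sketch: only this
    polynomial-times-polylog form of f is handled."""
    crit = math.log(a, b)  # log_b(a), the critical exponent
    if d < crit:
        return f"Case 1: Theta(n^{crit:g})"
    if d == crit and k >= 0:
        return f"Case 2: Theta(n^{crit:g} * lg^{k + 1:g} n)"
    if d > crit and k == 0:
        # regularity holds: a*(n/b)^d = (a/b^d)*n^d with a/b^d < 1
        return f"Case 3: Theta(n^{d:g})"
    return "master method does not apply (as stated here)"

print(master_case(4, 2, 1))        # T(n) = 4T(n/2) + n
print(master_case(4, 2, 2))        # T(n) = 4T(n/2) + n^2
print(master_case(4, 2, 3))        # T(n) = 4T(n/2) + n^3
print(master_case(4, 2, 2, k=-1))  # T(n) = 4T(n/2) + n^2/lg n
```

The last call reproduces the fourth example: with k = −1 none of the three cases fires.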
Idea of master theorem

Recursion tree: the root costs f(n) and has a children;
each level-1 node costs f(n/b), each level-2 node costs
f(n/b²), and so on, down to the Τ(1) leaves.

Level sums: f(n), a f(n/b), a² f(n/b²), …
Height: h = log_b n.
#leaves = a^h = a^(log_b n) = n^(log_b a),
so the leaves contribute n^(log_b a) Τ(1) = Θ(n^(log_b a)).

CASE 1: The weight increases geometrically from the root
to the leaves. The leaves hold a constant fraction of the
total weight.                          ⇒ Θ(n^(log_b a))

CASE 2: (k = 0) The weight is approximately the same on
each of the log_b n levels.            ⇒ Θ(n^(log_b a) lg n)

CASE 3: The weight decreases geometrically from the root
to the leaves. The root holds a constant fraction of the
total weight.                          ⇒ Θ(f(n))
Appendix: geometric series

1 + x + x² + ⋯ + xⁿ = (1 − x^(n+1)) / (1 − x)   for x ≠ 1

1 + x + x² + ⋯ = 1 / (1 − x)   for |x| < 1
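Both identities are easy to spot-check numerically (the infinite series approximated by a long partial sum):

```python
def partial_sum(x, n):
    """1 + x + x^2 + ... + x^n."""
    return sum(x**k for k in range(n + 1))

x, n = 0.3, 10
# Finite-sum identity, exact for any x != 1:
assert abs(partial_sum(x, n) - (1 - x**(n + 1)) / (1 - x)) < 1e-12
# |x| < 1 limit, approximated by a long partial sum:
assert abs(partial_sum(x, 500) - 1 / (1 - x)) < 1e-12
```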

Lec2 Algorth

  • 1.
    Introduction to Algorithms 6.046J/18.401J LECTURE 2 Asymptotic Notation • O-, Ω-, and Θ-notation Recurrences • Substitution method • Iterating the recurrence • Recursion tree • Master method Prof. Erik Demaine September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.1
  • 2.
    Asymptotic notation O-notation(upper bounds): We write f(n) = O(g(n)) if there We write f(n) = O(g(n)) if there exist constants c > 0, n00 > 0 such exist constants c > 0, n > 0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n00.. that 0 ≤ f(n) ≤ cg(n) for all n ≥ n September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.2
  • 3.
    Asymptotic notation O-notation(upper bounds): We write f(n) = O(g(n)) if there We write f(n) = O(g(n)) if there exist constants c > 0, n00 > 0 such exist constants c > 0, n > 0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n00.. that 0 ≤ f(n) ≤ cg(n) for all n ≥ n EXAMPLE: 2n2 = O(n3) (c = 1, n0 = 2) September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.3
  • 4.
    Asymptotic notation O-notation(upper bounds): We write f(n) = O(g(n)) if there We write f(n) = O(g(n)) if there exist constants c > 0, n00 > 0 such exist constants c > 0, n > 0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n00.. that 0 ≤ f(n) ≤ cg(n) for all n ≥ n EXAMPLE: 2n2 = O(n3) (c = 1, n0 = 2) functions, not values September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.4
  • 5.
    Asymptotic notation O-notation(upper bounds): We write f(n) = O(g(n)) if there We write f(n) = O(g(n)) if there exist constants c > 0, n00 > 0 such exist constants c > 0, n > 0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n00.. that 0 ≤ f(n) ≤ cg(n) for all n ≥ n EXAMPLE: 2n2 = O(n3) (c = 1, n0 = 2) funny, “one-way” functions, equality not values September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.5
  • 6.
    Set definition ofO-notation O(g(n)) = { f(n) :: there exist constants O(g(n)) = { f(n) there exist constants c > 0, n00 > 0 such c > 0, n > 0 such that 0 ≤ f(n) ≤ cg(n) that 0 ≤ f(n) ≤ cg(n) for all n ≥ n00 } for all n ≥ n } September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.6
  • 7.
    Set definition ofO-notation O(g(n)) = { f(n) :: there exist constants O(g(n)) = { f(n) there exist constants c > 0, n00 > 0 such c > 0, n > 0 such that 0 ≤ f(n) ≤ cg(n) that 0 ≤ f(n) ≤ cg(n) for all n ≥ n00 } for all n ≥ n } EXAMPLE: 2n2 ∈ O(n3) September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.7
  • 8.
    Set definition ofO-notation O(g(n)) = { f(n) :: there exist constants O(g(n)) = { f(n) there exist constants c > 0, n00 > 0 such c > 0, n > 0 such that 0 ≤ f(n) ≤ cg(n) that 0 ≤ f(n) ≤ cg(n) for all n ≥ n00 } for all n ≥ n } EXAMPLE: 2n2 ∈ O(n3) (Logicians: λn.2n2 ∈ O(λn.n3), but it’s convenient to be sloppy, as long as we understand what’s really going on.) September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.8
  • 9.
    Macro substitution Convention: Aset in a formula represents an anonymous function in the set. September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.9
  • 10.
    Macro substitution Convention: Aset in a formula represents an anonymous function in the set. EXAMPLE: f(n) = n3 + O(n2) means f(n) = n3 + h(n) for some h(n) ∈ O(n2) . September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.10
  • 11.
    Macro substitution Convention: Aset in a formula represents an anonymous function in the set. EXAMPLE: n2 + O(n) = O(n2) means for any f(n) ∈ O(n): n2 + f(n) = h(n) for some h(n) ∈ O(n2) . September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.11
  • 12.
    Ω-notation (lower bounds) O-notationis an upper-bound notation. It makes no sense to say f(n) is at least O(n2). September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.12
  • 13.
    Ω-notation (lower bounds) O-notationis an upper-bound notation. It makes no sense to say f(n) is at least O(n2). Ω(g(n)) = { f(n) :: there exist constants Ω(g(n)) = { f(n) there exist constants c > 0, n00 > 0 such c > 0, n > 0 such that 0 ≤ cg(n) ≤ f(n) that 0 ≤ cg(n) ≤ f(n) for all n ≥ n00 } for all n ≥ n } September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.13
  • 14.
    Ω-notation (lower bounds) O-notationis an upper-bound notation. It makes no sense to say f(n) is at least O(n2). Ω(g(n)) = { f(n) :: there exist constants Ω(g(n)) = { f(n) there exist constants c > 0, n00 > 0 such c > 0, n > 0 such that 0 ≤ cg(n) ≤ f(n) that 0 ≤ cg(n) ≤ f(n) for all n ≥ n00 } for all n ≥ n } EXAMPLE: n = Ω(lg n) (c = 1, n0 = 16) September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.14
  • 15.
    Θ-notation (tight bounds) Θ(g(n)) = O(g(n)) ∩ Ω(g(n)) Θ(g(n)) = O(g(n)) ∩ Ω(g(n)) September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.15
  • 16.
    Θ-notation (tight bounds) Θ(g(n)) = O(g(n)) ∩ Ω(g(n)) Θ(g(n)) = O(g(n)) ∩ Ω(g(n)) EXAMPLE: 1 2 n − 2 n = Θ( n ) 2 2 September 12, 2005 Copyright © 2001-5 Erik D. Demaine and Charles E. Leiserson L2.16
o-notation and ω-notation
O-notation and Ω-notation are like ≤ and ≥. o-notation and ω-notation are like < and >.
o(g(n)) = { f(n) : for any constant c > 0, there is a constant n0 > 0
            such that 0 ≤ f(n) < c g(n) for all n ≥ n0 }
EXAMPLE: 2n^2 = o(n^3)   (n0 = 2/c)
ω(g(n)) = { f(n) : for any constant c > 0, there is a constant n0 > 0
            such that 0 ≤ c g(n) < f(n) for all n ≥ n0 }
EXAMPLE: n = ω(lg n)   (n0 = 1 + 1/c)
Solving recurrences
• The analysis of merge sort from Lecture 1 required us to solve a recurrence.
• Solving recurrences is like solving integrals, differential equations, etc.: learn a few tricks.
• Lecture 3: applications of recurrences to divide-and-conquer algorithms.
Substitution method
The most general method:
1. Guess the form of the solution.
2. Verify by induction.
3. Solve for constants.
EXAMPLE: T(n) = 4T(n/2) + n
• [Assume that T(1) = Θ(1).]
• Guess O(n^3). (Prove O and Ω separately.)
• Assume that T(k) ≤ c k^3 for k < n.
• Prove T(n) ≤ c n^3 by induction.
Example of substitution
T(n) = 4T(n/2) + n
     ≤ 4c(n/2)^3 + n
     = (c/2)n^3 + n
     = cn^3 − ((c/2)n^3 − n)     [desired − residual]
     ≤ cn^3
whenever the residual (c/2)n^3 − n ≥ 0, for example, if c ≥ 2 and n ≥ 1.
Example (continued)
• We must also handle the initial conditions, that is, ground the induction with base cases.
• Base: T(n) = Θ(1) for all n < n0, where n0 is a suitable constant.
• For 1 ≤ n < n0, we have "Θ(1)" ≤ cn^3, if we pick c big enough.
This bound is not tight!
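The bound from this proof can be tabulated concretely. In the sketch below, T(1) = 1 is our assumed stand-in for the Θ(1) base case, and we stick to powers of 2 so n/2 involves no rounding.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Assumed base case: T(1) = 1 stands in for Theta(1).
    if n == 1:
        return 1
    return 4 * T(n // 2) + n

# The proof gives T(n) <= c n^3 for c >= 2 and n >= 1.
for k in range(0, 15):
    n = 2 ** k
    assert T(n) <= 2 * n ** 3
```

The slack in this check (T(n) is far below 2n^3 for large n) is exactly why the next slides look for a tighter bound.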
A tighter upper bound?
We shall prove that T(n) = O(n^2).
Assume that T(k) ≤ c k^2 for k < n:
T(n) = 4T(n/2) + n
     ≤ 4c(n/2)^2 + n
     = cn^2 + n
     = O(n^2)    Wrong! We must prove the I.H.
     = cn^2 − (−n)     [desired − residual]
     ≤ cn^2 for no choice of c > 0.  Lose!
A tighter upper bound!
IDEA: Strengthen the inductive hypothesis.
• Subtract a low-order term.
Inductive hypothesis: T(k) ≤ c1 k^2 − c2 k for k < n.
T(n) = 4T(n/2) + n
     = 4(c1(n/2)^2 − c2(n/2)) + n
     = c1 n^2 − 2c2 n + n
     = c1 n^2 − c2 n − (c2 n − n)
     ≤ c1 n^2 − c2 n   if c2 ≥ 1.
Pick c1 big enough to handle the initial conditions.
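The strengthened hypothesis can also be checked numerically. Assuming the base case T(1) = 1 (our stand-in for Θ(1)), the constants c1 = 2 and c2 = 1 work, and for powers of 2 the bound in fact holds with equality, since T(n) = 2n^2 − n solves the recurrence exactly under that base case.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Assumed base case: T(1) = 1 stands in for Theta(1).
    if n == 1:
        return 1
    return 4 * T(n // 2) + n

# Inductive hypothesis T(n) <= c1*n^2 - c2*n with c1 = 2, c2 = 1;
# under this base case the recurrence meets the bound exactly.
for k in range(0, 15):
    n = 2 ** k
    assert T(n) <= 2 * n ** 2 - n
    assert T(n) == 2 * n ** 2 - n
```

Substituting 2n^2 − n back into the recurrence confirms the closed form: 4(2(n/2)^2 − n/2) + n = 2n^2 − 2n + n = 2n^2 − n.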
Recursion-tree method
• A recursion tree models the costs (time) of a recursive execution of an algorithm.
• The recursion-tree method can be unreliable, just like any method that uses ellipses (…).
• The recursion-tree method promotes intuition, however.
• The recursion-tree method is good for generating guesses for the substitution method.
Example of recursion tree
Solve T(n) = T(n/4) + T(n/2) + n^2:

              n^2                            level sum: n^2
      (n/4)^2       (n/2)^2                             (5/16) n^2
  (n/16)^2 (n/8)^2 (n/8)^2 (n/4)^2                      (25/256) n^2
    …                                                   …
  Θ(1)

Total = n^2 (1 + 5/16 + (5/16)^2 + (5/16)^3 + …)
      = Θ(n^2)                                (geometric series)
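The Θ(n^2) guess from the tree can be corroborated by evaluating the recurrence directly. The sketch below uses floors and an assumed base case T(n) = 1 for n < 4 standing in for Θ(1); the geometric series bounds the total by n^2 / (1 − 5/16) = (16/11) n^2, so the ratio T(n)/n^2 should stay near a small constant.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Assumed Theta(1) base case.
    if n < 4:
        return 1
    return T(n // 4) + T(n // 2) + n * n

# Level sums form a geometric series with ratio 5/16, so
# n^2 <= T(n) <= roughly (16/11) n^2; 2 n^2 is a safe cap.
for n in (10, 100, 1_000, 10_000, 100_000):
    assert n * n <= T(n) <= 2 * n * n
```

This is exactly the workflow the slides recommend: use the tree for the guess, then verify (by substitution, or numerically as a sanity check).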
The master method
The master method applies to recurrences of the form
  T(n) = a T(n/b) + f(n),
where a ≥ 1, b > 1, and f is asymptotically positive.
Three common cases
Compare f(n) with n^(log_b a):
1. f(n) = O(n^(log_b a − ε)) for some constant ε > 0.
   • f(n) grows polynomially slower than n^(log_b a) (by an n^ε factor).
   Solution: T(n) = Θ(n^(log_b a)).
2. f(n) = Θ(n^(log_b a) lg^k n) for some constant k ≥ 0.
   • f(n) and n^(log_b a) grow at similar rates.
   Solution: T(n) = Θ(n^(log_b a) lg^(k+1) n).
Three common cases (cont.)
Compare f(n) with n^(log_b a):
3. f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0.
   • f(n) grows polynomially faster than n^(log_b a) (by an n^ε factor), and
     f(n) satisfies the regularity condition that a f(n/b) ≤ c f(n)
     for some constant c < 1.
   Solution: T(n) = Θ(f(n)).
Examples
EX. T(n) = 4T(n/2) + n
  a = 4, b = 2 ⇒ n^(log_b a) = n^2; f(n) = n.
  CASE 1: f(n) = O(n^(2 − ε)) for ε = 1.  ∴ T(n) = Θ(n^2).
EX. T(n) = 4T(n/2) + n^2
  a = 4, b = 2 ⇒ n^(log_b a) = n^2; f(n) = n^2.
  CASE 2: f(n) = Θ(n^2 lg^0 n), that is, k = 0.  ∴ T(n) = Θ(n^2 lg n).
Examples
EX. T(n) = 4T(n/2) + n^3
  a = 4, b = 2 ⇒ n^(log_b a) = n^2; f(n) = n^3.
  CASE 3: f(n) = Ω(n^(2 + ε)) for ε = 1, and 4(n/2)^3 ≤ cn^3 (reg. cond.) for c = 1/2.
  ∴ T(n) = Θ(n^3).
EX. T(n) = 4T(n/2) + n^2/lg n
  a = 4, b = 2 ⇒ n^(log_b a) = n^2; f(n) = n^2/lg n.
  Master method does not apply. In particular, for every constant ε > 0, we have n^ε = ω(lg n).
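For the purely polynomial driving functions on these slides, f(n) = n^k, the three cases reduce to comparing k against log_b a, which can be sketched as a tiny classifier. This is a simplification: the real theorem allows any asymptotically positive f, and Case 2 here covers only a zero lg-exponent.

```python
import math

def master(a, b, k):
    """Classify T(n) = a T(n/b) + n^k by the (polynomial-f) master method."""
    crit = math.log2(a) / math.log2(b)   # critical exponent log_b a
    if k < crit:
        return f"Theta(n^{crit:g})"        # Case 1: f polynomially slower
    if k == crit:
        return f"Theta(n^{crit:g} lg n)"   # Case 2 with lg-exponent 0
    return f"Theta(n^{k:g})"               # Case 3: regularity holds for n^k

assert master(4, 2, 1) == "Theta(n^2)"       # 4T(n/2) + n
assert master(4, 2, 2) == "Theta(n^2 lg n)"  # 4T(n/2) + n^2
assert master(4, 2, 3) == "Theta(n^3)"       # 4T(n/2) + n^3
```

The last example on the slide, f(n) = n^2/lg n, deliberately falls outside this classifier: it is not polynomially separated from n^2, so no case applies.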
Idea of master theorem
Recursion tree (height h = log_b n):

  f(n)                                       level sum: f(n)
  f(n/b)  f(n/b)  …  f(n/b)   (a children)              a f(n/b)
  f(n/b^2)  …  f(n/b^2)                                 a^2 f(n/b^2)
  …                                                     …
  T(1)

#leaves = a^h = a^(log_b n) = n^(log_b a),
so the leaf level costs n^(log_b a) · T(1) = Θ(n^(log_b a)).

CASE 1: The weight increases geometrically from the root to the leaves.
The leaves hold a constant fraction of the total weight.  ⇒ Θ(n^(log_b a))
CASE 2 (k = 0): The weight is approximately the same on each of the
log_b n levels.  ⇒ Θ(n^(log_b a) lg n)
CASE 3: The weight decreases geometrically from the root to the leaves.
The root holds a constant fraction of the total weight.  ⇒ Θ(f(n))
Appendix: geometric series
1 + x + x^2 + … + x^n = (1 − x^(n+1)) / (1 − x)   for x ≠ 1
1 + x + x^2 + …       = 1 / (1 − x)               for |x| < 1
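Both identities check out numerically; the comparisons use a tolerance since the arithmetic is floating point, and the second check reuses the ratio 5/16 from the recursion-tree example.

```python
# Finite sum: 1 + x + ... + x^n = (1 - x^(n+1)) / (1 - x), here x = 1/2, n = 10.
finite = sum(0.5 ** k for k in range(11))
assert abs(finite - (1 - 0.5 ** 11) / (1 - 0.5)) < 1e-12

# Infinite sum: 1 + x + x^2 + ... = 1 / (1 - x) for |x| < 1.
x = 5 / 16
partial = sum(x ** k for k in range(200))   # tail is negligibly small
assert abs(partial - 1 / (1 - x)) < 1e-12
```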