Time Complexity
Based on Goodrich, Tamassia, and Mount, Data Structures & Algorithms in Python
Department of Computer Science
What is Time Complexity?
• Time complexity is the running time of an algorithm expressed as a function of its input size.
• It is used to estimate an algorithm's efficiency and scalability.
Goals of Time Analysis
• Estimate an algorithm's running time without executing it
• Predict efficiency as the input grows
• Compare performance across algorithms
How Time is Measured
• Count the number of basic operations:
  • Arithmetic (+, -, *, /)
  • Comparisons (==, <, >)
  • Assignments, function calls
• Ignore hardware specifics
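The counting idea above can be sketched in code. This hypothetical helper (the function name and the exact per-iteration count are illustrative choices, not from the deck) sums a list while tallying its own basic operations:

```python
def sum_list(lst):
    """Sum a list while counting basic operations (illustrative only)."""
    ops = 0
    total = 0          # 1 assignment
    ops += 1
    for item in lst:   # runs len(lst) times
        total += item  # 1 addition + 1 assignment per iteration
        ops += 2
    return total, ops

total, ops = sum_list([1, 2, 3, 4])
# total is 10; ops grows linearly with len(lst)
```

The hardware-independent point: whatever a single operation costs on a given machine, the *count* of operations grows the same way with input size.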
Asymptotic Notation
• O(1): Constant time
• O(log n): Logarithmic time
• O(n): Linear time
• O(n log n): Linearithmic time
• O(n^2): Quadratic time
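The deck lists O(log n) but does not show an example for it; binary search is the standard one. A minimal sketch (function name and test values are my own):

```python
def binary_search(sorted_lst, target):
    """Return the index of target in sorted_lst, or -1 if absent.
    Each step halves the search range, giving O(log n) time."""
    lo, hi = 0, len(sorted_lst) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_lst[mid] == target:
            return mid
        elif sorted_lst[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1

binary_search([1, 3, 5, 7, 9], 7)  # → 3
```

Doubling the list length adds only one extra halving step, which is what "logarithmic time" means in practice.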
Example: O(1)
  def get_first(lst):
      return lst[0]
• Always takes the same time regardless of list size
Example: O(n)
  def print_all(lst):
      for item in lst:
          print(item)
• Time grows linearly with input size
Example: O(n^2)
  def print_pairs(lst):
      for i in lst:
          for j in lst:
              print(i, j)
• Nested loops → quadratic time
Best, Worst, Average Cases
• Best case: fewest operations
• Worst case: most operations
• Average case: expected performance over all inputs
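Linear search is a common way to illustrate these cases. In this sketch (the function is my own example, not from the deck), the best case is the target at index 0 and the worst case is the target at the end or absent:

```python
def linear_search(lst, target):
    """Best case: target is first (1 comparison, O(1)).
    Worst case: target is last or absent (n comparisons, O(n))."""
    for i, item in enumerate(lst):
        if item == target:
            return i
    return -1

linear_search([5, 3, 8], 5)  # best case → 0
linear_search([5, 3, 8], 9)  # worst case → -1
```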
Algorithm Growth Comparison
• n = 10 → n^2 = 100
• n = 100 → n^2 = 10,000
• n = 1,000 → n^2 = 1,000,000
• Big-O helps compare scalability
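The table above can be reproduced in a few lines, with linear growth added for contrast (the formatting choices are mine):

```python
# Compare linear and quadratic growth for the n values in the table
for n in (10, 100, 1000):
    print(f"n={n:>5}  O(n)={n:>5,}  O(n^2)={n * n:>10,}")
```

At n = 1,000 the quadratic count is already a thousand times the linear one, which is why quadratic algorithms stop scaling long before linear ones do.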
Time vs Space Trade-Off
• Use more memory to reduce time
• Example: memoization (store results to avoid recomputation)
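Fibonacci is the textbook memoization example; using it here is my choice, not the deck's. Python's standard-library `functools.lru_cache` does the result-storing automatically:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursion recomputes subproblems and is exponential;
    caching each result makes this O(n) time at O(n) extra memory."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(30)  # → 832040, computed with only ~30 distinct calls
```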
Best Practices
• Minimize nested loops
• Use efficient data structures (e.g., sets)
• Avoid repeated work via caching/precomputation
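The "use sets" advice above comes down to lookup cost: `x in a_list` scans every element (O(n)), while `x in a_set` is a hash lookup (O(1) on average). A small sketch with made-up data:

```python
words = ["apple", "banana", "cherry"] * 1000
targets = ["banana", "kiwi"]

# O(n) per lookup when scanning the list:
hits_list = [t for t in targets if t in words]

# O(1) average per lookup after one O(n) conversion to a set:
word_set = set(words)
hits_set = [t for t in targets if t in word_set]

assert hits_list == hits_set == ["banana"]
```

The one-time cost of building the set pays off as soon as there is more than a handful of membership tests.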