2. Why is it important?
● Efficiency Matters
○ Efficient algorithms save time and resources.
○ Essential for applications with large datasets or real-time processing.
● Predicting Performance
○ Analysis enables predictions of algorithm behavior with increasing input sizes.
○ Critical for anticipating and addressing performance issues.
● Comparative Evaluation
○ Facilitates the comparison of algorithms solving the same problem.
○ Identifies trade-offs and helps choose the most suitable solution.
● Optimization Opportunities
○ Identifying bottlenecks leads to optimization opportunities.
○ Enhancing algorithm efficiency improves overall system performance.
4. How to evaluate the complexity of an algorithm?
In computer science, we analyze the time or space complexity of algorithms by considering their
input size and using asymptotic notations.
● Time complexity
○ Counting Operations: Analyze the number of basic operations (such as comparisons,
assignments, and arithmetic operations) executed by the algorithm as a function of the
input size.
● Space complexity
○ Memory Usage: Analyze the amount of memory space required by the algorithm in
terms of the input size.
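As an illustrative sketch (the function and snippet are my own, not from the slides), counting operations and memory for a simple array sum looks like this:

```javascript
// Sums an array of numbers.
// Time complexity: the loop body runs once per element, so the number of
// basic operations grows linearly with the input size -> O(N) time.
// Space complexity: only a fixed number of variables (sum, i) are used,
// regardless of the input size -> O(1) extra space.
function sumArray(numbers) {
  let sum = 0;                               // 1 assignment
  for (let i = 0; i < numbers.length; i++) { // N iterations
    sum += numbers[i];                       // 1 addition + 1 assignment each
  }
  return sum;
}
```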
5. Asymptotic notations
● Does not measure the actual running time.
● Takes into account only the most significant terms and ignores the others.
● Expresses how the time or space required grows as the input size tends to infinity.
6. Most commonly used notations
● Omega notation - (Ω)
○ A lower bound, often associated with the best-case scenario.
○ The minimum amount of time or space an algorithm may need to solve a problem.
● Theta notation - (Θ)
○ A tight bound, bounding the growth rate from both above and below; it is often informally associated with the average-case scenario.
○ The amount of time or space an algorithm typically needs to solve a problem.
● Big O notation - (O)
○ An upper bound, often associated with the worst-case scenario.
○ The maximum amount of time or space an algorithm may need to solve a problem.
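A minimal sketch (my own example, not from the slides) where the three notations diverge is linear search: the best case finds the target immediately, the worst case scans the whole array.

```javascript
// Linear search: returns the index of target, or -1 if it is absent.
// Best case, Omega(1): target is the first element, one comparison.
// Worst case, O(N): target is last or absent, N comparisons.
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}
```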
8. How to evaluate an algorithm complexity in terms of big O notation?
1. Count the number of operations performed by each program segment in terms of the input size.
2. Ignore the constant factors.
3. Take into account only the most dominant terms.
4. Express the result using big O notation.
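The four steps above can be sketched on a small, hypothetical example that counts all ordered pairs in an array:

```javascript
// Counts all ordered pairs (i, j) over an array.
// Step 1: the outer loop runs N times, the inner loop N times per outer
//         iteration, with a constant amount of work each -> T(N) = c * N^2 + d.
// Step 2: ignore the constants c and d.
// Step 3: keep only the dominant term, N^2.
// Step 4: the result is O(N^2).
function countPairs(arr) {
  let count = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      count++;
    }
  }
  return count;
}
```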
15. Analysing bubble sort algorithm
1. Implement the algorithm.
2. Count the number of operations in terms of the input size.
3. Construct a function with all operations.
4. Discard constants and minor terms.
5. Express the worst-case scenario in big O notation.
https://en.wikipedia.org/wiki/Bubble_sort
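For step 1, a minimal bubble sort sketch (this implementation is my own; the slides' exact code, and therefore the precise operation counts used in T(N) below, may differ):

```javascript
// Bubble sort: repeatedly sweeps the array, swapping adjacent elements
// that are out of order. In the worst case each of the ~N outer passes
// performs ~N comparisons and swaps -> O(N^2) time.
function bubbleSort(arr) {
  const a = [...arr]; // copy so the input array is not mutated
  for (let i = 0; i < a.length; i++) {
    // After each pass, the largest remaining element has "bubbled"
    // to the end, so the inner loop can stop i positions earlier.
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]]; // swap adjacent elements
      }
    }
  }
  return a;
}
```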
18. Construct a time complexity function of bubble sort in terms of the input size
T(N) = N + 3N * (2N + N + N + N)
=> T(N) = N + 3N * 5N
19. Discard constants and minor terms of the bubble sort time complexity function.
T(N) = N + 3N * 5N
=> T(N) = N + 15N^2
=> T(N) = N^2 (the minor term N and the constant 15 are discarded)
20. The worst-case scenario of bubble sort in big O notation.
T(N) = N^2
=> O(N^2)
The function has O(N^2) time complexity or quadratic time complexity.
22. Objects
● Set/get a property value - O(1) in the vast majority of cases
● Copy an object ({ ...obj }) - O(N), where N is the number of keys
● Get the list of object keys, values, or entries - O(N)
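A quick sketch of these object operations (the variable names are illustrative):

```javascript
const user = { name: 'Ada', role: 'admin' };

// Get/set a property value - O(1) in the vast majority of cases.
user.role = 'editor';
const role = user.role;

// Copy an object with spread - O(N), where N is the number of keys.
const copy = { ...user };

// Listing keys, values, or entries visits every key - O(N).
const keys = Object.keys(user);       // ['name', 'role']
const entries = Object.entries(user); // [['name', 'Ada'], ['role', 'editor']]
```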
24. Strings
● Get a character at index - O(1)
● String concatenation - O(N) by the book, but modern JavaScript engines often optimize it to effectively O(1)
● includes/indexOf - O(N)
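A short sketch of these string operations (the example values are my own):

```javascript
const word = 'complexity';

// Get a character at an index - O(1).
const first = word[0]; // 'c'

// includes / indexOf scan the string - O(N).
const hasPlex = word.includes('plex'); // true
const idx = word.indexOf('x');         // 6

// Concatenation - textbook O(N), but engines often optimize it.
const phrase = word + ' matters';
```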
25. Sets / Maps
● Add an element - O(1) on average
● Check whether an element/key is in the set/map - O(1) on average
● Remove an element - O(1) on average
● Construct set/map from array - O(N)
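A minimal sketch of these Set/Map operations (the example data is my own):

```javascript
// Construct a set from an array - O(N); duplicates collapse.
const ids = new Set([1, 2, 3, 2]); // -> Set {1, 2, 3}

// Add an element - O(1) on average.
ids.add(4);

// Membership check - O(1) on average (vs. O(N) for Array.prototype.includes).
const hasTwo = ids.has(2); // true

// Remove an element - O(1) on average.
ids.delete(1);

// Maps behave the same way for their keys.
const roles = new Map([['ada', 'admin']]);
roles.set('bob', 'viewer');       // O(1) on average
const adaRole = roles.get('ada'); // 'admin'
```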
27. Pros of Big O Notation
● Simplicity and Abstraction
○ Big O notation provides a simple and abstract way to describe the efficiency of an algorithm
without getting into the details of hardware, programming language, or constant factors.
● Comparative Analysis
○ It allows for easy comparison between different algorithms by focusing on their growth rates.
This makes it easier to choose the most efficient algorithm for a specific problem.
● Standardized Terminology
○ It provides a standardized and widely accepted way of expressing the time and space
complexity of algorithms, fostering a common language for discussing and analyzing
algorithms.
28. Cons of Big O Notation
● Constant Factors Ignored
○ Big O notation does not consider constant factors or lower-order terms, which means it may
not capture the actual performance differences between algorithms with similar growth rates
for small input sizes.
● Non-Uniform Growth Rates
○ It treats all operations as equal, but in reality, different operations can have different execution
times. This can lead to inaccuracies in predicting the actual performance of an algorithm.
● Does Not Consider Parallelism
○ With the increasing use of parallel processing and multi-core architectures, Big O notation
may not accurately reflect the true efficiency of an algorithm in a parallel computing
environment.