3. Algorithm Analysis
The efficiency of an algorithm can be analysed at two different stages, before implementation and after implementation:
• A priori analysis – This is the theoretical analysis of an algorithm. Efficiency is measured by assuming that all other factors, e.g. processor speed, are constant and have no effect on the implementation.
• A posteriori analysis – This is the empirical analysis of an algorithm. The chosen algorithm is implemented in a programming language and then executed on a target machine. In this analysis, actual statistics such as running time and space required are collected.
Algorithm analysis deals with the execution or running time of the various operations involved. The running time of an operation can be defined as the number of computer instructions executed per operation.
4. The complexity of an algorithm is a function describing the efficiency of the algorithm in
terms of the amount of data the algorithm must process. There are two main complexity
measures of the efficiency of an algorithm:
• Time complexity is a function describing the amount of time an algorithm takes in terms of the amount of input to the algorithm.
• Space complexity is a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the algorithm.
5. Time Complexity
Three types of time complexity can be found in the analysis of an algorithm:
• Best case time complexity
• Average case time complexity
• Worst case time complexity
6. Best-case time complexity
The best-case time complexity of an algorithm is a measure of the minimum time that the algorithm
will require. For example, the best case for a simple linear search on a list occurs when the desired
element is the first element of the list.
Worst-case time complexity
The worst-case time complexity of an algorithm is a measure of the maximum time that the
algorithm will require. A worst-case estimate is normally computed because it provides an upper bound for all inputs, including the extreme slowest case. For example, the worst case for a simple linear search on a list occurs when the desired element is at the last position of the list or is not in the list at all.
Average-case time complexity
The average-case time complexity of an algorithm is a measure of the average time taken over all possible instances of the input. Average-case analysis does not provide an upper bound and is sometimes difficult to compute.
Average-case and worst-case time complexity are the most used in algorithm analysis. Best-case time complexity is rarely used, since it provides no guarantee for arbitrary inputs.
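To make these cases concrete, here is a minimal C sketch of the linear search mentioned above (the function name and array contents are illustrative, not from the original notes):

    #include <stdio.h>

    /* Returns the index of key in arr[0..n-1], or -1 if it is absent. */
    int linear_search(const int arr[], int n, int key) {
        for (int i = 0; i < n; i++) {
            if (arr[i] == key)
                return i;    /* found after i + 1 comparisons */
        }
        return -1;           /* not found: all n comparisons made */
    }

    int main(void) {
        int a[] = {4, 8, 15, 16, 23, 42};
        printf("%d\n", linear_search(a, 6, 4));   /* best case: first element */
        printf("%d\n", linear_search(a, 6, 42));  /* worst case: last element */
        printf("%d\n", linear_search(a, 6, 99));  /* worst case: not in the list */
        return 0;
    }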
7. What is Asymptotic Notation?
• Whenever we want to analyse an algorithm, we need to calculate its complexity. But calculating the complexity of an algorithm does not give the exact amount of resource required. So, instead of taking the exact amount of resource, we represent the complexity in a general form (notation) which captures the basic nature of the algorithm, and we use that general form (notation) in the analysis process.
• The asymptotic notation of an algorithm is a mathematical representation of its complexity. We mainly use THREE types of asymptotic notation, as follows:
• Big - Oh (O)
• Big - Omega (Ω)
• Big - Theta (Θ)
8. Big - Oh Notation (O)
• Big - Oh notation is used to define the upper bound of an algorithm in terms of time complexity.
• That means Big - Oh notation always indicates the maximum time required by an algorithm over all input values; in other words, Big - Oh notation describes the worst case of an algorithm's time complexity.
Big - Oh notation can be defined as follows:
• Consider a function f(n) as the time complexity of an algorithm, and let g(n) be its most significant term. If f(n) <= C * g(n) for all n >= n0, for some constants C > 0 and n0 >= 1, then we can represent f(n) as O(g(n)).
f(n) = O(g(n))
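As a quick worked example (illustrative, not from the original slides): take f(n) = 3n + 2 with most significant term g(n) = n. Since 3n + 2 <= 4n for all n >= 2, the definition is satisfied with C = 4 and n0 = 2, so f(n) = O(n).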
9. Big - Omega Notation (Ω)
• Big - Omega notation is used to define the lower bound of an algorithm in terms of time complexity.
• That means Big - Omega notation always indicates the minimum time required by an algorithm over all input values; in other words, Big - Omega notation describes the best case of an algorithm's time complexity.
Big - Omega notation can be defined as follows:
• Consider a function f(n) as the time complexity of an algorithm, and let g(n) be its most significant term. If f(n) >= C * g(n) for all n >= n0, for some constants C > 0 and n0 >= 1, then we can represent f(n) as Ω(g(n)).
f(n) = Ω(g(n))
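Continuing the same illustrative example: for f(n) = 3n + 2 with g(n) = n, we have 3n + 2 >= 1 * n for all n >= 1, so the definition is satisfied with C = 1 and n0 = 1, and f(n) = Ω(n).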
10. Big - Theta Notation (Θ)
• Big - Theta notation is used to define a tight bound of an algorithm in terms of time complexity, bounding the running time from both above and below.
• That means Big - Theta notation indicates that an algorithm's running time grows at the same rate as g(n); it is often used informally to describe the average case of an algorithm's time complexity.
Big - Theta notation can be defined as follows:
• Consider a function f(n) as the time complexity of an algorithm, and let g(n) be its most significant term. If C1 * g(n) <= f(n) <= C2 * g(n) for all n >= n0, for some constants C1 > 0, C2 > 0 and n0 >= 1, then we can represent f(n) as Θ(g(n)).
f(n) = Θ(g(n))
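To complete the illustrative example: with f(n) = 3n + 2 and g(n) = n, both 1 * n <= 3n + 2 and 3n + 2 <= 4n hold for all n >= 2, so C1 = 1, C2 = 4 and n0 = 2 satisfy the definition, giving f(n) = Θ(n).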
11. What affects the run time of an algorithm?
The complexity of an algorithm is a measure of the amount of time and/or space required by an algorithm for an input of a given size (n). The actual run time also depends on the following factors:
• The computer used (the hardware platform)
• Representation of abstract data types (ADTs)
• Efficiency of the compiler
• Competence of the implementer (programming skills)
• Complexity of the underlying algorithm
• Size of the input
12. Linear Loops
To calculate the efficiency of an algorithm that has a single loop, we first need to determine the number of times the statements in the loop will be executed. This is because the number of iterations is directly proportional to the loop factor: the greater the loop factor, the greater the number of iterations. For example, consider the loop given below:
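The loop itself was not preserved in these notes; a representative C version (with an illustrative loop body) would be:

    #include <stdio.h>

    int main(void) {
        int sum = 0;
        /* loop factor is 100: the body executes exactly 100 times */
        for (int i = 0; i < 100; i++)
            sum += i;
        printf("%d\n", sum);
        return 0;
    }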
Here, 100 is the loop factor. We have already said that efficiency is directly proportional to the number of iterations. Hence, the general formula in the case of linear loops may be given as
f(n) = n
However, calculating efficiency is not always as simple as shown in the above example. Consider the loop given below:
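Again, the original snippet is missing; a C loop matching the description (the step of 2 halves the iteration count) might look like:

    #include <stdio.h>

    int main(void) {
        int count = 0;
        /* i advances in steps of 2, so the body runs 100 / 2 = 50 times */
        for (int i = 0; i < 100; i += 2)
            count++;
        printf("%d\n", count);  /* prints 50 */
        return 0;
    }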
Here, the number of iterations is half the loop factor. So, here the efficiency can be given as
f(n) = n/2
13. Logarithmic Loops
We have seen that in linear loops, the loop update statement either adds to or subtracts from the loop-controlling variable. In logarithmic loops, by contrast, the loop-controlling variable is either multiplied or divided during each iteration of the loop. For example, look at the loops given below:
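The original loops are not shown in these notes; representative C versions, using n = 1000 as in the discussion that follows, would be:

    #include <stdio.h>

    int main(void) {
        /* first loop: i is multiplied by 2 each iteration
           i takes the values 1, 2, 4, ..., 512 -> 10 iterations */
        for (int i = 1; i < 1000; i *= 2)
            printf("%d ", i);
        printf("\n");

        /* second loop: i is divided by 2 each iteration
           i takes the values 1000, 500, 250, ..., 1 -> 10 iterations */
        for (int i = 1000; i >= 1; i /= 2)
            printf("%d ", i);
        printf("\n");
        return 0;
    }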
• Consider the first for loop, in which the loop-controlling variable i is multiplied by 2. The loop will be executed only 10 times, not 1000 times, because in each iteration the value of i doubles. Now consider the second loop, in which the loop-controlling variable i is divided by 2.
• In this case also, the loop will be executed 10 times. Thus, the number of iterations is a function of the number by which the loop-controlling variable is divided or multiplied; in the examples discussed, it is 2. That is, when n = 1000, the number of iterations can be given by log2 1000, which is approximately equal to 10.
• Therefore, putting this analysis in general terms, we can conclude that the efficiency of loops in which the iterations divide or multiply the loop-controlling variable can be given as
f(n) = log n
14. Linear logarithmic loop
• Consider the following code (a representative sketch follows this list), in which the loop-controlling variable of the inner loop is multiplied after each iteration. The number of iterations in the inner loop is log 10. This inner loop is controlled by an outer loop which iterates 10 times. Therefore, according to the formula, the number of iterations for this code can be given as 10 log 10.
• In more general terms, the efficiency of such loops can be given as f(n) = n log n.
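The original code is not preserved in these notes; a representative C version is:

    #include <stdio.h>

    int main(void) {
        int count = 0;
        /* outer loop runs 10 times; the inner loop-controlling variable j
           is multiplied by 2, so the inner loop runs about log2(10) times */
        for (int i = 0; i < 10; i++)
            for (int j = 1; j < 10; j *= 2)
                count++;
        printf("%d\n", count);  /* 10 * 4 = 40, i.e. roughly 10 log 10 */
        return 0;
    }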
15. Quadratic loop
In a quadratic loop, the number of iterations in the inner loop is equal to the number of
iterations in the outer loop. Consider the following code in which the outer loop executes 10
times and for each iteration of the outer loop, the inner loop also executes 10 times. Therefore,
the efficiency here is 100.
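A representative C version (the original snippet is missing from these notes) is:

    #include <stdio.h>

    int main(void) {
        int count = 0;
        /* both loops run 10 times, so the body executes 10 * 10 = 100 times */
        for (int i = 0; i < 10; i++)
            for (int j = 0; j < 10; j++)
                count++;
        printf("%d\n", count);  /* prints 100 */
        return 0;
    }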
The generalized formula for a quadratic loop can be given as f(n) = n².
16. Dependent quadratic loop
In a dependent quadratic loop, the number of iterations in the inner loop is dependent on
the outer loop. Consider the code given below:
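The original code is not preserved; a C version matching the description would be:

    #include <stdio.h>

    int main(void) {
        int count = 0;
        /* the inner loop bound depends on the outer loop variable i */
        for (int i = 1; i <= 10; i++)
            for (int j = 1; j <= i; j++)  /* runs i times: 1, 2, ..., 10 */
                count++;
        printf("%d\n", count);  /* prints 1 + 2 + ... + 10 = 55 */
        return 0;
    }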
In this code, the inner loop will execute just once in the first iteration, twice in the second iteration, three times in the third iteration, and so on. In this way, the total number of iterations can be calculated as
1 + 2 + 3 + ... + 10 = 55
If we calculate the average number of inner iterations (55/10 = 5.5), we will observe that it is equal to the number of iterations in the outer loop (10) plus 1, divided by 2. In general terms, the inner loop iterates (n + 1)/2 times on average. Therefore, the efficiency of such a code can be given as
f(n) = n * (n + 1)/2
18. Space complexity
The space complexity of an algorithm represents the amount of memory space needed by the algorithm in its life cycle. The space needed by an algorithm is equal to the sum of the following two components:
• A fixed part, which is the space required to store certain data and variables (i.e. simple variables and constants, program size, etc.) that are independent of the size of the problem.
• A variable part, which is the space required by variables whose size is totally dependent on the size of the problem, for example recursion stack space, dynamically allocated memory, etc.
19. Example of Space Complexity
The space complexity S(p) of any algorithm p is S(p) = A + Sp(I), where A is treated as the fixed part and Sp(I) is treated as the variable part of the algorithm, which depends on the instance characteristic I. The following simple example tries to explain the concept.
Algorithm
• SUM(P, Q)
• Step 1 - START
• Step 2 - R ← P + Q + 10
• Step 3 - STOP
Here we have three variables, P, Q and R, and one constant, 10. Hence S(p) = 3 + 1. The actual space also depends on the data types of the given variables and constants, and will be multiplied accordingly.
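As an illustrative C sketch (not from the original notes), the iterative sum below needs only a fixed amount of space, while the recursive version adds a variable part proportional to n because of the recursion stack:

    #include <stdio.h>

    /* Fixed space only: a few scalar variables, regardless of n. */
    long sum_iterative(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++)
            total += i;
        return total;
    }

    /* Variable space: one stack frame per call, so O(n) extra space. */
    long sum_recursive(int n) {
        if (n == 0)
            return 0;
        return n + sum_recursive(n - 1);
    }

    int main(void) {
        printf("%ld %ld\n", sum_iterative(100), sum_recursive(100));
        return 0;
    }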