Unit 1: Fundamentals of the Analysis of Algorithmic Efficiency, Units for Measuring Running Time, PROPERTIES OF AN ALGORITHM, Growth of Functions, Algorithm - Analysis, Asymptotic Notations, Recurrence Relation and problems
CS8451 - Design and Analysis of Algorithms
1. Fundamentals of the Analysis of
Algorithmic Efficiency
Dr.K.Muthumanickam
Associate Professor/IT
Kongunadu College of Engineering and Technology
Tholur Patti, Thottiam Taluk
Trichy District – 621215, Tamilnadu
2. Introduction
There are two kinds of efficiency: time efficiency
and space efficiency.
Time efficiency - indicates how fast an algorithm in
question runs.
Space efficiency - refers to the amount of memory
units required by the algorithm in addition to
the space needed for its input and output.
3. Measuring an Input’s Size
• One of the observations of an algorithm is
that almost all algorithms run longer on
larger inputs.
• It is logical to investigate an algorithm’s
efficiency as a function of some parameter n
indicating the algorithm’s input size.
• In most cases, selecting such a parameter is
quite straightforward.
4. • There are situations where the choice of a
parameter indicating an input size does
matter. One such example is computing the
product of two n × n matrices.
• The choice of an appropriate size metric can
be influenced by the operations the algorithm
performs.
5. Units for Measuring Running Time
• We can use some standard unit of time
measurement (a second, a millisecond, and so
on) to measure the running time of a program
implementing the algorithm.
• The drawbacks of such an approach are:
dependence on the speed of a particular computer,
dependence on the quality of the program
implementing the algorithm and of the compiler
used in generating the machine code, and the
difficulty of clocking the actual running time of the
program.
6. • Another approach is to count the number of
times each of the algorithm’s operations is
executed.
• This approach is both excessively difficult and
usually unnecessary.
• The thing to do is to identify the most
important operation of the algorithm, called
the basic operation - the operation
contributing the most to the total running
time, and compute the number of times the
basic operation is executed.
7. PROPERTIES OF AN ALGORITHM
1. An algorithm takes zero or more inputs
2. An algorithm results in one or more outputs
3. All operations can be carried out in a finite amount of
time
4. An algorithm should be efficient and flexible
5. It should use less memory space as much as possible
6. An algorithm must terminate after a finite number of
steps.
7. Each step in the algorithm must be easily understood
by someone reading it
8. An algorithm should be concise and compact to
facilitate verification of its correctness.
8. PERFORMANCE ANALYSIS OF AN ALGORITHM
Any given problem may be solved by a number of
algorithms. To judge an algorithm there are many criteria.
Some of them are:
1. It must work correctly under all possible conditions
2. It must solve the problem according to the given specification
3. It must be clearly written following the top down strategy
4. It must make efficient use of time and resources
5. It must be sufficiently documented so that anybody can
understand it
6. It must be easy to modify, if required.
7. It should not be dependent on being run on a particular
computer.
9. Growth of Functions
• The relative performance of an algorithm
depends on the input data size N. If there are
multiple input parameters, we try to reduce
them to a single parameter, expressing the other
parameters in terms of the selected one.
• The performance of an algorithm on an input of
size N is generally expressed in terms of 1, log N,
N, N log N, N², N³, and 2ᴺ. The running time
depends heavily on loops and can be improved
by minimizing the work done in the inner loops.
11. Algorithm - Analysis
• The worst-case efficiency of an algorithm is its
efficiency for the worst-case input of size n,
which is an input (or inputs) of size n for which
the algorithm runs the longest among all
possible inputs of that size.
• The best-case efficiency of an algorithm is its
efficiency for the best-case input of size n, which
is an input (or inputs) of size n for which the
algorithm runs the fastest among all possible
inputs of that size.
12. • Neither the worst-case analysis nor its best-case
counterpart yields the necessary information
about an algorithm's behavior on a "typical" or
"random" input - this is measured by the
average-case efficiency.
13. Theoretical Analysis of Time Efficiency
• Count the number of times the algorithm's basic
operation is executed on inputs of size n: C(n)
• Running time: T(n) ≈ c_op × C(n), where T(n) is the
running time, n is the input size, c_op is the execution
time of the basic operation, and C(n) is the number of
times the basic operation is executed.
• Since c_op depends on the particular machine, we
ignore it and focus on the order of growth of C(n).
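As an illustration of the relationship T(n) ≈ c_op × C(n), here is a small Python sketch; the value of c_op (1 ns) and the count C(n) = n − 1 (comparisons needed to find a maximum) are assumptions chosen for the example:

```python
# Sketch of T(n) ~ c_op * C(n). The c_op value (1 ns) and the choice of
# C(n) = n - 1 (comparisons to find the max of n elements) are
# illustrative assumptions.

def C(n):
    """Basic-operation count for finding the max of n elements."""
    return n - 1

def estimated_time(n, c_op=1e-9):
    """Estimated running time T(n) ~ c_op * C(n)."""
    return c_op * C(n)

# c_op cancels out of ratios of running times, which is why the
# analysis can ignore it and focus on the order of growth of C(n).
ratio = estimated_time(2_000_000) / estimated_time(1_000_000)
```

Doubling the input size roughly doubles the estimate, regardless of c_op.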
14. • Find the total step count for the summation of
n numbers.

Statement              No. of steps   Times executed   Total steps
Algorithm Sum(a,n)     -              -                0
{                      -              -                0
s = 0.0                1              1                1
for i = 1 to n do      1              n+1              n+1
s = s + a[i]           1              n                n
return s               1              1                1
}                      -              -                0
Total step count: 2n + 3
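The 2n + 3 total can be checked with a small Python function that mirrors Algorithm Sum and tallies every executed statement, following the table's counting conventions:

```python
def sum_with_steps(a):
    """Mirror of Algorithm Sum(a, n), counting executed statements
    exactly as in the step-count table."""
    n = len(a)
    steps = 0
    steps += 1              # s = 0.0 executes once
    s = 0.0
    for i in range(n):
        steps += 1          # loop test succeeds n times
        steps += 1          # s = s + a[i] executes n times
        s += a[i]
    steps += 1              # final failing loop test (n + 1 tests total)
    steps += 1              # return s executes once
    return s, steps

# For n = 4 elements this yields 2*4 + 3 = 11 steps.
```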
15. Class Work
(1) Find the total step count for the addition of
two matrices.
(2) Find the total step count of the following
algorithm. (Homework)
Algorithm Rsum(a,n)
{
if (n <= 0) then
return 0;
else
return Rsum(a, n-1) + a[n];
}
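A runnable Python version of Rsum might look as follows; the pseudocode's 1-indexed a[n] maps to Python's 0-indexed a[n-1], a translation chosen for this sketch:

```python
def rsum(a, n):
    """Recursive sum of the first n elements of list a.
    Mirrors: if n <= 0 return 0 else return Rsum(a, n-1) + a[n]."""
    if n <= 0:
        return 0
    return rsum(a, n - 1) + a[n - 1]   # a[n] in 1-indexed pseudocode
```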
16. Step-count table for Rsum (the frequency of each
statement differs for n = 0 and n > 0):

Statement                          Steps        n = 0   n > 0
Algorithm Rsum(a,n)                0            -       -
{                                  0            -       -
if (n <= 0) then                   1            1       1
return 0;                          1            1       0
else return Rsum(a, n-1) + a[n];   1 + T(n-1)   0       1
}                                  0            -       -
Total step count                                2       2 + T(n-1)

Matrix Addition = 2mn + 2m + 1
17. Asymptotic Notations and their
properties
• To compare and rank the orders of growth of
algorithms' running times, computer scientists
use three notations:
• O (big oh) – asymptotic upper bound (commonly used for the worst case)
• Ω (big omega) – asymptotic lower bound (commonly used for the best case)
• Θ (big theta) – asymptotically tight bound
18. O-notation
• DEFINITION - A function t(n) is said to be in
O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded
above by some constant multiple of g(n) for all
large n, i.e., if there exist some positive constant
c and some nonnegative integer n0 such that
t(n) ≤ cg(n) for all n ≥ n0
19. [Figure: plot illustrating t(n) ∈ O(g(n)) — t(n) bounded above by c·g(n) for all n ≥ n0]
20. Big-Oh – Examples
1) t(n) = 100n + 5
100n + 5 ≤ 100n + n = 101n ≤ 101n², for all n ≥ 5
So, 100n + 5 ∈ O(n²), where c = 101, n0 = 5
2) t(n) = n² + 10n
n² + 10n ≤ 2n², for all n ≥ 10
So, n² + 10n ∈ O(n²), where c = 2, n0 = 10
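The constants c and n0 found above can be spot-checked numerically. The helper below is a finite check over a range of n, not a proof; the function name and the cutoff n_max = 1000 are choices made for this sketch:

```python
def holds_big_o(t, g, c, n0, n_max=1000):
    """Check t(n) <= c*g(n) for all n0 <= n <= n_max (a spot check)."""
    return all(t(n) <= c * g(n) for n in range(n0, n_max + 1))

# 100n + 5 in O(n^2) with c = 101, n0 = 5
ok1 = holds_big_o(lambda n: 100 * n + 5, lambda n: n * n, c=101, n0=5)
# n^2 + 10n in O(n^2) with c = 2, n0 = 10
ok2 = holds_big_o(lambda n: n * n + 10 * n, lambda n: n * n, c=2, n0=10)
```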
21. 3) t(n) = 5n²
5n² ≤ 5·n², for all n ≥ 0
So, 5n² ∈ O(n²), where c = 5, n0 = 0
Home Work
1) Find the order of 2ⁿ + n³
2) Find the order of n² + log n
3) If f(x) = x³/2 and g(x) = 37x² + 120x + 17, show that
g ∈ O(f) but f ∉ O(g)
22. Ω -notation
DEFINITION A function t(n) is said to be in
Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is
bounded below by some positive constant
multiple of g(n) for all large n, i.e., if there exist
some positive constant c and some nonnegative
integer n0 such that
t(n) ≥ cg(n) for all n ≥ n0
23. [Figure: plot illustrating t(n) ∈ Ω(g(n)) — t(n) bounded below by c·g(n) for all n ≥ n0]
24. Ω-notation Example
(1) Show that n³ ∈ Ω(n²)
Solution:
t(n) ≥ c·g(n) for all n ≥ n0
n³ ≥ 1 × n² for all n ≥ 0
Thus, we can select c = 1 and n0 = 0.
This proves that n³ ∈ Ω(n²).
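The same kind of numeric spot check works for Ω, verifying t(n) ≥ c·g(n) over a range of n (again a finite check rather than a proof; names and ranges are choices for this sketch):

```python
def holds_big_omega(t, g, c, n0, n_max=1000):
    """Check t(n) >= c*g(n) for all n0 <= n <= n_max (a spot check)."""
    return all(t(n) >= c * g(n) for n in range(n0, n_max + 1))

# n^3 in Omega(n^2) with c = 1, n0 = 0
ok_cube = holds_big_omega(lambda n: n ** 3, lambda n: n ** 2, c=1, n0=0)
# n(n-1)/2 in Omega(n^2) with c = 1/4, n0 = 2 (the next example)
ok_half = holds_big_omega(lambda n: n * (n - 1) / 2,
                          lambda n: n * n, c=0.25, n0=2)
```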
25. Ω-notation Example
(2) Show that n(n−1)/2 ∈ Ω(n²)
Solution:
For n ≥ 2,
n(n−1)/2 ≥ (1/2)·n·(n/2)   (since n − 1 ≥ n/2 for n ≥ 2)
= (1/4)·n²
Thus, c = 1/4 and n0 = 2.
26. Ω-notation Example
(3) t(n) = 2n² + 5 and g(n) = 7n
Solution:
If n = 0: t(n) = 5, g(n) = 0; t(n) > g(n)
If n = 1: t(n) = 7, g(n) = 7; t(n) = g(n)
If n = 3: t(n) = 23, g(n) = 21; t(n) > g(n)
Thus, for n ≥ 3 we have t(n) ≥ g(n), so
2n² + 5 ∈ Ω(n)
27. Home Work
(1) Show that 100n + 5 ∉ Ω(n²)
(2) Show that n² + 10n ∈ Ω(n²)
(3) Show that 5n² ∈ Ω(n²)
28. Θ -Notation
DEFINITION A function t(n) is said to be in Θ(g(n)),
denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both
above and below by some positive constant
multiples of g(n) for all large n, i.e., if there exist
some positive constants c1 and c2 and some
nonnegative integer n0 such that
c2g(n) ≤ t(n) ≤ c1g(n) for all n ≥ n0
29. [Figure: plot illustrating t(n) ∈ Θ(g(n)) — t(n) sandwiched between c2·g(n) and c1·g(n) for all n ≥ n0]
30. Θ - Example
(1) t(n) = 2n + 8 and g(n) = n
Solution:
c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0
2n ≤ 2n + 8 ≤ 7n for all n ≥ 2
Thus, c2 = 2, c1 = 7, and n0 = 2, so 2n + 8 ∈ Θ(n).
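A Θ claim requires the two-sided sandwich c2·g(n) ≤ t(n) ≤ c1·g(n); the sketch below spot-checks 2n ≤ 2n + 8 ≤ 7n for n ≥ 2, i.e. 2n + 8 ∈ Θ(n) (a finite check with names chosen for this sketch):

```python
def holds_big_theta(t, g, c_low, c_high, n0, n_max=1000):
    """Check c_low*g(n) <= t(n) <= c_high*g(n) for n0 <= n <= n_max."""
    return all(c_low * g(n) <= t(n) <= c_high * g(n)
               for n in range(n0, n_max + 1))

# 2n + 8 in Theta(n): 2n <= 2n + 8 <= 7n for all n >= 2
ok_theta = holds_big_theta(lambda n: 2 * n + 8, lambda n: n,
                           c_low=2, c_high=7, n0=2)
```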
31. Using Limits for Comparing Orders
of Growth
• L'Hôpital's Rule
Let t and g be two differentiable functions, with
derivatives t′ and g′ respectively, such that
lim(n→∞) t(n) = lim(n→∞) g(n) = ∞;
then
lim(n→∞) t(n)/g(n) = lim(n→∞) t′(n)/g′(n)
32. Using Limits for Comparing Orders
of Growth
lim(n→∞) t(n)/g(n) =
  0 implies that t(n) grows slower than g(n)
  c > 0 implies that t(n) grows at the same order as g(n)
  ∞ implies that t(n) grows faster than g(n)
1. The first two cases (0 and c) mean t(n) ∈ O(g(n))
2. The last two cases (c and ∞) mean t(n) ∈ Ω(g(n))
3. The second case (c) means t(n) ∈ Θ(g(n))
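The three limit cases can be illustrated numerically by evaluating t(n)/g(n) at a single large n (a heuristic picture of the limit, not a substitute for computing it):

```python
import math

def ratio_at(t, g, n=10 ** 6):
    """Evaluate t(n)/g(n) at one large n to hint at the limit."""
    return t(n) / g(n)

r_slower = ratio_at(lambda n: math.log2(n), lambda n: n)       # near 0
r_same = ratio_at(lambda n: n * (n - 1) / 2, lambda n: n * n)  # near 1/2
r_faster = ratio_at(lambda n: n * n, lambda n: n)              # very large
```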
33. Example
(1) Compare the orders of growth of (1/2)n(n−1) and n²
Solution:
lim(n→∞) [(1/2)n(n−1)] / n²
= (1/2) lim(n→∞) (n² − n)/n²
= (1/2) lim(n→∞) (1 − 1/n)
= 1/2 = c
i.e. (1/2)n(n−1) ∈ Θ(n²)
34. • If the resulting limit is
(1) 0: the first function has a smaller order of
growth than the second (it belongs to O of
the second)
(2) ∞: the first function has a larger order of
growth than the second (it does not belong
to O of the second)
(3) a constant c > 0: the two functions have the
same order of growth
35. Home Work
(1) Let f(n) = n³/2 and g(n) = 37n² + 120n + 7. Show
that g ∈ O(f) and f ∉ O(g).
(2) Compare the orders of growth of n²/2 and n³
(3) Compare the orders of growth of log₂ n and √n
(4) Compare the orders of growth of n! and 2ⁿ
36. Properties of O, Ω and Θ
(1) If there are two functions t1(n) and t2(n) such
that t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then
t1(n) + t2(n) ∈ O(max(g1(n), g2(n))).
(2) If there are two functions t1(n) and t2(n) such
that t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then
t1(n) × t2(n) ∈ O(g1(n) × g2(n)).
(3) If t(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then
t(n) ∈ O(h(n))
37. (4) In a polynomial, the highest-power term
dominates the other terms.
(5) Any constant value leads to O(1) time
complexity.
(6) If lim(n→∞) t(n)/g(n) = 0, then t(n) ∈ O(g(n))
but t(n) ∉ Θ(g(n)).
39. Empirical Analysis
• Definition – Empirical analysis of an algorithm
means observing the behavior of the
algorithm on a certain set of inputs.
• In empirical analysis, an actual program is
written for the algorithm and, with the help of
a suitable input set, the algorithm's behavior
is analyzed.
41. General Plan
• General Plan
– Understand the experiment's purpose:
• What is the algorithm's efficiency class?
• Compare two algorithms for the same
problem
– Decide on the efficiency metric: operation
count vs. time units
42. – Decide on characteristics of the input sample
(range, size, etc.)
– Write a program implementing the algorithm
– Generate a sample of inputs
– Run it on the sample inputs and record the data
– Analyze the data
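The plan above can be sketched in Python: write a program for the algorithm (max-finding here, as an assumed example), generate sample inputs, run on them, and record the data with the standard `timeit` module:

```python
import random
import timeit

def find_max(a):
    """Program implementing the algorithm under study (max-finding)."""
    maxval = a[0]
    for x in a[1:]:
        if x > maxval:
            maxval = x
    return maxval

# Generate sample inputs of growing size, run, and record measurements.
measurements = {}
for n in (1_000, 2_000, 4_000):
    sample = [random.random() for _ in range(n)]
    measurements[n] = timeit.timeit(lambda: find_max(sample), number=50)
# Analysis step: e.g. compare measurements[2000] / measurements[1000].
```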
44. Recurrence Relation
A recurrence relation is an equation defined
in terms of itself: the nth term is expressed
in terms of one or more previous terms
(a(n-1), a(n-2), etc.).
Example:
a(n) = 2a(n-1) + a(n-2)
45. Recurrence Relation of the Fibonacci
Numbers: fib(n) = fib(n-1) + fib(n-2), with
fib(0) = 0 and fib(1) = 1:
{0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …}
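The Fibonacci recurrence fib(n) = fib(n − 1) + fib(n − 2), fib(0) = 0, fib(1) = 1 can be evaluated with a simple iterative Python sketch:

```python
def fib(n):
    """Fibonacci via the recurrence fib(n) = fib(n-1) + fib(n-2),
    with fib(0) = 0 and fib(1) = 1, computed iteratively."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

first_12 = [fib(i) for i in range(12)]
# -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```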
46. Recurrences
• The expression
T(n) = c                  if n = 1
T(n) = 2T(n/2) + cn       if n > 1
is a recurrence.
– Recurrence: an equation that describes a function
in terms of its value on smaller inputs
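For powers of two, a recurrence of the form T(n) = 2T(n/2) + cn with T(1) = c can be evaluated directly and compared against its well-known closed form T(n) = c·n·log₂ n + c·n (a standard solution, stated here as background for the sketch):

```python
import math

def T(n, c=1):
    """Evaluate T(n) = c if n == 1, else 2*T(n // 2) + c*n,
    for n a power of two."""
    if n == 1:
        return c
    return 2 * T(n // 2, c) + c * n

# Closed-form value for n = 1024, c = 1: 1024*10 + 1024 = 11264
closed = 1024 * math.log2(1024) + 1024
```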
50. Mathematical Analysis of Non-Recursive Algorithms
• General Plan for Analyzing the Time Efficiency of Nonrecursive
Algorithms
– 1. Decide on a parameter (or parameters) indicating an input's
size.
– 2. Identify the algorithm's basic operation.
– 3. Check whether the number of times the basic operation is
executed depends only on the size of an input. If it also
depends on some additional property, the worst-case,
average-case, and, if necessary, best-case efficiencies have to
be investigated separately.
– 4. Set up a sum expressing the number of times the algorithm's
basic operation is executed.
– 5. Using standard formulas and rules of sum manipulation,
either find a closed-form formula for the count or, at the very
least, establish its order of growth.
51. Analysis of Nonrecursive Algorithms
Consider the problem of finding the value of the largest element in a list
of n numbers. For simplicity, we assume that the list is implemented as
an array. The following is pseudocode of a standard algorithm for solving
the problem.

ALGORITHM MaxElement(A[0..n-1])
// Determines the value of the largest element in array A
maxval ← A[0]
for i ← 1 to n-1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval

Input size: n
Basic operation: the comparison A[i] > maxval
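A Python translation of MaxElement that also counts executions of the basic operation (the comparison) might look like:

```python
def max_element(a):
    """Python version of ALGORITHM MaxElement, additionally counting
    how many times the basic operation A[i] > maxval executes."""
    comparisons = 0
    maxval = a[0]
    for i in range(1, len(a)):
        comparisons += 1            # basic operation
        if a[i] > maxval:
            maxval = a[i]
    return maxval, comparisons      # comparisons == n - 1
```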
52. • The obvious measure of an input's size here is
the number of elements in the array, i.e., n.
• The operations executed most often are in the
algorithm's for loop. We take the comparison
A[i] > maxval as the algorithm's basic operation.
• Let us denote by C(n) the number of times this
comparison is executed.
53. • The algorithm makes one comparison on each
execution of the loop, which is repeated for
each value of the loop's variable i within the
bounds 1 and n − 1, inclusive.
• Therefore, we get the following sum for C(n):
C(n) = Σ(i=1 to n−1) 1 = n − 1 ∈ Θ(n)
54. • EXAMPLE 2 Consider the element uniqueness
problem: check whether all the elements in a
given array of n elements are distinct. This
problem can be solved by the following
straightforward algorithm.
55. ALGORITHM UniqueElements(A[0..n-1])
// Determines whether all the elements in a given array are distinct
for i ← 0 to n-2 do
    for j ← i+1 to n-1 do
        if A[i] = A[j]
            return false
return true

Input size: n
Basic operation: the comparison A[i] = A[j]
Does C(n) depend on the type of input?
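A Python translation of UniqueElements, counting the basic operation so the input-dependence of C(n) can be observed directly (for an all-distinct array, the count reaches n(n − 1)/2, the standard worst-case result):

```python
def unique_elements(a):
    """Python version of ALGORITHM UniqueElements; also counts the
    comparisons A[i] == A[j] actually performed."""
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1        # basic operation
            if a[i] == a[j]:
                return False, comparisons
    return True, comparisons
```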
56. • By definition, the worst case input is an array
for which the number of element comparisons
Cworst(n) is the largest among all arrays of size
n.
• There are two kinds of worst-case inputs
(1) Arrays with no equal elements and
(2) Arrays in which the last two elements are the
only pair of equal elements.
57. • For such inputs, one comparison is made for
each repetition of the innermost loop.
• Accordingly, we get
Cworst(n) = Σ(i=0 to n−2) Σ(j=i+1 to n−1) 1
= Σ(i=0 to n−2) (n − 1 − i) = n(n − 1)/2 ∈ Θ(n²)
59. • EXAMPLE 3 Given two n × n matrices A and B,
find the time efficiency of the definition-based
algorithm for computing their product C = AB.
By definition, C is an n × n matrix whose
elements are computed as the scalar (dot)
products of the rows of matrix A and the
columns of matrix B:
C[i, j] = A[i, 0]·B[0, j] + ... + A[i, n−1]·B[n−1, j]
for every pair of indices 0 ≤ i, j ≤ n − 1.
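The definition-based product can be sketched in Python with a counter on the innermost multiplication, whose total count is exactly n³:

```python
def mat_mul(A, B):
    """Definition-based n x n matrix product:
    C[i][j] = sum over k of A[i][k] * B[k][j].
    Also counts the multiplications in the innermost loop."""
    n = len(A)
    mults = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1          # executes n^3 times in total
    return C, mults
```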
62. • EXAMPLE 4 The following algorithm finds the
number of binary digits in the binary
representation of a positive decimal integer.
The floor function ⌊x⌋ gives the nearest integer
not exceeding x; the bit count equals ⌊log₂ n⌋ + 1.
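The bit-counting algorithm (repeated halving) can be sketched in Python; its result agrees with ⌊log₂ n⌋ + 1:

```python
import math

def binary_digits(n):
    """Number of binary digits of a positive integer n, found by
    repeatedly halving n until it reaches 1."""
    count = 1
    while n > 1:
        count += 1
        n //= 2                    # one halving per loop iteration
    return count

# Agrees with floor(log2(n)) + 1, e.g. binary_digits(255) == 8.
```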