2. Overview of Syllabus
Asymptotic Notations
Time and Space Complexities
Divide and Conquer: Searching algorithms, Sorting algorithms
Greedy Methods: Job sequence, Knapsack problem, Huffman encoding,
Dijkstra’s Algorithm, Minimum Spanning Tree
Dynamic Programming: Shortest path, Floyd’s Algorithm, TSP
Backtracking: N-Queens problem, Graph coloring
NP-Complete, NP-Hard Problems
3. CONTENTS
Overview of Syllabus
What is an Algorithm?
Characteristics of algorithm
Algorithmic Specifications: Pseudocode conventions
Example: Computing the greatest common divisor of two integers “gcd(m,n)”
Euclid’s algorithm
Consecutive integer checking algorithm for computing gcd(m,n)
Middle-school procedure for computing gcd(m,n)
Sieve of Eratosthenes – Algorithm for generating consecutive primes with
example
Assignment questions
4. What is an Algorithm?
Definition: An algorithm is a finite set of instructions that, if followed,
accomplishes a particular task.
Program: It is the expression of an algorithm in a programming language.
Muhammad ibn Musa al-Khwarizmi
5. Characteristics of an Algorithm
Input – zero or more quantities
Output – At least one quantity
Definiteness – Clear and unambiguous
Finiteness – algorithm terminates after a finite number of steps
Effectiveness – each operation be definite and also be feasible
Notion of an Algorithm: a problem is solved by designing an algorithm; a
computer executes the algorithm on the given input to produce the output.
Algorithms that are definite and effective are also called computational procedures. E.g.: OS of a
digital computer
6. Example: Computing the greatest
common divisor of two integers,
gcd(m,n)
Eg 1: gcd(60, 24), using gcd(m,n) = gcd(n, m mod n):
gcd(60, 24) = gcd(24, 60 mod 24) = gcd(24, 12)
gcd(24, 12) = gcd(12, 24 mod 12) = gcd(12, 0)
gcd(12, 0) = 12
Therefore, gcd(60, 24) = 12
Eg 2: Verify that gcd(80, 48) is 16.
7. Two descriptions of Euclid’s algorithm
Step 1 If n = 0, return m and stop; otherwise go to Step 2
Step 2 Divide m by n and assign the value of the remainder to r
Step 3 Assign the value of n to m and the value of r to n. Go to
Step 1.
Euclid(m,n)
while n ≠ 0 do
r ← m mod n
m← n
n ← r
return m
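The two descriptions above translate directly into C; a minimal sketch (the function name gcd is our choice, not part of the slide):

```c
#include <assert.h>

/* Euclid's algorithm: repeatedly apply gcd(m, n) = gcd(n, m mod n)
   until n becomes 0, at which point m holds the answer. */
int gcd(int m, int n) {
    while (n != 0) {
        int r = m % n;  /* Step 2: remainder of m divided by n */
        m = n;          /* Step 3: n takes m's place ...       */
        n = r;          /* ... and r takes n's place           */
    }
    return m;           /* Step 1: n == 0, so return m         */
}
```

For example, gcd(60, 24) evaluates to 12, matching the trace on the previous slide.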
8. Consecutive integer checking algorithm
for computing gcd(m,n)
Method 2: Compute gcd(60, 24) by the method below:
Consecutive integer checking algorithm
Step 1 Assign the value of min{m,n} to t
Step 2 Divide m by t. If the remainder is 0, go to Step 3;
otherwise, go to Step 4
Step 3 Divide n by t. If the remainder is 0, return t and stop;
otherwise, go to Step 4
Step 4 Decrease t by 1 and go to Step 2
Problem: It does not work correctly when one of the input numbers is zero
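A C sketch of these four steps (the name gcd_cic is ours; as the slide notes, it assumes both inputs are positive):

```c
#include <assert.h>

/* Consecutive integer checking: try t = min(m, n), min(m, n) - 1, ...
   until t divides both m and n. Undefined if m or n is 0, which
   illustrates the slide's caveat. */
int gcd_cic(int m, int n) {
    int t = (m < n) ? m : n;          /* Step 1: t = min{m, n}        */
    while (m % t != 0 || n % t != 0)  /* Steps 2-3: t must divide both */
        t--;                          /* Step 4: decrease t and retry  */
    return t;
}
```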
9. Middle-school procedure for gcd(m,n)
Method 3: Compute gcd(60, 24)
gcd(60, 24) => 60 = 2 · 2 · 3 · 5 and 24 = 2 · 2 · 2 · 3
So, the common prime factors are {2, 2, 3}, and 2 * 2 * 3 = 12
Therefore, gcd(60, 24) = 12
Step 1 Find the prime factorization of m
Step 2 Find the prime factorization of n
Step 3 Find all the common prime factors
Step 4 Compute the product of all the common prime factors
and return it as gcd(m,n)
Is this an algorithm?
Problem: It is not a legitimate algorithm because the prime factorization steps are not defined
unambiguously; they require a list of prime numbers.
10. Sieve of Eratosthenes – Algorithm for
generating consecutive primes with
example
Input: Integer n ≥ 2
Output: List of primes less than or equal to n
for p ← 2 to n do A[p] ← p
for p ← 2 to n do
if A[p] ≠ 0 //p hasn’t been previously eliminated from the list
j ← p* p
while j ≤ n do
A[j] ← 0 //mark element as eliminated
j ← j + p
Example (n = 12): candidates 2 3 4 5 6 7 8 9 10 11 12; p runs from 2 up to √n ≈ 3
Step 1: all candidates 2..12 start unmarked
Step 2 (p = 2): eliminate 4, 6, 8, 10, 12
Step 3 (p = 3): eliminate 9 (6 and 12 are already eliminated)
Since the next p exceeds √12, the loop stops; the surviving numbers are the primes:
2 3 5 7 11
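A C sketch of the sieve following the pseudocode above (the fixed bound LIMIT = 25 and the function name sieve are our choices for the demo):

```c
#include <assert.h>

#define LIMIT 25  /* upper bound for this demo */

/* Sieve of Eratosthenes: eliminate multiples of each surviving p,
   starting from p*p. Fills primes[] (caller must size it generously)
   and returns how many primes <= LIMIT were found. */
int sieve(int primes[]) {
    int a[LIMIT + 1];
    int count = 0;
    for (int p = 2; p <= LIMIT; p++) a[p] = p;
    for (int p = 2; p * p <= LIMIT; p++) {
        if (a[p] != 0)                            /* p not yet eliminated */
            for (int j = p * p; j <= LIMIT; j += p)
                a[j] = 0;                         /* mark as eliminated   */
    }
    for (int p = 2; p <= LIMIT; p++)
        if (a[p] != 0) primes[count++] = a[p];    /* collect survivors    */
    return count;
}
```

For LIMIT = 25 the survivors are 2, 3, 5, 7, 11, 13, 17, 19, 23.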
12. Algorithmic Specifications: Pseudocode
Conventions
Pseudocode is widely used in programming and algorithm design. It is a
notation that allows the programmer to describe the implementation of an algorithm without tying it to one language.
Advantages of Pseudocode:
Acts as a bridge between the program and the algorithm or flowchart.
Disadvantages of Pseudocode
Pseudocode does not provide a visual representation of the logic of programming.
Rules:
Comments begin with //
Block representations within {}
An identifier begins with a letter. The datatypes of variables are not explicitly declared.
Assignment <variable>:=<expression> Eg: c:=a+b;
Boolean values: TRUE or FALSE; Logical operators: AND, OR, NOT; Relational operators: < , = , > ,>=,
<=.
Arrays: A[i, j]
Loops: Use of for, while, repeat-until
13. Algorithmic Specifications: Pseudocode
Conventions (contd…)
1. Conditionals:
if <condition> then <stmt1> else <stmt2>
Case statement (switch) case
{
:<condition1> : <statement1>
……
:<condition n> : <statement n>
:else: <statement n+1>
}
2. I/O: pseudocode uses read and write rather than language-specific I/O procedures.
An algorithm heading is written as: Algorithm Name(<parameter list>)
Use of return to exit and hand back a value
14. Algorithm vs Pseudocode vs Program
Algorithm
Systematic logical approach
which is well-defined, step
by step procedure that
allows a computer to solve
problem.
Eg: Add 2 numbers
Step 1: Start
Step 2: Declare variables num1, num2 and sum
Step 3: Read values num1 and num2
Step 4: Add num1 and num2:
sum ← num1 + num2
Step 5: Display sum
Step 6: Stop
Pseudocode
It’s a simpler version of a
programming code in plain
English which uses short
phrases to write code for a
program before it is
implemented in a specific
programming language.
Eg: Add 2 numbers
BEGIN
WRITE “Please enter two numbers”
READ num1, num2
sum = num1 + num2
WRITE sum
END
Program
It is a set of instructions for the
computer to follow.
Eg: Add 2 numbers
#include <stdio.h>
int main()
{
int num1, num2, sum;
scanf("%d %d", &num1, &num2);
sum = num1 + num2;
printf("\nThe sum of 2 numbers is: %d", sum);
return 0;
}
15. Selection Sort
Algorithm:
For i := 1 to n do
{
Examine a[i] to a[n] and suppose
the smallest element is at a[j];
Interchange a[i] and a[j];
}
Pseudocode:
SelectionSort(a,n)
{
for i:=1 to n do
{
j:=i;
for k:=i+1 to n do
if (a[k] < a[j]) then j := k;
t := a[i]; a[i] := a[j]; a[j] := t;
}
}
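The pseudocode carries over to C roughly as follows (0-based indexing instead of the slide's 1-based a[1..n]):

```c
#include <assert.h>

/* Selection sort, 0-based: on pass i, locate the smallest element of
   a[i..n-1] (index j) and swap it into position i. */
void selection_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int j = i;
        for (int k = i + 1; k < n; k++)
            if (a[k] < a[j])
                j = k;                        /* j tracks the minimum   */
        int t = a[i]; a[i] = a[j]; a[j] = t;  /* interchange a[i], a[j] */
    }
}
```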
16. Algorithm Specifications: Recursive
Algorithms
Definition: An algorithm is said to be recursive if the same algorithm is invoked in its body.
Types:
Direct: Algorithm A is said to be direct recursive if it calls itself.
Indirect: Algorithm A is said to be indirect, if it calls another algorithm which in turn calls A.
Every recursive algorithm has 2 elements:
a) Base case: This statement solves the problem. Every recursive function must have a base case.
b) General case: Each call reduces the size of the problem.
How to design a recursive algorithm?
Determine the base case
Determine the general case
Finally combine the base and general case into an algorithm
17. Recursion
It is a process in which a function calls itself directly or indirectly.
Eg:
#include <stdio.h>
int fun(int n)
{
if (n == 1)
return 1;
else
return 1 + fun(n-1);
}
int main()
{
int n = 3;
printf("%d", fun(n));
return 0;
}
Call trace for fun(3):
fun(3): return 1 + fun(2)
fun(2): return 1 + fun(1)
fun(1): return 1
so fun(2) returns 2 and fun(3) returns 3
Output: 3
18. Example 1: TOWER OF HANOI
History
There is a story about an ancient temple in India (some say it is in Vietnam – hence the name
Hanoi) that has a large room with three towers surrounded by 64 golden disks.
These disks are continuously moved by priests in the temple. According to a prophecy, when the
last move of the puzzle is completed the world will end.
These priests, acting on the prophecy, follow the immutable rule set by Lord Brahma of moving
these disks one at a time.
Hence this puzzle is often called Tower of Brahma puzzle.
Tower of Hanoi is one of the classic problems to look at if you want to learn recursion.
What is the game of Tower of Hanoi?
Tower of Hanoi consists of three towers with n disks placed one over the other.
The objective of the puzzle is to move the stack to another tower following these simple rules.
Only one disk can be moved at a time.
No disk can be placed on top of a smaller disk.
20. 1. Move n-1 discs from x to z using y.
2. Move a disc from x to y.
3. Move n-1 discs from z to y using x.
Program:
Algorithm TowersOfHanoi(n, x, y, z)
//Move the top n disks from tower x to tower y
{
if (n>=1)
{
TowersOfHanoi(n-1, x, z, y);
write (“move top disc from tower”, x,”to top of tower”, y);
TowersOfHanoi( n-1, z, y, x);
}
}
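A C sketch of the same recursion; instead of printing each move it returns the total number of single-disk moves (the function name hanoi and the move count are our additions):

```c
#include <assert.h>

/* Move n disks from peg x to peg y using peg z as auxiliary,
   returning how many single-disk moves were performed. */
long hanoi(int n, char x, char y, char z) {
    if (n < 1)
        return 0;
    long moves = hanoi(n - 1, x, z, y); /* top n-1 disks: x -> z */
    moves += 1;                         /* largest disk:  x -> y */
    moves += hanoi(n - 1, z, y, x);     /* n-1 disks:     z -> y */
    return moves;
}
```

hanoi(3, 'A', 'C', 'B') performs 7 moves, consistent with the 2ⁿ − 1 moves derived later in these slides.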
24. Example 2: Permutation Generator
Algorithm Perm(a, k, n)
{
if (k = n) then write (a[1:n]); // output permutation
else // a[k:n] has more than one permutation
// generate these recursively
for i := k to n do
{
t := a[k]; a[k] := a[i]; a[i] := t;
Perm(a, k+1, n);
// all permutations of a[k+1 : n]
t := a[k]; a[k] := a[i]; a[i] := t;
}
}
For ABC
o/p:
ABC
ACB
BAC
BCA
CBA
CAB
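A C rendering of Perm with 0-based indexing; for illustration it counts the complete permutations instead of printing them (the name perm and the count parameter are our additions):

```c
#include <assert.h>
#include <string.h>

/* Put each candidate into position k, recurse on k+1, then swap back
   so the array is restored before trying the next candidate. */
void perm(char a[], int k, int n, int *count) {
    if (k == n - 1) {          /* one element left: permutation complete */
        (*count)++;
        return;
    }
    for (int i = k; i < n; i++) {
        char t = a[k]; a[k] = a[i]; a[i] = t;  /* choose a[i] for slot k */
        perm(a, k + 1, n, count);
        t = a[k]; a[k] = a[i]; a[i] = t;       /* undo the swap          */
    }
}
```

Because every swap is undone, the array is back in its original order after the call returns; for "ABC" exactly 3! = 6 permutations are generated.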
25. Fundamentals of Algorithmic Problem
Solving
Steps for designing and analyzing an algorithm
Understand the problem
Ascertain the capabilities of a computational device
Choose between exact and approximate problem solving
Decide on appropriate data structures
27. Fundamentals of Algorithmic Problem
Solving (contd…)
Step 1: Understanding the problem
Identify the problem type and use existing algorithms to find a solution.
The input (an instance of the problem) and the range of inputs are fixed.
Step 2: Decision making
(a) Ascertaining the capabilities of the Computational device
Algorithms designed to be executed on RAM machines are called sequential algorithms.
In some newer computers, operations are executed concurrently, i.e., in parallel. Algorithms that take advantage
of this capability are called parallel algorithms.
Choice of computational devices like Processor and memory is mainly based on space and time efficiency.
(b) Choosing between exact and approximate Problem solving
An algorithm used to solve the problem exactly and produce correct result is called an exact algorithm.
If the problem is so complex and not able to get exact solution, then we have to choose an algorithm called an
approximation algorithm. i.e., produces an approximate answer. E.g., extracting square roots, solving nonlinear
equations, and evaluating definite integrals.
28. Fundamentals of Algorithmic Problem Solving
(contd…)
(c) Algorithm Design Techniques:
An algorithm design technique (or “strategy” or “paradigm”) is a general approach to solving problems
algorithmically that is applicable to a variety of problems from different areas of computing.
Algorithms+ Data Structures = Programs
Step 3: Methods of Specifying an Algorithm There are three ways to specify an algorithm. They are:
a. Natural language
Natural Language: It is very simple and easy to specify an algorithm using natural language. But many times the specification of an
algorithm in natural language is not clear, and the resulting specification is ambiguous.
b. Pseudocode
Pseudocode is a mixture of a natural language and programming language constructs. Pseudocode is usually more precise than
natural language.
c. Flowchart
A flowchart is a graphical representation of an algorithm: a method of expressing an algorithm by a collection of
connected geometric shapes containing descriptions of the algorithm’s steps.
29. Fundamentals of Algorithmic Problem Solving
(contd…)
Step 4: Proving an Algorithm’s Correctness
Once an algorithm has been specified then its correctness must be proved.
An algorithm must yield the required result for every legitimate input in a finite amount of time.
Step 5: Analyzing an Algorithm
For an algorithm, the most important property is efficiency. In fact, there are two kinds of algorithm efficiency. They are:
Time efficiency, indicating how fast the algorithm runs, and
Space efficiency, indicating how much extra memory it uses.
The efficiency of an algorithm is determined by measuring both time efficiency and space efficiency.
So factors to analyze an algorithm are:
Time efficiency of an algorithm
Space efficiency of an algorithm
Simplicity of an algorithm
Generality of an algorithm
Step 6: Coding an Algorithm
The coding / implementation of an algorithm is done in a suitable programming language like C, C++, or Java.
It is essential to write optimized (efficient) code to reduce the burden on the compiler.
30. Analysis Framework
1. Measuring an Input’s Size
2. Units for Measuring Running Time
3. Orders of Growth
4. Worst-case, Best-Case and Average-case Efficiencies
A posteriori analysis vs. A priori analysis:
A posteriori: hardware and language dependent. A priori: independent of language and hardware.
A posteriori: gives an exact answer. A priori: gives an approximate answer.
A posteriori: uses seconds and bytes to represent the time and space requirements. A priori: uses asymptotic notations to represent how much time the algorithm will take to complete its execution.
A posteriori: will differ from system to system. A priori: same for every system.
A posteriori: improve the compiler and hardware to make the program go faster. A priori: improve the logic to make the algorithm run faster.
31. 1. Measuring an Input size
An algorithm's efficiency is investigated as a function of some
parameter ‘n’ indicating the algorithm's input size.
• Calculate the input size based on the number of items in the input
• Define the input size in terms of the total number of bits
• Define the input size in terms of two numbers
32. Let cop be the execution time of an algorithm’s basic operation on a particular
computer, and
let C(n) be the number of times this operation needs to be executed for this
algorithm. Then we can estimate the running time
T (n) of a program implementing this algorithm on that computer by the
formula
T (n) ≈ cop C(n).
Total number of steps for basic operation execution, C (n) = n
33. Note: How to compute the running time of an algorithm?
Let cop be the execution time of an algorithm’s basic operation on a particular computer
Let C(n) be the number of times this operation needs to be executed by the algorithm
Estimate the running time T(n):
T(n) ≈ cop C(n)
QN 1: How much faster would this algorithm run on a machine that is ten times faster than the one we
have?
SOLN: Since T(n) ≈ cop C(n) and only cop becomes ten times smaller, the algorithm runs ten times faster.
QN 2: Assuming that C(n) = ½ n(n-1), how much longer will the algorithm run if we double its input size?
SOLN: C(n) = ½ n(n-1) = ½ n² - ½ n ≈ ½ n²
Therefore, T(2n)/T(n) ≈ (½ (2n)²)/(½ n²) = 4: it will run about four times longer.
Note: The efficiency analysis framework ignores multiplicative constants and concentrates
on the count’s order of growth to within a constant multiple for large-size inputs.
34. n | log n | n | n log n | n² | n³ | 2ⁿ | n!
1 | 0 | 1 | 0 | 1 | 1 | 2 | 1
2 | 1 | 2 | 2 | 4 | 8 | 4 | 2
4 | 2 | 4 | 8 | 16 | 64 | 16 | 24
8 | 3 | 8 | 24 | 64 | 512 | 256 | 40320
1 or any constant: running time of the program is constant
log n: logarithmic (binary search)
n: linear (linear search)
n log n: linearithmic (merge, quick and heap sort)
n²: quadratic
n³: cubic
2ⁿ: exponential (Tower of Hanoi)
36. Performance Analysis
There are many criteria upon which we can judge an algorithm.
For example :
1. Does it do what we want it to do? Results
2. Does it work correctly according to original specifications of the task? Quality
3. Is there documentation that describes how to use it and how it works? Documentation
4. Are procedures created in such a way that they perform logical sub-functions? Modularity
5. Is the code readable? Readability
These criteria are all important when it comes to writing software for large systems. There are
other criteria for judging algorithms that have a more direct relationship to performance:
their computing time and storage requirements.
37. Contd….
Space Complexity : The space complexity of an algorithm is the amount of memory it needs to
run to completion.
S(P) = c + Sp(instance characteristics), where
c = fixed space (instructions, simple variables, fixed-size aggregates), and
Sp = variable part that depends upon the problem’s instance characteristics
Time Complexity : The time complexity of an algorithm is the amount of computer time it needs
to run to completion.
T(P) = compile time + run time (execution time)
Compile time is constant (the same program may be executed many times without recompilation), hence the run time tp,
which depends on the instance characteristics, is what matters.
Performance Evaluation consists of :
1. A Priori Estimates (Performance / Asymptotic Analysis)
2. A Posteriori Testing or Performance Measurement
38. Performance Evaluation
A Priori Estimates: For any instruction in a program:
Total Computation Time = Time required to execute the instruction * Number of times the
instruction is executed in a program (i.e. frequency count).
Time required to execute the instruction depends upon:
Speed of processor (Clock frequency)
Instruction set of the machine language
Time required (processor cycles) to execute an instruction
Compiler used to translate high level instructions to m/c language
All above criteria differ from installation to installation. Therefore for priori estimates
frequency count of each statement is most important factor while analyzing an algorithm or
program.
Frequency count : It is the number of times an instruction is executed in the execution of
a program.
39. Order of growth:
Order of growth is how the time of execution depends on the length of the
input.
In the above example, we can clearly see that the time of execution depends
linearly on the length of the array.
Order of growth will help us to compute the running time with ease. We will
ignore the lower order terms, since the lower order terms are relatively
insignificant for large input. We use different notation to describe limiting
behavior of a function.
40. Space complexity:
a) Instruction space
b) Data space
c) Environment space
Constant space complexity: the algorithm requires a fixed
amount of space for all input
values.
Eg: int product (int a)
{
return a*a;
}
S(n) = O(1)
Linear space complexity: the space needed grows with the input.
For the sum example below:
size variable n = 1 word
array values A = n words
loop variable i = 1 word
sum variable = 1 word
S(n) = n + 3, which is O(n)
Eg: int sum (int A[], int n)
{
int sum = 0, i;
for (i = 0; i < n; i++)
{
sum = sum + A[i];
}
return sum;
}
41. Time complexity:
Constant time complexity: the algorithm requires a fixed amount of
time for all input values.
Eg: int sum (int a, int b)
{
return a + b;
}
Linear time complexity: the time needed grows with the input.
Step counts for the sum example below:
comments = 0 steps
assignment stmt (sum = 0) = 1 step
loop stmt executes n + 1 times
body of the loop = n steps
return = 1 step
Eg: int sum (int A[], int n)
{
int sum = 0, i;
for (i = 0; i < n; i++)
{
sum = sum + A[i];
}
return sum;
}
Total f(n) = 1 + (n+1) + n + 1 = 2n + 3, which is
O(n)
42. Examples
Eg:1 Every statement takes one unit of time
Algorithm swap(a,b)
{
temp = a; ------ 1
a=b; ------------ 1
b=temp; -------- 1
}
Time: f(n) = 3, a constant, so O(1)
Space: s(n) = 3, a constant, so O(1)
43. Analysis Framework Overview
Efficiencies of the Algorithm
There are three cases
i. Best case
ii. Worst case
iii. Average case
Suppose we have 5 numbers (n = 5): 25, 31, 42, 71, 105, and we have to find an element in the list.
Best case efficiency
Suppose we have to find 25 in the list 25, 31, 42, 71, 105
k = 25; 25 is present at the first position
Since only a single comparison is made to find the element, this is the best-case efficiency
CBest (n)=1
Worst case efficiency
If we want to search for an element which is present at the end of the list, or not present at all, such cases are called the worst-case
efficiency.
Suppose we have to find 105 in the list 25, 31, 42, 71, 105
k = 105; therefore we have to make 5 (= n) comparisons to find the element
CWorst (n)=n
And if we have to find 110 k=110
Since the element is not in the list even then we have to make 5 (=n) comparisons to search the element
CWorst (n)=n
44. Contd….
Average case efficiency
Suppose the element is not at the first or the last position
but somewhere in the middle of the list
We know that probability of a successful search = p where 0≤p≤1.
And probability of unsuccessful search = 1-p
Let the element we are searching for is present at position 'i' in the list. Therefore probability of
the element to be found is given by p/n.
Therefore CAvg (n)
=[1*p/n + 2*p/n + ... + i*p/n + ... + n*p/n] + n(1-p)
= p/n[1+2+...+i+...+n] + n(1-p)
=p/n[n*(n+1)/2] + n(1-p)
=p[(n+1)/2] + n(1-p)
45. Contd…
case 1.
If element is available therefore p=1 (for successful search)
Now substituting p=1 in above eqn
CAvg (n)
= 1[(n+1)/2] + n(1-1)
= (n+1)/2
case 2.
If element is unavailable therefore p=0 (for unsuccessful search)
Now substituting p=0 in above eqn
CAvg (n)
= 0[(n+1)/2] + n(1-0)
= n
Therefore on average half of list will be searched to find the element in the list
46. Space complexity
Total amount of computer memory required by an algorithm to
complete its execution is called the space complexity of that algorithm.
The space required by an algorithm is the sum of the following components:
A fixed part that is independent of the input and output. This includes
memory space for code, variables, constants and so on.
A variable part that depends on the input, output and recursion stack.
(We call these parameters the instance characteristics.)
Space requirement S(P) of an algorithm P:
S(P) = c + Sp
where c is a constant that depends on the fixed part, and Sp is the instance
characteristics
48. 1. Time Complexity
The time T(p) taken by a program P is the sum of the compile time and the
runtime(execution time)
The compile time does not depend on the instance characteristics. Also, we may
assume that a compiled program will be run several times without
recompilation. The run time is denoted by tp (instance characteristics).
Usually, the execution time or run-time of the program is referred as its time
complexity denoted by tp (instance characteristics). This is the sum of the time
taken to execute all instructions in the program.
49. We can estimate that an invocation
of Sum() executes a total
of 2n + 3 steps.
52. Asymptotic Notations
The notation, which we use to describe the asymptotic running time of an algorithm are
defined in terms of functions, whose domains are the set of natural numbers and real
numbers.
The natural number set is denoted as: N = {0, 1, 2, …}
The positive integer set is denoted as: N+ = {1, 2, 3, …}
Real number set is denoted as R.
Positive real number set is denoted as R+.
Non-negative real number set is denoted as R*.
Such notations are convenient for describing the worst case running time function T(n),
which is usually defined only on integer input sizes.
53. The different types of notations are:
Big oh (O) notation
Small oh (o) notation
Theta (θ) notation
Omega (Ω) notation
Small omega (ω) notation
1. Big Oh (O) Notation:
The upper bound for the function is provided by Big Oh (O) notation. We can say the running time of an
algorithm is O(g(n)) if, whenever the input size equals or exceeds some threshold n0, its running time
is bounded above by some positive constant c times g(n).
54. 1. Let f(n) and g(n) be two functions from the set of natural
numbers to the set of non-negative real numbers; f(n) is
said to be O(g(n)).
2. That is: f(n) = O(g(n)) iff there exist a natural number
n0 and a positive constant c > 0
such that f(n) ≤ c(g(n)) for all n ≥ n0.
[Figure: running time vs. input size n; c(g(n)) lies above f(n) beyond n0]
Examples:
1. f(n) = 2n² + 7n – 10, n = 5, c = 3.
=> f(n) = O(g(n)), where g(n) = n²
f(n) ≤ c(g(n)) => (2n² + 7n – 10) ≤ (3 × n²)
2 × 25 + 7 × 5 – 10 ≤ 3 × 25
=> 50 + 35 – 10 ≤ 75 => 75 ≤ 75.
So it is in O(g(n)) = O(n²).
2. f(n) = 2n² + 7n – 10, n = 4, c = 3,
g(n) = n² => f(n) ≤ c(g(n))
2n² + 7n – 10 ≤ 3 × n²
=> 2 × 16 + 7 × 4 – 10 ≤ 3 × 16
=> 32 + 28 – 10 ≤ 48 => 50 ≤ 48, which is false.
So the inequality fails at n = 4; the threshold must be n0 = 5 (f(n) is still O(n²)).
55. Ω(g(n)) = { f(n): there exist positive
constants c and n0 such that 0 ≤ cg(n) ≤ f(n)
for all n ≥ n0 }
f(n) ≥ c·g(n), i.e., c·g(n) ≤ f(n)
56. Theta Notation (Θ-notation)
Theta notation encloses the function from above and below. Since it represents the upper and the lower bound of the
running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.
c1·g(n) ≤ f(n) and f(n) ≤ c2·g(n), i.e., c1·g(n) ≤ f(n) ≤ c2·g(n)
57. 1 < log n < √n < n < n log n < n² < n³ < …… < 2ⁿ < 3ⁿ < …… < nⁿ
60. Asymptotic Notations
Example 1: f(n) = 2n + 3 is O(n)
Big oh notation: f(n) ≤ c · g(n)
2n + 3 ≤ c · n, with c > 0, n ≥ 1
Try c = 5, n = 1: 2(1) + 3 ≤ 5 · 1, i.e., 5 ≤ 5 ✓
Instead, put n on the RHS:
2n + 3 ≤ 2n + 3n for all n ≥ 1
2n + 3 ≤ 5n, where c = 5, g(n) = n
For every n ≥ 1 the condition is satisfied, so
f(n) = O(n)
It can also be re-written as
2n + 3 ≤ 2n² + 3n², with c = 5, g(n) = n², so f(n) = O(n²)
But we choose the tightest (nearest) bound;
therefore, f(n) = O(n)
61. Contd…
Big omega notation: c·g(n) ≤ f(n) for all n ≥ n0,
i.e., f(n) ≥ c·g(n)
f(n) = 2n + 3: 2n + 3 ≥ 1·n, with c = 1, g(n) = n
Therefore, f(n) =Ω(n)
Instead, 2n + 3 >= 1 * log n c = 1, g(n) = log n
Therefore, f(n) = Ω( log n)
But we choose, nearest one, so consider
f(n) = Ω(n)
62. Contd….
Big Theta notation, c1 . g(n) <= f(n) <= c2 . g(n)
f(n) = 2n +3, c1 = 1, c2 = 5, n=1
1 · n ≤ 2n + 3 ≤ 5 · n
At n = 1: 1 ≤ 5 ≤ 5 ✓, so f(n) = θ(n)
It is not possible to say θ(n²) or θ(log n),
because θ is a tight, two-sided bound;
therefore, consider the nearest growth rate, which gives
f(n) = θ(n)
63. Asymptotic notations for Linear search algorithm
Why 3 different analysis?
Big O - worst case
Big Ω - Best case
Big θ – Average case
Linear search algorithm:
Best case: The key value is at first position, then its at constant time,
Ω (1)
Worst case: The key value is at last position, then algorithm has to run for ‘n’ times: O(n)
Average case: on average about half the list is examined, θ(n/2); since constant
multiples are dropped in asymptotic analysis,
this is θ(n)
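The linear search behind these three cases can be sketched in C as:

```c
#include <assert.h>

/* Linear search: scan left to right; returns the index of key, or -1.
   Best case: 1 comparison (key at the front).
   Worst case: n comparisons (key at the end, or absent). */
int linear_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}
```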
64. Asymptotic notations for Binary search algorithm
Sorted Array of 10 elements: 2, 5, 8, 12, 16,
23, 38, 56, 72, 91
Let us say we want to search for 23.
65. Calculating the time complexity: It takes k iterations; here it terminates after 3 iterations, so k = 3. At
each iteration, the array is divided in half.
At iteration 1, length of array = n
At iteration 2, length of array = n/2
At iteration 3, length of array = (n/2)/2 = n/2²
Therefore, after iteration k, length of array = n/2^k. After k divisions, the length of the array becomes 1:
n/2^k = 1, i.e., n = 2^k
Applying log2 on both sides: log2(n) = log2(2^k)
log2(n) = k · log2(2)
Since log2(2) = 1,
k = log2(n)
Hence the time complexity of binary search is log2(n)
Worst case performance is: O(log n)
Best case performance is: Ω(1)
Average case performance is: θ(log n)
Worst case space complexity: O(1)
66. Differences between Linear search vs
Binary search:
Input data needs to be sorted in Binary Search and not in Linear Search
Linear search uses sequential access, whereas binary search requires random access to the
data.
Time complexity of linear search -O(n) , Binary search has time complexity O(log
n).
Linear search performs equality comparisons and Binary search performs ordering
comparisons
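A C sketch of binary search on the sorted array from the earlier slide (the overflow-safe mid computation is a common refinement, not from the slide):

```c
#include <assert.h>

/* Binary search on a sorted array: each iteration halves [lo, hi],
   so at most about log2(n) + 1 iterations. Returns index of key, or -1. */
int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* avoids overflow of lo + hi */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;              /* key is in the right half */
        else
            hi = mid - 1;              /* key is in the left half  */
    }
    return -1;
}
```

On the array 2, 5, 8, 12, 16, 23, 38, 56, 72, 91, searching for 23 finds index 5 after three iterations, matching the earlier trace.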
67. Example: Prove that f(n) = 3n + 2 , f(n) = θ(n)
Soln: c1. g(n)<= f(n) <= c2 . g(n)
Upper bound:
f(n) <= c2 . g(n) for all n>= n0
3n + 2 <= c2 . g(n)
For n ≥ 2: 3n + 2 ≤ 3n + n, i.e.,
3n + 2 ≤ 4n
Check n = 1: 5 ≤ 4, not satisfied
n = 2: 8 ≤ 8, satisfied
n = 3: 11 ≤ 12, satisfied
Therefore, c2 = 4, n0 = 2
f(n) <= c2 . g(n) f(n) = O(n)
Hence proved
Lower bound: c1. g(n)<= f(n)
f(n) >= c1. g(n) for all n>= n0
Method 1: 3n + 2 ≥ 3n for n ≥ 1
Note that the inequality also holds for n ≥ 0; however, the
definition of Ω requires n0 > 0.
At n = 1: 5 ≥ 3. Therefore c1 = 3, n0 = 1.
Method 2 (for the upper bound): 3n + 2 ≤ c·n
⇒ 2 ≤ n(c - 3) ⇒ n ≥ 2/(c - 3)
With c = 4 this gives n0 = 2.
Theta notation: c1 · g(n) ≤ f(n) ≤ c2 · g(n)
3n ≤ f(n) ≤ 4n ==> 3n ≤ 3n + 2 ≤ 4n
with c1 = 3, c2 = 4 and n0 = 2
f(n) = θ(n)
68. Ex 3: Prove that f(n) = 2n² + 5n + 6 is O(n²)
Soln: f(n) ≤ c · g(n)
2n² + 5n + 6 ≤ c·n²
2n² + 5n² + 6n² ≤ c·n² (for n ≥ 1)
13n² ≤ c·n², where c = 13, n0 = 1
Hence proved: f(n) = O(n²)
Ex 4: Prove that f(n) = 1000n² + 1000n is O(n²)
Ex 5: Prove that f(n) = 5n² + 3n + 20 is O(n²)
Ex 6: Prove that 100n + 5 = O(n)
Soln: for all n>= 1
100n + 5 <= 100n + 5n
100n + 5 <= 105n
Therefore, c = 105, n0 = 1
Ex: 7 Prove that 100n + 5 = Ω (n2 )
Soln: f(n) >= c . g(n)
We would need c·n² ≤ 100n + 5 for all n ≥ n0.
This cannot be proved: for any c > 0,
c·n² > 100n + 5 for all sufficiently large n, so no
combination of c and n0 works.
Ex 8: Prove that n² = O(n²)
Ex 9: Prove that n³ ≠ O(n²)
Ex 10: Prove that 2^(2n) ≠ O(2ⁿ)
69. Prove that ½ n(n-1) = θ(n²)
Soln: we need c1·g(n) ≤ f(n) ≤ c2·g(n)
Upper bound: f(n) = ½ n² - ½ n ≤ ½ n² for all n ≥ 0,
so c2 = ½ and f(n) = O(n²)
Lower bound: for n ≥ 2 we have ½ n ≤ ½ n · ½ n = ¼ n², so
½ n² - ½ n ≥ ½ n² - ¼ n² = ¼ n², giving c1 = ¼
(check n = 2: LHS = 1, RHS = 1)
Therefore, f(n) = Ω(n²)
Combining: ¼ n² ≤ ½ n² - ½ n ≤ ½ n² for all n ≥ 2
Hence c1 = ¼, c2 = ½, n0 = 2
Hence proved:
½ n(n-1) = θ(n²)
72. Little o Notations
There are some other notations besides the Big-Oh, Big-Omega and Big-Theta notations. The little o notation is one of
them.
Little o notation is used to describe an upper bound that cannot be tight; in other words, a loose upper bound on f(n).
Let f(n) and g(n) be functions mapping positive integers to positive real numbers. We say that f(n) is o(g(n)) if for every real
positive constant c there exists an integer constant n0 ≥ 1 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0.
Mathematical relation of little o notation
Using limits, f(n) = o(g(n)) means:
lim (n→∞) f(n)/g(n) = 0
Example on little o asymptotic notation
If f(n) = n² and g(n) = n³ then check whether f(n) = o(g(n)) or not.
lim (n→∞) n²/n³ = lim (n→∞) 1/n = 0
The result is 0, and it satisfies the relation above. So we can say that f(n) = o(g(n)).
73. Using Limits for Comparing Orders of Growth
If lim (n→∞) t(n)/g(n) is 0, t(n) grows slower than g(n); if it is a finite nonzero constant, the two have the same order of growth; if it is ∞, t(n) grows faster.
This limit test is used below to compare the orders of growth of two functions.
74. Compare the orders of growth of log2 n and √n.
L'Hôpital's rule: lim (n→∞) t(n)/g(n) = lim (n→∞) t′(n)/g′(n)
lim (n→∞) (log2 n)/√n = lim (n→∞) (1/(n ln 2)) / (1/(2√n))
= lim (n→∞) (2√n)/(n ln 2) = lim (n→∞) 2/(√n ln 2) = 0
Hence log2 n grows slower than √n.
Example 2
L'Hospital's Rule tells us that if we have an indeterminate form 0/0 or ∞/∞
all we need to do is differentiate the numerator and differentiate the
denominator and then take the limit.
77. Mathematical Analysis of Non-recursive Algorithms
1. Decide on a parameter indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed depends only on
the size of an input.
If it also depends on some additional property, the worst- case, average-case, and, if
necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed-form
formula for the count or, at the very least, establish its order of growth.
78. Example:1 Finding the value of the largest element in a list of
n numbers
List: 6 4 5 12
maxval starts at 6 and finishes at 12
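A C sketch of the scan (the name max_element is ours); the comparison a[i] > maxval is the basic operation and runs exactly n - 1 times:

```c
#include <assert.h>

/* MaxElement: one left-to-right pass; the comparison executes exactly
   n - 1 times regardless of the input values. Assumes n >= 1. */
int max_element(const int a[], int n) {
    int maxval = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > maxval)
            maxval = a[i];
    return maxval;
}
```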
85. Let cop be the execution time of an algorithm’s basic operation on a particular computer
Let C(n) be the number of times this operation needs to be executed by the algorithm
Estimate the running time T(n):
T(n) ≈ cop C(n)
QN: How do we estimate the running time of this algorithm on a particular machine?
SOLN: If we want to estimate the running time of an algorithm on a particular machine,
we can do it by the product
T(n) ≈ cm M(n) ≈ cm n³
where cm is the time of one multiplication on the machine.
A more accurate estimate also includes ca, the time of one addition.
“Estimates differ only by their multiplicative constants, not by their order of growth”
86. Example 4: Find the number of binary digits in the binary
representation of a positive decimal integer n
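A recursive C sketch for this example, assuming the standard recurrence form A(n) = A(⌊n/2⌋) + 1 with A(1) = 1 when each recursive call contributes one digit:

```c
#include <assert.h>

/* Number of binary digits of n (n >= 1): each halving strips one bit,
   so A(n) = A(floor(n/2)) + 1, A(1) = 1, which solves to
   floor(log2 n) + 1. */
int bin_digits(int n) {
    if (n == 1)
        return 1;              /* 1 is "1" in binary: one digit */
    return bin_digits(n / 2) + 1;
}
```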
88. Mathematical Analysis of Recursive Algorithms
1. Decide on a parameter indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed can vary on
different inputs of the same size;
If it can, the worst-case, average-case, and best-case efficiencies must be
investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the
number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.
Backward substitution concept
89. Harivinod N 18CS42-Design and Analysis of Algorithms Feb-May 2020
71
If n=0, M(0) = 1 for one
check, base condition for
recurrence
90. • We can use the backward substitution method to solve
this
For n = 0, no multiplication is performed, so the base case is
M(0) = 0.
Therefore, M(n) = M(n-1) + 1 = … = M(0) + n = 0 + n = n
91. Tower of Hanoi puzzle
• In this puzzle, there are n disks of different sizes that can slide
onto any of three pegs.
• Initially, all the disks are on the first peg in order of size, the
largest on the bottom and the smallest on top.
• The goal is to move all the disks to the third peg, using the
second one as an auxiliary, if necessary.
• We can move only one disk at a time, and it is forbidden to place
a larger disk on top of a smaller one.
92. Tower of Hanoi puzzle
• The problem has an elegant recursive solution
• To move n>1 disks from peg 1 to peg 3 (with peg 2 as
auxiliary),
– we first move recursively n-1 disks from peg 1 to peg 2 (with
peg 3 as auxiliary),
– then move the largest disk directly from peg 1 to peg 3, and,
– finally, move recursively n-1 disks from peg 2 to peg 3 (using
peg 1 as auxiliary).
• If n = 1, we move the single disk directly from the source
peg to the destination peg.
94. Algorithm TowerOfHanoi(n, source, dest, aux)
If n == 1, then
move disk from source to dest
else
TowerOfHanoi (n - 1, source, aux, dest)
move disk from source to dest
TowerOfHanoi (n - 1, aux, dest, source)
End if
95. Recurrence relation for total number of
moves
The number of moves M(n) depends only on n. We have the following
recurrence relation for the number of moves M(n):
M(n) = 2M(n-1) + 1 for n > 1, with M(1) = 1
96. • We solve this recurrence by the same method of
backward substitutions:
M(n) = 2M(n-1) + 1
= 2²M(n-2) + 2 + 1
= 2³M(n-3) + 2² + 2 + 1
The pattern of the first three sums suggests that the next one will be
2⁴M(n-4) + 2³ + 2² + 2 + 1, and
generally, after i substitutions, we get
M(n) = 2^i M(n-i) + 2^(i-1) + … + 2 + 1 = 2^i M(n-i) + 2^i - 1
97. • Since the initial condition is specified for n = 1, which
is achieved for i = n - 1,
we get the following formula for the solution to the
recurrence:
M(n) = 2^(n-1) M(1) + 2^(n-1) - 1 = 2^(n-1) + 2^(n-1) - 1 = 2^n - 1
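The closed form 2ⁿ - 1 can be checked directly against the recurrence; a small C sketch:

```c
#include <assert.h>

/* Move count computed directly from the recurrence
   M(n) = 2*M(n-1) + 1, M(1) = 1; the closed form is 2^n - 1. */
long moves(int n) {
    if (n == 1)
        return 1;
    return 2 * moves(n - 1) + 1;
}
```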
98. Example 3
• Basic operation is addition
• A(1) = 0 is the initial condition; by the smoothness rule we may assume n = 2^k
• The recurrence relation can be written as
A(n) = A(n/2) + 1 for n > 1
• Assuming n = 2^k: A(2^k) = A(2^(k-1)) + 1 = … = k, so A(n) = log2 n
99. Recurrence relation for basic
operation