1. Applied Algorithms
• Course Objectives
• The primary objective of this subject is to prepare
postgraduate students to solve real-life problems and to
develop an ability to design and analyze algorithms,
which will also help them in their lifelong research work.
• To study how to formulate the problem and how to
apply problem solving skills to solve the problem.
• To study the counting and probability concepts which
are useful in the analysis of an algorithm.
• To study fundamental computing algorithms,
algorithmic strategies, approximation algorithms,
geometric algorithms, linear programming and the
asymptotic analysis of these algorithms.
2. Applied Algorithms
• Teaching Scheme : 5 Hours / Week
• Examination Scheme: Int. Assessment : 50 Theory : 50 Marks
• Teaching Plan :
Unit No. | Contents | Theory Lectures
I | Analysis of Algorithms | 10
II | Fundamental Computing Algorithms | 10
III | Approximation Algorithms | 10
IV | Geometric Algorithms | 10
V | Linear Programming | 10
VI | Probability Based Analysis | 10
— | Contents Beyond Syllabus | 2
Total Lecture Hours in a semester (13+ weeks): 62
3. Applied Algorithms
Text Books & Reference Books:
Book No. | Title of the Book | Authors | Publisher
Reference Books
R-1 | Probability & Statistics with Reliability, Queuing, and Computer Science Applications | Kishore S. Trivedi | PHI
R-2 | Introduction to Algorithms | Cormen et al. | PHI
R-3 | Fundamentals of Algorithmics | Brassard, Bratley | PHI
R-4 | Fundamentals of Computer Algorithms | Horowitz, Sahni | Galgotia
R-5 | Computer Algorithms : Introduction to Design and Analysis (3rd Edition) | S. Baase, A. Van Gelder | Addison Wesley
R-6 | The Design & Analysis of Computer Algorithms | Aho, Hopcroft, Ullman | Addison Wesley
R-7 | Combinatorial Optimization | C. Papadimitriou and K. Steiglitz | PHI
4. Applied Algorithms
Expectations from students:
• It is to be noted that there will be 20% weightage for
attendance while assessing the Term Work. Hence 100%
attendance is expected from all the students.
• There will be total six home assignments for this subject,
one each at the end of every unit of the course. Term work
submissions will not be accepted if all six assignments are
not completed with understanding.
• You may contact me with any difficulties in the subject
during visiting hours. Maximum interaction is expected.
5. Applied Algorithms
Expectations from students:… contd:
• There will be Unit Test I of 30 marks on the first two
units taught and Unit Test II of 30 marks on the next two
units of the syllabus.
• There will be 20% weightage for Unit Tests performance
while assessing the Term Work. Hence it is important to
clear Unit Tests with maximum score.
• This is a very important subject in Computer Engineering,
and hence it is necessary to understand each and every
concept clearly.
6. Applied Algorithms
Unit I : Analysis of Algorithms ( Refer T-1 )
• Review of Algorithmic Strategies
• Asymptotic Analysis : upper and lower
complexity bounds, identifying differences
among best, average and worst case behaviors,
Big O, Little o, Omega Ω, Little ω and Theta θ
notations
• Standard Complexity Classes
• Empirical Measurements of Performance
• Time and Space Tradeoffs in Algorithms
• Analyzing Recursive Algorithms using
Recurrence Relations
7. Unit I : Analysis of Algorithms
Asymptotic analysis :
Introduction :
• Computer Science
• Data Structures
• Algorithm – Definition and Characteristics
• Importance of Data Structures and Algorithms in
problem solving
• Design of an algorithm – Brute force approach, use of
algorithmic strategies, randomized algorithms,
approximation algorithms
• Analysis of an algorithm – Why to analyze? How to
analyze?
• Proving the algorithm – Contradiction, M.I. etc.
8. Unit I : Analysis of Algorithms
What is an Algorithm?
• The steps that can be used by a Computer for the solution
of a problem. It is different from words like Process,
Technique or Method
• An algorithm is a finite set of instructions that if followed,
accomplishes a particular task. In addition all algorithms
must satisfy following criteria (characteristics)
1. Input : Zero or more quantities are externally supplied
2. Output : At least one quantity is produced
3. Definiteness : Each instruction is clear &
unambiguous
4. Finiteness : For all input cases it terminates after
finite number of steps
5. Effectiveness : Every instruction must be very basic
so that it can be carried out, in principle, by a person
using only pencil and paper (Feasible Instructions)
9. Unit I : Analysis of Algorithms
Four Research Areas of study in Algorithms :
1. Design of Algorithms : Algorithmic strategies ( D&C,
Greedy, Dynamic Programming, Backtracking, Branch &
Bound, Approximation Algorithms, Randomized
algorithms, Parallel Algorithms etc).
2. Validation of Algorithms : Once an algorithm is
devised, it is necessary to show that it computes the
correct answer for all possible legal inputs. Proof
techniques (Contradiction, Mathematical Induction etc.)
3. Analysis of Algorithms : Computing Time and Storage
requirements for an algorithm
4. Testing of Programs : Software Testing, Test Cases
Design and Debugging
10. Unit I : Analysis of Algorithms
Algorithm (or program) Specification :
Specification is the process of translating a solution of a
problem into an algorithm. This can be done in three ways:
1. Steps or instructions can be written by using a
Natural Language like English. In this case we have
to ensure that the resulting instructions are definite.
2. Graphic representations called Flowcharts can also be
used for specifications, but they work well only if the
algorithm is small and simple.
3. Use of Pseudo Code is the most preferred option to
specify an algorithm. It is represented by using a
Natural Language (English) and C / C++.
11. Unit I : Analysis of Algorithms
Algorithm (or Program) Specification : Example
1. Selection sort algorithm (Pseudo code) :
for (i = 1; i < n ; i++)
{ examine a[ i ] to a[ n ] for
the smallest element a[ j ];
interchange a[ i ] and a[ j ];
}
2. // C program to sort the array a[1:n] into non-decreasing order.
typedef int type;
void SelectionSort(type a[], int n)
{ int i, j, k; type t;
for (i = 1; i < n; i++)
{ j = i;
for (k = i+1; k <= n; k++)
if (a[k] < a[j])
j = k;
t = a[i]; a[i] = a[j]; a[j] = t;
}
}
12. Unit I : Analysis of Algorithms
Necessary Mathematical Foundation
Summation Formulas :
When an algorithm contains an iterative control construct such
as while or for loop, we can express its running time as the sum
of the times spent on each execution of the body of the loop.
For example in insertion sort jth iteration takes time
proportional to j in the worst case. By adding up the time spent
on each iteration, we obtain the summation (or series)
Σ j = θ(n2)
2≤ j ≤ n
Thus on evaluation of this summation, we get a bound of
θ(n2) on the worst case running time of the insertion sort
algorithm.
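Evaluating this summation directly with the arithmetic-series formula:

```latex
\sum_{j=2}^{n} j \;=\; \frac{n(n+1)}{2} - 1 \;=\; \Theta(n^2).
```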
13. Unit I : Analysis of Algorithms
Summation Formulas and Properties :
Linearity :
Σ (cak + bk) = c Σ ak + Σ bk
1≤ k ≤ n 1≤ k ≤ n 1≤ k ≤ n
Arithmetic Series :
Σ k = n(n+1) / 2
1≤ k ≤ n
Sums of squares and cubes:
Σ k2 = n(n+1)(2n + 1) / 6
1≤ k ≤ n
Σ k3 = (n(n+1) / 2)2 = n2 (n+1)2 / 4
1≤ k ≤ n
14. Unit I : Analysis of Algorithms
Summation Formulas and Properties :
Divergent & Convergent Series:
Given a sequence a1, a2, … , an of numbers, where each ak is a
nonnegative number, we can write the finite sum a1 + a2 + … + an
as
Σ ak If n = 0 the value of the summation is
1≤ k ≤ n defined to be 0
Given an infinite sequence a1, a2, … of numbers, we can write
the infinite sum a1 + a2 … as
Σ aj = Lim Σ aj
1≤ j ≤ ∞ n → ∞ 1≤ j ≤ n
If the limit does not exist, the series diverges, otherwise it
converges
15. Unit I : Analysis of Algorithms
Summation Formulas and Properties :
Geometric or Exponential Series :
Σ xk = ( xn+1 - 1) / (x – 1) for all x ≠ 1
0≤ k ≤ n
When the summation is infinite and | x | < 1
Σ xk = Lim ( xn+1 - 1) / (x – 1) = 1 / ( 1 – x)
0≤ k ≤ ∞ n → ∞
Harmonic Series :
For a positive integer n the nth harmonic number is :
Hn = 1 + 1/2 + 1/3 + 1/4 + … + 1/n
= Σ 1/k = ln n + O(1)
1≤ k ≤ n
16. Unit I : Analysis of Algorithms
Proof Techniques : Contradiction :
This technique, also known as Indirect Proof,
consists of demonstrating the truth of a statement
by proving that its negation yields a contradiction.
Example-1: S = “There are infinitely many prime
numbers” .
Example-2: S = “There exist two irrational numbers x
and y such that xy is rational”.
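For Example-1, a sketch of the classical indirect argument: assume there are only finitely many primes p1, … , pk and consider

```latex
N = p_1 p_2 \cdots p_k + 1 .
```

Each pi leaves remainder 1 when dividing N, so no pi divides N; hence N has a prime factor outside the list, contradicting the assumption.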
17. Unit I : Analysis of Algorithms
Proof Techniques : Mathematical Induction:
• M.I. is very powerful technique in mathematics
• For a given statement involving a natural number n,
if we can show that:
1. The statement is true for n = n0 and
2. The statement is true for n = k + 1, assuming
that the statement is true for n = k (k ≥ n0)
• Then we can conclude that the statement is true for
all natural numbers n ≥ n0
• Step 1 is referred as the Basis of Induction, step 2 is
referred as Induction Step and the assumption that
the statement is true for n = k in step 2 is referred as
Induction Hypothesis
18. Unit I : Analysis of Algorithms
Strong Induction :
In this case the assumption is that the proposition is true for
all values from the base case up to n, and we have to prove
that it is also true for n+1. For example, the nth Fibonacci
number in Fibonacci sequence can be obtained using
following formula:
fn = (1 / √ 5)(Øn – (-Ø)-n),
where Ø = (1+ √ 5)/2.
Consider the definition of Fibonacci sequence:
f0 = 0, f1 = 1 and fn = fn-1 + fn-2 for n ≥ 2,
then we can prove by strong M.I. that
fn = (1 / √ 5)(Øn – (-Ø)-n),
where Ø = (1+ √ 5)/2.
19. Unit I : Analysis of Algorithms
Exercises:
1. Show that
12 + 22 + … n2 = n (n + 1)(2n+1) / 6 for n ≥ 1
2. Prove that for any positive integer n, the number
n5 - n is divisible by 5
3. Prove by M.I. that for all n ≥1,
1.2 + 2.3 + … + n.(n+1) = n(n + 1)(n + 2) / 3
4. Prove by M.I. : where n is non-negative number:
3 + 3.5 + 3.52 + … + 3. 5n = 3.(5n+1 - 1)/4
5. Consider the definition of Fibonacci sequence:
f0 = 0, f1 = 1 and fn = fn-1 + fn-2 for n ≥ 2, prove by M.I.
that fn = (1 / √ 5)(Øn – (-Ø)-n), where Ø = (1+ √ 5)/2
20. Unit I : Analysis of Algorithms
Performance Analysis of an Algorithm :
There are many criteria upon which we can judge an algorithm.
For example :
1. Does it do what we want it to do? Results
2. Does it work correctly according to original specifications
of the task? Quality
3. Is there documentation that describes how to use it and
how it works? Documentation
4. Are procedures created in such a way that they perform
logical sub-functions? Modularity
5. Is the code readable? Readability
These criteria are all important when it comes to writing software
for large systems.
*** There are other criteria for judging algorithms that have a
more direct relationship to performance. These have to do with
their Computing Time and Storage Requirements. ***
21. Unit I : Analysis of Algorithms
Performance Analysis of an Algorithm – contd. :
Space Complexity : The space complexity of an algorithm is the amount
of memory it needs to run to completion.
S(P) = c + Sp (instance characteristics) where,
c = space for (instructions, simple variables, fixed size aggregates)
Sp = variable part depends upon problem instance characteristics
Time Complexity : The time complexity of an algorithm is the amount
of computer time it needs to run to completion.
T(P) = compile time + run time (execution time)
Compile time does not depend on instance characteristics, and the same
compiled program may be executed many times; hence the run time of the
program (tp), which depends upon instance characteristics, is important
Performance Evaluation consists of :
1. A Priori Estimates, or Performance (Asymptotic) Analysis
2. A Posteriori Testing, or Performance Measurement
22. Unit I : Analysis of Algorithms
Performance Evaluation : Asymptotic analysis
A Priori Estimates: For any instruction in a program:
Total Computation Time = Time required to execute the instruction *
Number of times the instruction is executed in a
program (i.e. frequency count).
Time required to execute the instruction depends upon:
- Speed of processor (Clock frequency)
- Instruction set of the machine language
- Time required (processor cycles) to execute an instruction
- Compiler used to translate high level instructions to m/c language
All of the above criteria differ from installation to installation. Therefore,
for a priori estimates, the frequency count of each statement is the most
important factor while analyzing an algorithm or program.
Frequency count : It is the number of times an instruction is executed
in the execution of a program. Example
23. Unit I : Analysis of Algorithms
Performance Evaluation : Asymptotic analysis
A Priori Estimates: Example
Consider the following construct of a program:
{ for (i = 1; i ≤ n; i++) …. 1
for ( j = 1; j ≤ i; j++) …. 2
for (k = 1; k ≤ j; k++) …. 3
x = x+1; } …. 4
Statement # 1 : ( Σ 1 ) + 1 = n + 1
                1≤ i ≤ n
Statement # 2 : Σ [( Σ 1 ) + 1] = [n(n + 1)/2] + n
                1≤ i ≤ n  1≤ j ≤ i
Statement # 3 : Σ [ Σ (( Σ 1 ) + 1)] = n(n + 1)(n + 5)/6
                1≤ i ≤ n  1≤ j ≤ i  1≤ k ≤ j
Statement # 4 : Σ [ Σ ( Σ 1 )] = n(n + 1)(n + 2)/6
                1≤ i ≤ n  1≤ j ≤ i  1≤ k ≤ j
24. Unit I : Analysis of Algorithms
Performance Evaluation : Asymptotic analysis
A Priori Estimates: Example contd…
Frequency Count of each statement:
Statement # 1 : n + 1
Statement # 2 : n2 /2 + 3n/2
Statement # 3 : n3/6 + n2 + 5n/6
Statement # 4 : n3/6 + n2/2+ n/3
Total Frequency Count = (n3 +6n2 +11n +3)/3 = O(n3)
25. Unit I : Analysis of Algorithms
Asymptotic analysis :
Asymptotic Notations: To enable us to make meaningful (but inexact)
statements about the time and space complexities of an algorithm ,
asymptotic notations (O, o, Ω, ω, θ) are used.
Big “Oh” : The function f(n) = O(g(n)), to be read as “ f of n is Big Oh
of g of n, if and only if there exist positive constants c and n0
such that, f(n) <= c*g(n) for all n>=n0, for example,
i. f(n) = 3n + 2 <= 4n for all n >= 2.
Here c = 4, g(n) = n and n0 = 2.
ii. f(n) = (n3 + 6n2 + 11n +3) <= 2n3 for all n >= 8.
Here c = 2, g(n) = n3 and n0 = 8.
Thus f(n) = O(g(n)) states that g(n) is an upper bound on the value of
f(n) for all n>=n0.
26. Unit I : Analysis of Algorithms
Asymptotic analysis :
Omega (Ω ): The function f(n) = Ω(g(n)), to be read as “ f of n
is omega of g of n, if and only if there exist positive
constants c and n0 such that, f(n) >= c*g(n) for all n>=n0,
for example,
i. f(n) = 3n + 2 >= 3n for all n >= 1.
Here c = 3, g(n) = n and n0 = 1.
ii. f(n) = (n3 + 6n2 + 11n +3) >= n3 for all n >= 1.
Here c = 1, g(n) = n3 and n0 = 1.
Thus f(n) = Ω(g(n)) states that g(n) is a lower bound on
the value of f(n) for all n>=n0.
27. Unit I : Analysis of Algorithms
Asymptotic analysis :
Theta (θ ): The function f(n) = θ(g(n)), to be read as “ f of n is
theta of g of n, if and only if there exist positive constants
c1 , c2 and n0 such that, c1g(n) <= f(n) <= c2*g(n) for all
n>=n0, for example,
i. f(n) = 3n + 2 then
3n <= 3n + 2 <= 4n for all n>=2
Here c1 = 3, c2 = 4, g(n) = n and n0 = 2.
ii. f(n) = (n3 + 6n2 + 11n +3) then
n3 <= (n3 + 6n2 + 11n +3) <= 2n3 for all n >= 8.
Here c1 = 1, c2 = 2, g(n) = n3 and n0 = 8.
Thus f(n) = θ(g(n)) states that g(n) is both upper & lower
bound on the value of f(n) for all n>=n0.
28. Unit I : Analysis of Algorithms
Asymptotic analysis :
Little “oh” : The function f(n) = o(g(n)), to be read as “f of n
is little oh of g of n”, if and only if Lim f(n) / g(n) = 0
n → ∞
i. f(n) = 3n + 2 = o(n2) or o(nlogn)
ii. f(n) = (n3 + 6n2 + 11n +3) = o(n4) or o(n3logn)
Little omega (ω) : The function f(n) = ω(g(n)), to be read as “f of n is little
omega of g of n”, if and only if Lim g(n) / f(n) = 0
n → ∞
i. f(n) = 3n + 2 = ω(1)
ii. f(n) = (n3 + 6n2 + 11n +3) = ω(n2)
29. Unit I : Analysis of Algorithms
Asymptotic analysis :
Nomenclature used to represent time and space complexity:
For a given size ‘n’ of a problem :
Complexity | Nomenclature | Value for n = 32
O(1) | Constant | 1
O(logn) | Logarithmic | 5
O(n) | Linear | 32
O(nlogn) | Linear Logarithmic | 160
O(n2) | Quadratic | 1,024
O(n3) | Cubic | 32,768
O(2n) | Exponential | 4,294,967,296
30. Unit I : Analysis of Algorithms
Performance analysis :
Performance Measurement : Empirical Measurement of
performance (A Posteriori Testing) :
Performance measurement is concerned with obtaining the space
and time requirements for a particular algorithm. Purpose of
posteriori testing may be to work out some empirical formula by
executing the program for different sizes of n, which can be used to
estimate the runtime for a given problem size, or the purpose may
be to compare the two algorithms for their runtime.
For example, for sequential search the runtime may be given by the
empirical formula t = 0.002 + 0.0003067n, where t is the time in
milliseconds and n is size of array.
Time and Space tradeoffs in algorithms :
In general, while solving a problem, for any algorithm, the
computation time required will be more when the space required
is less, and vice versa. Example
31. Unit I : Analysis of Algorithms
Best, Worst and Average case analysis of an algorithm:
Best Case analysis : Best case is that input to the algorithm
which takes minimum time for execution of it. Best case
analysis of an algorithm is the asymptotic analysis of an
algorithm for best case input.
Examples :
• Binary search algorithm : Best case is to search the
element positioned at the middle of the sorted array and
Asymptotic time (Time complexity) required is O(1)
• Insertion sort algorithm : Best case is sorted input in the
required order and Asymptotic time (Time complexity)
required is O(n)
32. Unit I : Analysis of Algorithms
Best, Worst and Average case analysis of an algorithm:
Worst Case analysis : Worst case is that input to the
algorithm which takes maximum time for execution of it.
Worst case analysis of an algorithm is the asymptotic
analysis of an algorithm for worst case input.
Examples :
• Binary search algorithm : Worst case is to search the last
element or the element which is absent in sorted array
and Asymptotic time (Time complexity) required is
O(logn)
• Insertion sort algorithm : Worst case is sorted input in the
reverse order and Asymptotic time (Time complexity)
required is O(n2)
33. Unit I : Analysis of Algorithms
Best, Worst and Average case analysis of an algorithm:
Average Case analysis : For average case analysis all
possible sequences of size ‘n’ are input to the algorithm and
average asymptotic time of the algorithm is computed.
Examples :
• Binary search algorithm : For average case analysis all
elements of sorted array of size ‘n’ are searched one by
one and total number of comparisons are computed.
Average computation time = Total time / n is represented
using asymptotic notation. Asymptotic time (Time
complexity) required is O(logn). Proof
• Example,
Position : 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Numbers : -15 -6 0 7 9 23 54 82 101 112 125 131 142 151
Comparisons : 3 4 2 4 3 4 1 4 3 4 2 4 3 4
34. Unit I : Analysis of Algorithms
Best, Worst and Average case analysis of an algorithm:
Examples :
• Insertion sort algorithm : All possible sequences of size ‘n’
are the input and average time is computed. The Asymptotic
time (Time complexity) required is O(n2). Proof
Total number of comparisons of barometric instruction
= Σci where ci = (i+1)/2
2<=i<=n
= Σ (i+1)/2
2<=i<=n
= (n-1) (n+4) / 4
= θ (n2)
35. Unit I : Analysis of Algorithms
Analyzing recursive algorithms using recurrence relations:
Recurrence Equations:
• In the analysis of algorithms, we come across recurrence
equations in the process of computing, and we need to
determine their time and space complexities. In recursive
procedures the size of the problem is reduced and the sub-
problems resemble the original problem (D&C). For
example, Binary search, Quick sort, Merge sort, Tower
of Hanoi etc. Once we formulate the recurrence equation,
we can solve it in terms of the size of the problem.
Generally we come across the following types of recurrences:
o Homogeneous recurrences
o Inhomogeneous recurrences
o Asymptotic recurrences
36. Unit I : Analysis of Algorithms
Analyzing recursive algorithms using recurrence relations:
Recurrence Equations:
• Recurrences can be solved by:
o Intelligent guesswork Example
o Characteristic equations
Solution by Characteristic equations :
• Homogeneous Recurrences :
a0tn + a1tn-1 + … + aktn-k = 0 Example
• Inhomogeneous recurrences :
a0tn + a1tn-1 + … + aktn-k = bnP(n)
where b is a constant and
P(n) is polynomial in n of degree d
Example
• Change of variables : Replace n by 2i Example
37. Unit I : Analysis of Algorithms
Standard Complexity Classes :
Tractable vs Intractable Problems :
• We can distinguish problems between two distinct
classes. Problems which can be solved by a polynomial
time algorithm and problems for which no polynomial
time algorithm is known.
• An algorithm for a given problem is said to be a
polynomial time algorithm if its worst case time
complexity is O(nk), where k is a fixed integer and n is
size of a problem. For example, Sequential search : O(n),
Binary search : O(logn), Insertion sort : O(n2), product of
two matrices : O(n3), Quick sort : O(nlogn) etc.
38. Unit I : Analysis of Algorithms
Standard Complexity Classes :
Tractable vs Intractable Problems contd… :
• The set of all problems that can be solved in polynomial
amount of time are called “Tractable Problems”. These
problems can be solved in a reasonable amount of time
for even very large amount of input data. Their worst
case time complexity is O(nk).
• The set of all problems that can not be solved in
polynomial amount of time are called “Intractable
Problems”. Their worst case time complexity is O(kn).
These problems require huge amount of time for even
modest input sizes. For example, 0-1 Knapsack problem :
O(2n), Traveling Salesperson Problem : O(n22n)
39. Unit I : Analysis of Algorithms
Standard Complexity Classes :
Deterministic vs Non-deterministic Algorithms :
• Deterministic Machines : Conventional Digital
machines are Deterministic in nature. Serialization of
resource access or sequential execution is the basic
concept used in these machines (Von Neumann
Architecture).
• Non-deterministic Machines : These are hypothetical
machines which can do jobs in a parallel fashion, i.e.
more than one job can be done in one unit of time.
40. Unit I : Analysis of Algorithms
Standard Complexity Classes :
Deterministic vs Non-deterministic Algorithms contd … :
• Deterministic Algorithms : Algorithms in which the
result of any operation is uniquely defined are termed as
Deterministic Algorithms. All algorithms studied so far
are deterministic algorithms. Such algorithms agree with
the way programs are executed on a digital computer i.e.
a deterministic machine.
• Non-deterministic Algorithms : If we remove the
restriction on the outcome of every operation, then
outcomes are not uniquely defined but they are limited to
specified sets of possibilities. There is a termination
condition in such algorithms. Such algorithms are called
as non-deterministic algorithms. Examples
41. Unit I : Analysis of Algorithms
Standard Complexity Classes :
Decision Problem & Optimization Problem / Algorithm :
• Decision Problem : Any problem for which answer is
either 0 or 1 is called a decision problem and the
corresponding algorithm is referred as a decision
algorithm. For example : To search a given number
• Optimization Problem : Any problem that involves the
identification of an optimal (either min. or max.) value of
a given cost function is known as an optimization problem
and the corresponding algorithm is referred to as an
optimization algorithm. For example, Knapsack problem,
Minimum cost spanning tree.
42. Unit I : Analysis of Algorithms
Standard Complexity Classes :
P vs NP class problems :
• P class : The class of decision problems that can be
solved in polynomial time using deterministic algorithms
is called the P class or Polynomial problems.
• NP class : The class of decision problems that can be
solved in polynomial time using non-deterministic
algorithms is called the NP class or Non-deterministic
Polynomial problems.
• Any P class problem can be solved using an NP class
algorithm. Therefore P is contained in the NP class
• Whether NP is contained in P is unknown. Examples
43. Unit I : Analysis of Algorithms
Standard Complexity Classes :
NP-Complete problems :
• A decision problem D is said to be NP-Complete if
1. It belongs to NP class
2. Every problem in NP class is polynomially
reducible to D
• If any one such problem can be solved using a
polynomial-time algorithm, the complete class of problems
can be solved using polynomial-time algorithms
• Examples : Traveling Salesperson Problem : optimal tour,
Printed circuit board problem,
Bin packing problem,
0-1 knapsack problem,
Vertex (node) cover problem
44. Unit I : Analysis of Algorithms
Standard Complexity Classes :
NP-Hard problems :
• A problem L is said to be an NP-Hard problem if and only
if satisfiability ∝ L (i.e. satisfiability reduces to L)
• NP hard problems are basically the optimization versions
of the problems in NP complete class
• NP hard problems are not mere yes / no problems. They
are problems wherein we need to find the optimal
solution
• A problem L is NP complete if and only if L is NP hard
and L Є NP
45. Unit I : Analysis of Algorithms
Standard Complexity Classes :
NP-Hard problems contd … :
• Commonly believed relationship among P, NP,
NP- complete and NP-hard
Diagram
• An example of NP-hard problem not in NP-complete :
Halting Problem : To determine for an arbitrary
deterministic algorithm A and an input I whether
algorithm A with input I ever terminates or enters an
infinite loop. This problem is undecidable. No algorithm
exists to solve this problem
46. Unit I : Analysis of Algorithms
Review of algorithmic strategies:
Divide and Conquer strategy : In this approach, the
problem is broken into several smaller sub-problems
which are similar to the original problem. These sub-
problems are solved separately and the solutions are
combined to generate the solution to the original
problem. Thus it consists of Divide, Conquer and
Combine.
Examples : Binary Search, Quick sort, Merge sort,
Multiplication of large integers
47. Unit I : Analysis of Algorithms
Divide and Conquer strategy :
Control Abstraction for Divide and Conquer :
Algorithm DandC (P)
{ if small (P) then return S(P);
else
{ divide P into smaller instances
P1, P2, … Pk, where k ≥1;
apply DandC to each of these subproblems;
return Combine (DandC(P1), DandC(P2), …
DandC(Pk));
}
}
48. Unit I : Analysis of Algorithms
Divide and Conquer strategy :
Time Complexity :
The time complexity of many divide and conquer
algorithms is given by recurrences of the form :
| T (1) for n = 1
T(n) = |
| aT(n/b) + f(n) for n > 1
where a, b and T(1) are known constants and n = bk
(Problem P is divided into ‘a’ sub-problems of size
‘n/b’ each)
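For instance, with a = b = 2 and f(n) = n (as in merge sort), repeated expansion of this recurrence gives:

```latex
T(n) = 2T(n/2) + n = 4T(n/4) + 2n = \cdots = 2^k T(1) + kn
     = \Theta(n \log n), \qquad n = 2^k .
```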
49. Unit I : Analysis of Algorithms
Divide and Conquer strategy :
Analysis Summary of Algorithms:
Algorithm | Case | Instance | Recurrence Equation | Time Complexity | Space Complexity
Binary Search | Best | Middle element | T(n) = 1 | O(1) | Rec: O(logn), Iter: O(1)
Binary Search | Worst | Last element | T(n) = T(n/2) + 1 | O(logn) | Rec: O(logn), Iter: O(1)
Binary Search | Average | All elements | As = (1 + 1/n)Au - 1 | O(logn) | Rec: O(logn), Iter: O(1)
Quick Sort | Best | Pivot at middle | T(n) = 2T(n/2) + n | O(nlogn) | Rec: O(logn), Iter: O(1)
Quick Sort | Worst | Already sorted | T(n) = T(n-1) + n | O(n2) | Rec: O(n), Iter: O(1)
Quick Sort | Average | All permutations | T(n) ≤ dn + (2/n) Σ T(k), 0 ≤ k ≤ n | O(nlogn) | Rec: O(logn), Iter: O(1)
Merge Sort | Best / Worst / Average | Any input | T(n) = 2T(n/2) + n | O(nlogn) | Rec: O(logn), Iter: O(1)
50. Unit I : Analysis of Algorithms
Divide and Conquer strategy :
Multiplication of Large integers :
Classic method to multiply n-figure Large integers takes θ(n2).
For example,
0981 x 1234 = 3924 + 10(2943) + 102(1962) + 103(0981) = 1210554.
i.e. it performs 4 x 4 = 16 multiplications and 7 additions.
D & C method :
Now let us divide each number in two parts of size n/2
let 09 = w, 81 = x, 12 = y and 34 = z.
wy= 09x12 = 108, wz=09x34 = 306, xy = 81x12 = 972, xz = 81x34= 2754
0981 = 102w + x and 1234 = 102y + z
0981 x 1234 = (102w + x ) (102y + z)
= 104wy +102 (wz + xy) + xz
= 104(108) +102 (306+ 972) + 2754
= 1210554 … 4 multiplications & 3 additions
Let p = wy, q = xz and r = (w+x) (y+z) = wy + (wz + xy) + xz then
(wz + xy) = r – p - q
51. Unit I : Analysis of Algorithms
Divide and Conquer strategy :
Multiplication of Large integers :
D & C method :
p = wy = 108, q = xz = 2754, and r = (w+x) (y+z) = 90 x 46 = 4140
0981 x 1234 = (102w + x ) (102y + z)
= 104wy +102 (wz + xy) + xz
= 104p +102 (r – p - q) + q
= 104(108) +102 (4140 – 108 - 2754) + 2754
= 1210554… 3 multiplications & 4 additions using D&C
… 16 multiplications & 4 additions using Classic Method
Now if we apply D&C recursively then,
t(n) = 3(t(n/2)) + g(n) = θ(nlog3). Proof
Table showing runtimes to multiply two large integers using classic & D&C.
Size(n) Classic D&C
600 40 ms 30 ms | as n increases gains will be more!
6000 40 sec 15 sec |
If two numbers are of size ‘m’ and ‘n’ and n > m then runtime for,
Classic method will be θ(mn) and D&C will be θ(nmlog3/2).Proof
52. Unit I : Analysis of Algorithms
Divide and Conquer strategy :
Multiplication of large numbers and Exponentiation
axb (size n and m digits respectively) & to find an (size
of a is m digits) :
Algorithm | Using Classic Multiplication | Using D&C Multiplication
Multiplication of large numbers (n > m) | θ(nm) | θ(nmlog(3/2))
exposeq | θ(m2n2) | θ(mlog3n2)
expoDC | θ(m2n2) | θ(mlog3nlog3)
53. Unit I : Analysis of Algorithms
Review of algorithmic strategies:
Greedy strategy : The greedy method is perhaps the most
straightforward design technique for solving problems in which an
optimum (min. or max.) solution is desired from the given input.
Characteristics :
• There are ‘n’ inputs
• Objective is to find out a subset that satisfies some constraints
• Any subset that satisfies these constraints is called a feasible
solution
• We need to find out a feasible solution that either minimizes or
maximizes a given objective function
• There is usually an obvious way to determine a feasible solution
but not necessarily an optimal solution
Examples : Minimum spanning tree (Prim and Kruskal), Job scheduling
with deadlines, Knapsack problem, Shortest path problem.
54. Unit I : Analysis of Algorithms
Greedy Strategy :
Analysis Summary of Algorithms:
Problem | Algorithm | Time Complexity | Space Complexity
Knapsack | Knapsack | O(nlogn) | O(n)
Job sequencing with deadlines | Sequenc1 | O(n2) | O(n)
Job sequencing with deadlines | Sequenc2 (fast) | O(nlogn) | O(n)
Optimal Merge Pattern | Huffman’s | O(n2) using array, O(nlogn) using heap | Rec: O(logn), Iter: O(1)
Minimum Spanning Tree | Prim’s | O(n2) | O(n)
Minimum Spanning Tree | Kruskal’s | O(|E|log|E|) | O(n)
Shortest Path | Dijkstra’s | O(n2) | O(n)
55. Unit I : Analysis of Algorithms
Review of algorithmic strategies:
Dynamic programming strategy : While solving problems, in
which optimum (min. or max.) solution is desired from the given
input, Greedy method sometimes may not give an optimal solution,
then dynamic programming approach is useful.
Characteristics :
• The D & C strategy is based on Divide, Conquer and Combine. It may
lead to several overlapping sub-instances, and if we solve these sub-
instances independently then it leads to an inefficient algorithm.
• In DP approach same thing is not computed twice, usually by
storing the results of sub-instance in a table and referring to them in
subsequent computations. Thus overlapping is avoided.
• D & C approach is Top-down approach whereas DP is Bottom-up
technique i.e. solution progresses from smallest sub-instance to the
sub-instance of increasing size i.e. Optimal Sub-structure.
Examples : 0-1-Knapsack, TSP, OBST, Job scheduling, Matrix chain multiplication
56. Unit I : Analysis of Algorithms
0/1 Knapsack Problem : Problem Definition
Problem Statement : We are given ‘n’ objects and a knapsack or a
bag. Object ‘i’ has a weight wi, 1<=i<=n, and the knapsack has a
capacity ‘m’ (maximum weight the knapsack can hold). If object ‘i’
is placed into the knapsack (xi = 1, where xi Є {0, 1}), then a profit
of pixi is earned. The objective is to obtain a filling of the knapsack
that maximizes the total profit earned. Since the knapsack capacity is
‘m’, the total weight of chosen objects should not exceed ‘m’. Profits
and weights are positive numbers. This is a Subset Selection problem.
Mathematical Model of 0/1 Knapsack Problem :
maximize Σ pixi …….. (1)
1<= i<= n
subject to Σ wixi <= m …….. (2)
1<= i<= n
xi Є {0, 1} , 1<=i<=n ..….…(3)
pi >= 0, wi>=0 .……..(4)
57. Unit I : Analysis of Algorithms
0/1 Knapsack Problem … contd. :
Consider the following instance of the 0/1 knapsack problem:
n = 6, {w1, w2, w3, w4, w5, w6} = {1, 2, 4, 9, 10, 20}
m = 20, {p1, p2, p3, p4, p5, p6} = {4, 20, 8, 36, 70, 80}
(For some problems the Greedy method may not give an optimal solution)

Objects Data (i, wi, pi, pi/wi) | Greedy by Profit | Greedy by Weight | Greedy by pi/wi | DP Optimal Solution
1, 1, 4, 4 | 0 | 1 | 1 | 1
2, 2, 20, 10 | 0 | 1 | 1 | 0
3, 4, 8, 2 | 0 | 1 | 1 | 0
4, 9, 36, 4 | 0 | 1 | 0 | 1
5, 10, 70, 7 | 0 | 0 | 1 | 1
6, 20, 80, 4 | 1 | 0 | 0 | 0
Total Weight | 20 | 16 | 17 | 20
Total Profit | 80 | 68 | 102 | 110
58. Unit I : Analysis of Algorithms
Dynamic Programming:
Analysis Summary of Algorithms:
Problem | Algorithm | Time Complexity | Space Complexity
Binomial Coefficients | Pascal’s Triangle | O(nk) | O(k)
Making Change (Amount = N, Deno = n) | Makechange | O(nN) | O(nN)
0-1-knapsack | 0-1-knapsack | O(2n) - Recursion, O(nm) - Table | O(n) - Recursion, O(nm) - Table
Traveling Salesperson | tsp-dp | O(n22n) or O(g(n)n!) | O(n2n)
Optimal Binary Search Tree | OBST | O(n3) or O(n2) | O(n2)
Multistage Graph | Fgraph & Bgraph | θ(|V| + |E|) | θ(n + k)
59. Unit I : Analysis of Algorithms
Review of algorithmic strategies:
Backtracking strategy :
• In the search for fundamental principles of algorithm
design, backtracking represents one of the most general-
purpose techniques.
• Many problems which deal with searching for a set of
solutions, or which ask for an optimal solution satisfying
some constraints, can be solved using backtracking
formulations.
• Backtracking is used when number of choices grow
exponentially. It constructs solution one component at a
time and backtracks if criterion is not satisfied or
successfully terminates when the criterion is satisfied
60. Unit I : Analysis of Algorithms
Review of algorithmic strategies:
Backtracking strategy contd :
Characteristics:
1. Generally a search is for set of solutions or a solution
which asks for an optimal (min. or max.) solution
satisfying some constraints.
2. The desired solution is expressible as n-tuple (x1, x2, … ,
xn) where xi are chosen from some finite set Si.
3. Often the problem to be solved calls for finding one vector
that minimizes or maximizes or satisfies a criterion
function P(x1, x2, … , xn). Sometimes it is all vectors
satisfying P
Examples : n-queen problem, Graph coloring, 0-1 Knapsack
problem, Traveling Salesperson Problem.
61. Unit I : Analysis of Algorithms
Review of algorithmic strategies:
Branch and Bound strategy : The term branch and bound
refers to all state space tree search methods in which all
children of the E-node are generated before any other live
node can become the E-node. BFS-like search (also referred
to as FIFO search) and D-search-like search (also referred to
as LIFO search) are examples of Branch and Bound. LCBB is
also a B&B technique, which uses upper bound and lower
bound functions to limit the number of nodes generated.
Examples : n-queen problem, 0-1 Knapsack problem,
Traveling Salesperson Problem.
62. Unit I : Analysis of Algorithms
Backtracking and Branch & Bound:
Analysis Summary of Algorithms:
Problem | Algorithm | Time Complexity | Space Complexity
N-queens | N-queens & place | O(p(n)·2n) or O(g(n)·n!) - worst | O(p(n)·2n) or O(g(n)·n!)
m-colorability | mcoloring & nextvalue | O(nmn) | O(n2)
Hamiltonian Cycles | hamiltonian & nextvalue | O(nmn+1) | O(n2)
0-1-knapsack (using backtracking) | Bknap & Bound | O(2n) - worst | O(n)
0-1-knapsack (using Branch & Bound) | LCBB, Lbound, UBound | O(2n) - worst | O(n)
Traveling Salesperson (using Branch & Bound) | LCBB using Heap | O(n!) - worst | O(n2)