Analysis of Algorithms
UNIT I INTRODUCTION
Notion of an Algorithm
What is an algorithm?
An algorithm is a sequence of unambiguous instructions for
solving a problem, i.e., for obtaining a required output for any
legitimate input in a finite amount of time.
Methods for solving the same problem
• Euclid’s Algorithm for computing greatest
common divisor GCD(m,n)
• Euclid’s algorithm is based on repeated
application of equality
• gcd(m,n) = gcd(n, m mod n)
• until the second number becomes 0, which
makes the problem trivial.
Example: gcd(60,24) = gcd(24,12) = gcd(12,0) =
12
Two descriptions of Euclid’s algorithm
1.Structured description
Step 1 If n = 0, return m and stop; otherwise go to Step 2
Step 2 Divide m by n and assign the value of the remainder to r
Step 3 Assign the value of n to m and the value of r to n. Go to
Step 1.
2.Pseudocode
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
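The pseudocode above translates directly into a runnable function. A minimal sketch in Python (the function name is ours):

```python
def gcd(m, n):
    """Euclid's algorithm: repeatedly apply gcd(m, n) = gcd(n, m mod n)
    until the second number becomes 0."""
    while n != 0:
        m, n = n, m % n   # r <- m mod n; m <- n; n <- r
    return m

# The chain from the example: gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12
assert gcd(60, 24) == 12
```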
Fundamentals of Algorithmic Problem Solving
• The following sequence of steps is discussed for designing and analyzing an algorithm:
• Understanding the Problem
• Decision making on Capabilities of the
Computational Device
• Choosing between Exact and
Approximate Problem Solving
• Algorithm Design Techniques
• Designing an Algorithm and
Data Structures
• Methods of Specifying an Algorithm
• Proving an Algorithm’s Correctness
• Analyzing an Algorithm
• Coding an Algorithm
Fundamentals of Algorithmic Problem Solving (Contd..)
Understanding the Problem
• Understand the problem completely before designing the algorithm.
• Read the problem’s description carefully and
ask questions if you have any doubts about the
problem.
• If you fail to do this, your algorithm may work
correctly for a majority of inputs but crash on
some “boundary” value.
Fundamentals of Algorithmic Problem Solving (Contd..)
Decision making
Once you completely understand a problem, you need to decide on
- the Capabilities of the Computational Devices
- Choosing between Exact and Approximate Problem
Solving
- Algorithm Design Techniques
- Designing an Algorithm and Data Structures
• Instructions are executed one after another, one operation at a time. Algorithms designed to be executed on such machines are called sequential algorithms; the underlying model is the random-access machine (RAM).
• Instructions are executed in parallel, with concurrent operations. Algorithms designed to be executed on such machines are called parallel algorithms.
Fundamentals of Algorithmic Problem Solving (Cont..)
Choosing between Exact and Approximate Problem Solving
• The next principal decision is to choose between solving the problem exactly
(exact algorithm) or solving the problem approximately (approximation
algorithm).
Exact algorithm: solves the problem exactly.
Approximation algorithm: used when exact solving is impossible or impractical.
E.g., extracting square roots, solving nonlinear equations, and evaluating definite integrals cannot be solved exactly for most instances; hard combinatorial problems such as Vertex Cover and the Traveling Salesman Problem are also solved approximately.
Algorithm Design Techniques
An algorithm design technique (or “strategy” or “paradigm”) is a general
approach to solving problems algorithmically that is applicable to a variety of
problems from different areas of computing.
E.g.: brute force, divide and conquer, the greedy technique, dynamic programming, backtracking.
Fundamentals of Algorithmic Problem Solving (Cont..)
Proving an Algorithm’s Correctness/verification
Analyzing an Algorithm
two kinds of algorithm efficiency to analyze:
• time efficiency: indicating how fast the algorithm runs
• space efficiency: indicating how much extra memory it uses
Another desirable characteristic of an algorithm is its range of inputs.
If these factors are not satisfactory, the algorithm is redesigned.
Fundamentals of Algorithmic Problem Solving (Cont..)
Coding an Algorithm/implementation
The algorithm is implemented in a suitable programming language.
Optimality: the program code should be written optimally; this reduces the burden on the compiler.
Important problem types
Computing problems can be classified as
• Sorting
• Searching
• Numerical
• Geometric
• Combinatorial
• Graph
• String processing
Sorting
• Rearrange the items of a given list in increasing or decreasing order.
• E.g., sort a list of numbers, characters from an alphabet, character strings, etc.
Why do we want sorting?
• It makes many questions about the list easier to answer.
• Useful in searching (e.g., telephone books, dictionaries).
Properties of sorting
• In-place property: does not require extra memory while sorting the list.
• Not-in-place property: requires extra memory while sorting the list.
Searching
• Find a desired element in the list.
• Searching is done with respect to a search key.
How to choose the best searching algorithm?
• Some algorithms work faster but require more memory.
• Some algorithms work better if the list is almost sorted.
• Thus the efficiency of an algorithm varies from situation to situation.
Numerical Problems
• Mathematical equations, systems of equations, computing definite integrals, evaluating functions, etc.
• These problems are typically solved by approximation algorithms.
Geometric Problems
These deal with geometric objects such as points, lines, and polygons.
• Used in computer graphics, robotics, etc. E.g., closest pair, convex hull.
Combinatorial problems
• Problems such as permutations and combinations.
• They have a large number of possible solutions.
• The task is to find the optimal solution among the possible solutions.
• As the problem size grows, the complexity becomes high.
• E.g., the graph coloring problem, the traveling salesman problem.
Graph problems
A graph is a collection of vertices and edges.
E.g., shortest paths, graph traversals, the traveling salesman problem.
String processing problems
• A string is a collection of characters.
• The algorithms are called pattern matching algorithms.
• A particular word is searched for within a text.
Fundamentals of the Analysis of Algorithm Efficiency/
Analysis Framework
The performance of an algorithm is measured by computing two factors:
• Space complexity
• Time complexity
The analysis framework also covers:
• Measuring input size
• Measuring running time
• Order of growth
Space complexity
Space complexity of an algorithm means the amount of space
(memory) taken by an algorithm. By computing space complexity,
we can analyze whether an algorithm requires more or less space.
• Memory space S(P) needed by a program P, consists of two
components:
– A fixed part: needed for instruction space (byte code), simple variable space, constant space, etc. → c
– A variable part: dependent on a particular instance of input and output data → Sp(instance)
S(P) = c + Sp(instance)
Algorithm add (a, b, c)
{
return a+b+c;
}
For every instance, 3 computer words are required to store the variables a, b, and c (assuming each occupies one word).
Therefore S(P) = 3.
Space complexity
Algorithm Sum(a[], n)
{
    s := 0.0;
    for i = 1 to n do
        s := s + a[i];
    return s;
}
Every instance needs to store array a[] & n.
– Space needed to store n = 1 word.
– Space needed to store a[ ] = n floating point
words (or at least n words)
– Space needed to store i and s = 2 words
• Hence S(P) = (n + 3).(i.e 1+n+2=n+3)
Time complexity
• Time complexity of an algorithm means the amount of time taken by the algorithm. By computing time complexity, we can analyze whether an algorithm is fast or slow.
• It is difficult to measure physically clocked time, since execution time depends on many factors, such as:
• System load
• Number of other programs running
• Instruction set
• Speed of the hardware
• The time complexity is therefore given in terms of a frequency count.
• A frequency count denotes the number of times a statement is executed.
Time complexity
Matrix addition
for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        c[i][j] = a[i][j] + b[i][j];
    }
}
• The frequency count (FC) can be computed as follows:
for (i = 0; i < n; i++)
• i = 0 executes once: FC = 1
• i < n executes n + 1 times
• i++ executes n times
for (j = 0; j < n; j++)
• j = 0 executes n × 1 = n times
• j < n executes n × (n + 1) = n² + n times
• j++ executes n × n = n² times
• c[i][j] = a[i][j] + b[i][j] executes n × n = n² times
Total: 3n² + 4n + 2
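The count can be checked mechanically. The sketch below (a Python simulation we wrote, not part of the slides) executes the two loops step by step and tallies every counted statement; the tally matches 3n² + 4n + 2.

```python
def matrix_add_frequency_count(n):
    """Simulate the nested loops of matrix addition, incrementing fc once
    for each counted statement execution (initialization, test, increment,
    and the addition itself)."""
    fc = 0
    i = 0; fc += 1                 # i = 0 executes once
    while True:
        fc += 1                    # test i < n (n + 1 times in total)
        if not (i < n):
            break
        j = 0; fc += 1             # j = 0 (once per outer iteration)
        while True:
            fc += 1                # test j < n (n + 1 times per row)
            if not (j < n):
                break
            fc += 1                # c[i][j] = a[i][j] + b[i][j]
            j += 1; fc += 1        # j++
        i += 1; fc += 1            # i++
    return fc

assert matrix_add_frequency_count(4) == 3 * 4**2 + 4 * 4 + 2
```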
Measuring an input size
If the input size is larger, the algorithm usually runs longer. Hence we can express the efficiency of an algorithm as a function of its input size.
E.g., to multiply two 2 × 2 matrices we should know the order of the matrices; only then can we enter the elements of the matrices.
Measuring running time
• Time complexity is measured in terms of the frequency count.
• We first identify the most important operation of an algorithm, its basic operation (the most time-consuming operation, usually located in the innermost loop), and count how often it runs:
T(n) ≈ c_op · C(n)
• c_op → execution time of an algorithm's basic operation on a particular computer
• C(n) → number of times this operation needs to be executed for this algorithm
• T(n) → running time of the algorithm
Measuring running time
Eg. sortingcomparing the elements and then placing them in
the correct locations is a basic operation.
Eg.
PROBLEM                     INPUT SIZE                    BASIC OPERATION
Searching for an element    List of n elements            Comparison of the key with
in a list of n elements                                   every element of the list
Matrix multiplication       Two matrices of order n × n   Multiplication of the
                                                          elements in the matrices
GCD of two numbers          Two numbers                   Division
Order of growth
• Measuring the performance of an algorithm in relation with
the input size n is called order of growth.
• For example, functions such as log₂ n, n, n log₂ n, n², 2ⁿ, and n! grow at very different rates as the input size n increases.
Worst-Case, Best-Case, and Average-Case Efficiencies
Worst Case Time Complexity:
• when there are no matching elements or the first matching element
happens to be the last one on the list.
• Algorithm makes the largest number of key comparisons among all possible inputs of size n:
Cworst(n) = n.
• Algorithm runs the longest among all possible inputs of that size.
• Time complexity for linear search in worst case=O(n)
Best case time complexity
• Algorithm runs the fastest among all possible inputs of that size.
• For example, the best-case inputs for sequential search are lists of
size n with their first element equal to a search key
Cbest(n) = 1
Time complexity for linear search in best case=O(1)
Average Case time complexity
• This type of complexity gives information about the behavior of an algorithm on a random input.
• Let p → probability of a successful search
• Let n → total number of elements in the list
• If the first match occurs at the ith location, the probability of that event is p/n for every i.
• The probability of an unsuccessful search is (1 − p); in that case all n elements are compared, contributing n · (1 − p).
• Cavg(n) = (1 + 2 + . . . + n) · p/n + n · (1 − p) = p(n + 1)/2 + n(1 − p)
• For a successful search (p = 1), Cavg(n) = (n + 1)/2; for an unsuccessful search (p = 0), Cavg(n) = n.
Time complexity for linear search in average case=O(n)
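The successful-search case (p = 1) can be checked empirically. In the sketch below (ours, not from the slides), averaging the comparison counts over all n possible key positions gives exactly (n + 1)/2.

```python
def comparisons_to_find(lst, key):
    """Number of key comparisons sequential search makes to find `key`."""
    for c, item in enumerate(lst, start=1):
        if item == key:
            return c
    return len(lst)               # unsuccessful search: n comparisons

def average_successful(n):
    """Average comparisons when the key is equally likely to be at any
    of the n positions (p = 1)."""
    lst = list(range(n))
    return sum(comparisons_to_find(lst, k) for k in lst) / n

# C_avg(n) = (n + 1) / 2 for a successful search
assert average_successful(9) == (9 + 1) / 2
```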
Amortized analysis
Amortized analysis is a method for analyzing an algorithm's resource usage (especially time or memory) in which the cost of a sequence of operations is averaged over all the operations performed.
Fundamentals of the Analysis of Algorithm Efficiency (Cont..)
The properties of order of growth:
• If F1(n) is of order g1(n) and F2(n) is of order g2(n), then
F1(n) + F2(n) is of order max{g1(n), g2(n)}.
• For polynomials of degree m, the maximum degree is considered from the polynomial: a polynomial of degree m is O(nᵐ).
Asymptotic Notations and its Properties
• the efficiency analysis framework concentrates the order of growth of
an algorithm’s basic operation count as the principal indicator of the
algorithm’s efficiency.
• To compare and rank such orders of growth, computer scientists use
three notations: O (big oh), Ω (big omega), and Θ (big theta).
• t (n) and g(n) can be any nonnegative functions defined on the set of
natural numbers.
• t (n) will be an algorithm’s running time (usually indicated by its basic
operation count C(n)), and
• g(n) will be some simple function to compare the count with.
• To choose the best algorithm, we need to check efficiency of each
algorithm. The efficiency can be measured by computing time
complexity of each algorithm.
• Asymptotic notation is a shorthand way to represent the time
complexity. Using asymptotic notations we can give time complexity
as “fastest possible”, “slowest possible” or “average time”.
Asymptotic Notations and its Properties (Cont..)
The big Oh notation is denoted by O .It is a method of representing the upper
bound of algorithms running time. Using big oh notation we can give longest
amount of time taken by the algorithm to compute.
DEFINITION
A function t (n) is said to be in O(g(n)), denoted t (n) ∈ O(g(n)),if t (n) is
bounded above by some constant multiple of g(n) for all large n, i.e., if there
exist some positive constant c and some nonnegative integer n0 such that
t (n) ≤ cg(n) for all n ≥ n0.
Asymptotic Notations and its Properties (Cont..)
• Omega notation is denoted by ‘ Ω’. This notation is used to represent the
lower bound of algorithms running time. Using omega notation we can denote
shortest amount of time taken by algorithm.
DEFINITION
• A function t (n) is said to be in Ω(g(n)), denoted t (n) ∈ Ω(g(n)), if t (n) is
bounded below by some positive constant multiple of g(n) for all large n, i.e.,
if there exist some positive constant c and some nonnegative integer n0 such
that
t (n) ≥ cg(n) for all n ≥ n0.
Asymptotic Notations and its Properties (Cont..)
• The theta notation is denoted by Θ. By this method the running time is between an upper bound and a lower bound.
DEFINITION
• A function t (n) is said to be in Θ(g(n)), denoted t (n) ∈ Θ(g(n)), if t (n) is
bounded both above and below by some positive constant multiples of g(n)
for all large n, i.e., if there exist some positive constants c1 and c2 and some
nonnegative integer n0 such that c2g(n) ≤ t (n) ≤ c1g(n) for all n ≥ n0
Asymptotic Notations and its Properties (Cont..)
• For polynomials of degree m, the maximum degree is considered. E.g., for 3n³ + 2n² + 10 the time complexity is O(n³).
• Any constant value leads to O(1) time complexity; that is, if f(n) = c then f(n) ∈ O(1).
Basic Efficiency Classes
The time efficiencies of a large number of algorithms fall into only a few classes: 1 (constant), log n (logarithmic), n (linear), n log n (linearithmic), n² (quadratic), n³ (cubic), 2ⁿ (exponential), and n! (factorial).
Mathematical Analysis for Recursive
and Non-Recursive Algorithms
Mathematical Analysis for Recursive and
Non-Recursive Algorithms (Cont..)
Methods for Solving Recurrence Equations:
• Substitution method
– Forward substitution
– Backward substitution
• Masters method
• Tree method
In the substitution method, a guess for the solution is made. There are two types of substitution:
• Forward substitution
• Backward substitution
• Forward substitution method: This method makes use of an initial condition
in the initial term and value for the next term is generated. This process is
continued until some formula is guessed.
• Backward substitution method: In this method values are substituted
recursively in order to derive some formula.
Forward substitution
T(n)=T(n-1)+n with initial condition T(0)=0
Let T(n)=T(n-1)+n
If n=1 then T(1)=T(1-1)+1=T(0)+1=0+1=1
If n=2 then T(2)=T(2-1)+2=T(1)+2=1+2=3
If n=3 then T(3)=T(3-1)+3=T(2)+3=3+3=6
We can derive the formula
T(n) = n(n + 1)/2 = n²/2 + n/2 = O(n²)
backward substitution
T(n)=T(n-1)+n ----1
with initial condition T(0)=0
Let n=n-1,T(n-1)=T(n-1-1)+(n-1)
T(n-1)=T(n-2)+(n-1) ----2
Sub eqn 2 in eqn 1
T(n)=T(n-2)+(n-1)+n -----3
Let n=n-2,T(n-2)=(T(n-2-1)+(n-2)
T(n-2)=T(n-3)+(n-2) -----4
Sub eqn 4 in eqn 3
T(n)=T(n-3)+(n-2)+(n-1)+n
.
backward substitution
T(n)=T(n-k)+(n-k+1)+(n-k+2)+….+n
If k=n then
T(n)=T(0)+1+2+…..n T(0)=0
T(n)=0+1+2+…..n
We can derive the formula
T(n) = n(n + 1)/2
     = n²/2 + n/2
T(n) = O(n²)
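The closed form can be sanity-checked by computing the recurrence directly. A small Python check (ours):

```python
def T(n):
    """T(n) = T(n - 1) + n with T(0) = 0, evaluated iteratively."""
    total = 0
    for k in range(1, n + 1):     # unrolls the recurrence forward
        total += k
    return total

# matches the closed form n(n + 1)/2 derived by substitution
for n in range(20):
    assert T(n) == n * (n + 1) // 2
```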
Mathematical Analysis for Recursive and Non-Recursive Algorithms
(Cont..)
• Decide the input size based on parameter n.
• Identify algorithms basic operation.
• Check how many times the basic operation is executed.
• Set up the recurrence relation with some initial condition and expressing the
basic operation.
• Solve the recurrence or at least determine the order of growth.
• General Plan for Analysis
• Decide on parameter n indicating input size
• Identify algorithm’s basic operation
• Determine worst, average, and best cases for input of size n
• Set up a sum for the number of times the basic operation is executed
• Simplify the sum using standard formulas and rules
Mathematical Analysis for Recursive and Non-Recursive
Algorithms (Cont..)
EXAMPLE 1 Consider the problem of finding the value of the largest
element
in a list of n numbers. For simplicity, we assume that the list is implemented as
an array. The following is pseudocode of a standard algorithm for solving the
problem.
ALGORITHM MaxElement(A[0..n − 1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n − 1] of real numbers
//Output: The value of the largest element in A
maxval ←A[0]
for i ←1 to n − 1 do
if A[i]>maxval
maxval ← A[i] // if A[i] is larger than the current maxval, set the
// new maxval to this larger value
return maxval
Mathematical Analysis for Recursive and Non-Recursive Algorithms (Cont..)
• measure of an input’s size here is the number of elements in the array,
i.e., n.
• There are two operations in the loop’s body:
• the comparison A[i]> maxval and
• the assignment maxval←A[i].
• Which of these two operations should we consider basic?
• Since the comparison is executed on each repetition of the loop
• and the assignment is not
• we should consider the comparison to be the algorithm’s basic
operation.
• Note: the comparison is made the same number of times for every input of size n, so there is no need for worst-, average-, and best-case analysis here.
• Let C(n) be the number of times this comparison is executed.
• The algorithm makes one comparison on each execution of the loop, which is repeated for each value of the loop's variable i within the bounds 1 and n − 1, inclusive. Hence
C(n) = Σ (i = 1 to n − 1) 1 = n − 1 ∈ Θ(n).
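Instrumenting the algorithm confirms the count. A Python sketch (ours) that returns both the maximum and the number of comparisons:

```python
def max_element(a):
    """Largest value in `a` plus the number of key comparisons made."""
    maxval = a[0]
    comparisons = 0
    for i in range(1, len(a)):
        comparisons += 1          # the basic operation: A[i] > maxval
        if a[i] > maxval:
            maxval = a[i]
    return maxval, comparisons

# C(n) = n - 1 comparisons, whatever the contents of the array
assert max_element([3, 9, 2, 7, 5]) == (9, 4)
```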
Mathematical Analysis for Recursive and Non-Recursive
Algorithms (Cont..)
Mathematical Analysis for Recursive and Non-Recursive
Algorithms (Cont..)
EXAMPLE 2 Consider the element uniqueness problem: check
whether all the elements in a given array of n elements are distinct.
This problem can be solved by the following straightforward
algorithm.
ALGORITHM UniqueElements(A[0..n − 1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n − 1]
//Output: Returns “true” if all the elements in A are distinct
// and “false” otherwise
for i ←0 to n − 2 do
for j ←i + 1 to n − 1 do
if A[i]= A[j ] return false //if any two elements in the array are equal, return
// false, indicating that the array elements are not distinct
return true
Mathematical Analysis for Recursive and Non-Recursive Algorithms
(Cont..)
• input’s size : n, the number of elements in the array.
• basic operation: innermost loop contains a single operation
(the comparison of two elements)
• the worst case input is an array for which the number of element
comparisons Cworst(n) is the largest among all arrays of size n.
• There are two kinds of worst-case inputs:
1) arrays with no equal elements
2) arrays in which the last two elements are the only pair of equal elements.
• For such inputs, one comparison is made for each repetition of the innermost loop, i.e., for each value of the loop variable j between its limits i + 1 and n − 1; this is repeated for each value of the outer loop, i.e., for each value of the loop variable i between its limits 0 and n − 2. Hence
Cworst(n) = Σ (i = 0 to n − 2) Σ (j = i + 1 to n − 1) 1 = n(n − 1)/2 ∈ Θ(n²).
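A worst-case input (all elements distinct) exercises every comparison. A Python sketch with a comparison counter (ours):

```python
def unique_elements(a):
    """Return (all_distinct, number_of_element_comparisons)."""
    n = len(a)
    count = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            count += 1            # the basic operation: A[i] == A[j]
            if a[i] == a[j]:
                return False, count
    return True, count

# worst case: all distinct, so all n(n - 1)/2 comparisons are made
ok, c = unique_elements([1, 2, 3, 4, 5])
assert ok and c == 5 * 4 // 2
```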
Mathematical Analysis for Recursive and Non-Recursive Algorithms
(Cont..)
Mathematical Analysis for Recursive and Non-Recursive Algorithms
(Cont..)
ALGORITHM MatrixMultiplication(A[0..n − 1, 0..n − 1],
B[0..n − 1, 0..n − 1])
//Multiplies two square matrices of order n by the definition-
based algorithm
//Input: Two n × n matrices A and B
//Output: Matrix C = AB
for i ←0 to n − 1 do
for j ←0 to n − 1 do
C[i, j ]←0.0
for k←0 to n − 1 do
C[i, j ]←C[i, j ]+ A[i, k] ∗ B[k, j]
return C
Mathematical Analysis for Recursive and Non-Recursive Algorithms
(Cont..)
• Input's size: the matrix order n.
• Basic operation: multiplication.
• Set up a sum for the total number of multiplications M(n) executed by the algorithm.
• one multiplication executed on each repetition of the algorithm’s
innermost loop, which is governed by the variable k ranging from
the lower bound 0 to the upper bound n − 1.
• Therefore, the number of multiplications made for every pair of specific values of the variables i and j is
Σ (k = 0 to n − 1) 1 = n.
Mathematical Analysis for Recursive and Non-Recursive Algorithms
(Cont..)
• The total number of multiplications M(n) is expressed by the triple sum
M(n) = Σ (i = 0 to n − 1) Σ (j = 0 to n − 1) Σ (k = 0 to n − 1) 1 = n³.
Time complexity = Θ(n³)
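The n³ count can be observed directly by instrumenting the definition-based algorithm; a Python sketch (ours):

```python
def matrix_multiply(a, b):
    """Definition-based n x n matrix product, counting multiplications."""
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):    # innermost loop: one multiplication each
                c[i][j] += a[i][k] * b[k][j]
                mults += 1
    return c, mults

identity = [[1, 0], [0, 1]]
m = [[1, 2], [3, 4]]
product, mults = matrix_multiply(m, identity)
assert product == [[1.0, 2.0], [3.0, 4.0]]
assert mults == 2 ** 3            # M(n) = n^3
```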
Mathematical Analysis for Non-Recursive Algorithms
(Cont..)
EXAMPLE 4 The following algorithm finds the number of
binary digits in the
binary representation of a positive decimal integer.
ALGORITHM Binary(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n’s binary representation
count ←1
while n > 1 do
count ←count + 1
n←floor(n/2)
return count
Mathematical Analysis for Non-Recursive Algorithms (Cont..)
While loop execution, e.g., n = 5:
count = 1
1) while (n > 1): 5 > 1 is true
   count = count + 1 → count = 2
   n = floor(n/2) = floor(5/2) = 2
2) while (n > 1): 2 > 1 is true
   count = count + 1 → count = 3
   n = floor(n/2) = floor(2/2) = 1
3) while (n > 1): 1 > 1 is false, so the loop exits
Analysis
• Input size: n
• Basic operation: the comparison n > 1 (and the subsequent halving)
• The value of n is about halved on each repetition of the loop, hence the efficiency is logarithmic: about log₂ n.
floor(log₂ n) = floor(log₂ 5) = floor(2.32...) = 2
• The number of times the while loop executes is floor(log₂ n) + 1 (the + 1 accounts for the final comparison on which the condition becomes false).
• Time complexity = Θ(log₂ n)
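The trace generalizes; the sketch below (ours) returns both the digit count and the number of loop-body executions, which equals floor(log₂ n):

```python
import math

def binary_digit_count(n):
    """Number of binary digits of a positive integer n, plus the number
    of times the while-loop body executes."""
    count = 1
    loops = 0
    while n > 1:
        loops += 1
        count += 1
        n = n // 2                # floor(n / 2)
    return count, loops

count, loops = binary_digit_count(5)      # 5 is 101 in binary
assert count == 3
assert loops == math.floor(math.log2(5))  # body runs floor(log2 n) times
```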
Mathematical Analysis of Recursive Algorithms
General Plan for Analyzing the Time Efficiency of
Recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s
size.
2. Identify the algorithm’s basic operation. Check whether the
number of times the basic operation is executed can vary
on different inputs of the same size; if it can, the worst-case,
average-case, and best-case efficiencies must be investigated
separately.
3. Set up a recurrence relation, with an appropriate initial
condition, for the number of times the basic operation is
executed.
4. Solve the recurrence or, at least, ascertain the order of growth
of its solution.
Mathematical Analysis of Recursive Algorithms
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) ∗ n
input size: n
• The basic operation of the algorithm is multiplication, whose number of
executions we denote M(n).
• Since the function F(n) is computed according to the formula
• F(n) = F(n − 1) * n for n > 0
• the number of multiplications M(n) needed to compute it must satisfy the
equality
M(n) = M(n − 1) + 1 for n > 0
• M(n − 1)-- to compute F(n−1) +1-------to multiply F(n−1) by n
Mathematical Analysis of Recursive Algorithms
• M(n − 1) multiplications are spent to compute F(n − 1).
E.g., for n = 5, computing F(4) = 1 · 2 · 3 · 4 takes n − 1 = 4 multiplications.
• One more multiplication is needed to multiply the result by n: F(5) = F(4) · 5.
• When n = 0, the algorithm returns 1 and performs no multiplications; therefore the initial condition is M(0) = 0 (the calls stop when n = 0, and no multiplications are made).
• the recurrence relation and initial condition for the algorithm’s number of multiplications
M(n) = M(n − 1) + 1 for n > 0, M(0) = 0.
backward substitutions:
• M(n) = M(n − 1) + 1 (substitute M(n − 1) = M(n − 2) + 1)
= [M(n − 2) + 1]+ 1= M(n − 2) + 2 (substitute M(n − 2) = M(n− 3) + 1)
= [M(n − 3) + 1]+ 2 = M(n − 3) + 3.
General formula for the pattern:
M(n) = M(n − i) + i.
• substitute i = n in the pattern’s formula to get the ultimate result of our backward
substitutions:
• M(n) = M(n − 1) + 1= . . . = M(n − i) + i = . . . = M(n − n) + n = n.
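The relation M(n) = n can be confirmed by instrumenting the recursive algorithm; a Python sketch (ours) that returns the value and the multiplication count together:

```python
def factorial(n):
    """Compute n! recursively, also returning the number of
    multiplications M(n)."""
    if n == 0:
        return 1, 0               # base case: M(0) = 0
    value, mults = factorial(n - 1)
    return value * n, mults + 1   # M(n) = M(n - 1) + 1

value, mults = factorial(5)
assert value == 120
assert mults == 5                 # M(n) = n
```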
Tower of Hanoi puzzle
• n disks of different sizes that can slide onto any of three pegs.
• Initially, all the disks are on the first peg in order of size, the largest on
the bottom and the smallest on top.
• The goal is to move all the disks to the third peg, using the second one
as an auxiliary, if necessary.
• We can move only one disk at a time, and it is forbidden to place a
larger disk on top of a smaller one.
• To move n>1 disks from peg 1 to peg 3 (with peg 2 as auxiliary),
• we first move recursively n − 1 disks from peg 1 to peg 2 (with peg 3
as auxiliary),
• Then move the largest disk directly from peg 1 to peg 3, and, finally,
move recursively n − 1 disks from peg 2 to peg 3 (using peg 1 as
auxiliary).
• Of course, if n = 1, we simply move the single disk directly from the
source peg to the destination peg.
Tower of Hanoi puzzle
• input’s size : number of disks n
• moving one disk as the algorithm’s basic operation.
• the number of moves M(n) depends on n only, and we get the following
recurrence equation for it:
M(n) = M(n − 1) + 1 + M(n − 1) = 2M(n − 1) + 1 for n > 1,
with initial condition M(1) = 1; solving this recurrence gives M(n) = 2ⁿ − 1.
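The recursive procedure described above can be sketched in Python (ours); collecting the moves lets us check the count against the recurrence, whose solution is M(n) = 2ⁿ − 1.

```python
def hanoi(n, source, target, auxiliary, moves):
    """Move n disks from `source` to `target`, appending each single-disk
    move as a (from_peg, to_peg) pair."""
    if n == 1:
        moves.append((source, target))
        return
    hanoi(n - 1, source, auxiliary, target, moves)  # n-1 disks to auxiliary
    moves.append((source, target))                  # largest disk directly
    hanoi(n - 1, auxiliary, target, source, moves)  # n-1 disks onto it

moves = []
hanoi(3, 1, 3, 2, moves)
assert len(moves) == 2 ** 3 - 1   # M(n) = 2M(n-1) + 1 solves to 2^n - 1
```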
Divide-and-Conquer General Method
The most-well known algorithm design strategy:
1. Divide instance of problem into two or more smaller instances
2. Solve smaller instances recursively
3. Obtain solution to original (larger) instance by combining these solutions
4. A control abstraction for divide and conquer is given below; a control abstraction describes the flow of control of the procedure while leaving its primitive operations unspecified.
Divide-and-Conquer General Method
• The Computing time of above procedure of divide and conquer is given by
the recurrence and relation.
[Diagram: a problem of size n is divided into two subproblems of size n/2; the solutions to subproblem 1 and subproblem 2 are combined into a solution to the original problem.]
Divide-and-Conquer General Method
• For example ,if we want to compute sum of n numbers then by
divide and conquer we can solve the problem as
Divide-and-Conquer General Method
• If we want to divide a problem of size n into size of n/b taking
f(n) time to divide and combine, then we can set up recurrence
relation for obtaining time for size n is -
General Divide-and-Conquer Recurrence
T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^d), d ≥ 0
Master Theorem:
If a < b^d, T(n) ∈ Θ(n^d)
If a = b^d, T(n) ∈ Θ(n^d log n)
If a > b^d, T(n) ∈ Θ(n^(log_b a))
Note: the same results hold with O instead of Θ.
Examples:
T(n) = 4T(n/2) + n  ⇒ T(n) ∈ Θ(n²)        (a = 4 > b^d = 2¹)
T(n) = 4T(n/2) + n² ⇒ T(n) ∈ Θ(n² log n)  (a = 4 = b^d = 2²)
T(n) = 4T(n/2) + n³ ⇒ T(n) ∈ Θ(n³)        (a = 4 < b^d = 2³)
Efficiency Analysis of divide and conquer
Binary Search
• A binary search looks for an item in a list using a divide-and
conquer strategy.
• Very efficient algorithm for searching in sorted array:
• Binary search algorithm assumes that the items in the array being searched are
sorted
– The algorithm begins at the middle of the array in a binary search
– If the item for which we are searching is less than the item in the middle,
we know that the item won’t be in the second half of the array
– Once again we examine the “middle” element
– The process continues with each comparison cutting in half the portion of
the array where the item might be
• Binary Search: middle element.
• An element which is to be searched from the list of elements sorted in array
A[0…n-1] is called KEY element.
mid = floor((left + right) / 2)
Binary Search
Example
• Consider a list of 7 elements sorted in an array A (indices 0 to 6).
The KEY element (i.e., the element to be searched) is 60.
• To obtain the middle element we apply the formula
mid = floor((left + right) / 2) = floor((0 + 6) / 2) = 3
Binary Search
Yes, i.e., the number is present in the list.
Thus we can search for the desired number in the list of elements.
Algorithm for Binary Search
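The algorithm itself is not reproduced in this text; a minimal iterative sketch in Python (the array contents and names are our assumptions) behaves as described above:

```python
def binary_search(a, key):
    """Iterative binary search on a sorted list; index of key, or -1."""
    left, right = 0, len(a) - 1
    while left <= right:
        mid = (left + right) // 2        # middle of the current portion
        if a[mid] == key:
            return mid
        elif key < a[mid]:
            right = mid - 1              # discard the second half
        else:
            left = mid + 1               # discard the first half
    return -1

# a 7-element sorted array, as in the example; KEY = 60
a = [10, 20, 30, 40, 50, 60, 70]
assert binary_search(a, 60) == 5
```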
Analysis
• The Basic operation in binary search is comparison of search key (i.e.
KEY) with the array elements.
• To analyze the efficiency of binary search we must count the number of
times the search keys compared with the array elements.
• The comparison is also called a three way comparison because algorithm
makes the comparison to determine whether KEY is smaller, equal to or
greater than A[m].
• In this algorithm, after one comparison the list of n elements is reduced to a sublist of about n/2 elements.
Analysis
• In the worst case the algorithm keeps halving the list without finding the element until a single element remains.
• One comparison is made and, based on this comparison, the array is each time divided into a sublist of about n/2 elements.
• Hence the time complexity is given by
Cworst(n) = Cworst(n/2) + 1 for n > 1, Cworst(1) = 1 ...(1)
• But since we consider the rounded-down value when the array gets divided, the above equation can be written as
Cworst(n) = Cworst(floor(n/2)) + 1 for n > 1, Cworst(1) = 1
• The above recurrence relation can be solved further. Assume n = 2^k; then equation (1) gives Cworst(2^k) = k + 1 = log₂ n + 1.
Analysis-Backward substitution Method
Average Case
• To obtain the average-case efficiency of binary search, consider some sample inputs of size n.
• Again, A[2] = 33 and 33 < 44, so we divide the list; in the right sublist A[4] = 44, which equals the key. Thus a total of 3 comparisons are made to search for 44.
• To summarize the above operations:
Observing the above given table we can write
• For instance if n=2 then,,,Similarly if n=8, then……Thus we can write
that,
• Advantage of binary search
– Binary search is an optimal searching algorithm with which we can search for the desired element very efficiently.
• Disadvantage of binary search
– This algorithm requires the list to be sorted. Then only this method is applicable.
• Applications of binary search
– An efficient searching method, used to search for a desired record in a database.
– Used for solving nonlinear equations in one unknown (the bisection method).
Implementation of recursive Binary Search algorithm
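The slide's code is not reproduced here; a minimal recursive sketch in Python (names are ours) follows the same divide-and-conquer scheme:

```python
def binary_search_recursive(a, key, left=0, right=None):
    """Recursive binary search on sorted list `a`; index of key, or -1."""
    if right is None:
        right = len(a) - 1
    if left > right:
        return -1                        # empty interval: key not present
    mid = (left + right) // 2
    if a[mid] == key:
        return mid
    if key < a[mid]:
        return binary_search_recursive(a, key, left, mid - 1)
    return binary_search_recursive(a, key, mid + 1, right)

a = [11, 22, 33, 44, 55]
assert binary_search_recursive(a, 44) == 3
assert binary_search_recursive(a, 99) == -1
```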
Merge sort
• The Merge sort is a sorting algorithm that uses the divide and
conquer strategy. In this method division is dynamically
carried out.
– Divide: divide the n-element sequence into two sub
problems of n/2 elements each.
– Conquer: sort the two subsequences recursively using
merge sort. If the length of a sequence is 1, do nothing
since it is already in order.
– Combine: merge the two sorted subsequences to produce
the sorted answer.
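The divide/conquer/combine steps above can be sketched in Python (function names are ours):

```python
def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    return result + left[i:] + right[j:]  # append the leftover tail

def merge_sort(a):
    """Divide the list in half, sort each half recursively, then merge."""
    if len(a) <= 1:
        return a                          # length 1: already in order
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

# the example from the text
assert merge_sort([70, 20, 30, 40, 10, 50, 60]) == [10, 20, 30, 40, 50, 60, 70]
```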
Merge sort
• Consider the elements as 70,20,30,40,10,50,60
Merge sort
• Sort the following elements using merge sort
10,5,7,6,1,4,8,3,2,9.
Merge sort
Logic Explanation
First Make two sub lists as
Logic Explanation
• Consider that at some instance we have got two sub list
20,30,40,70 and 10,50,60 then
Logic Explanation
Analysis
• In the merge sort algorithm two recursive calls are made, each focusing on n/2 elements of the list. After the two recursive calls, the two sorted sublists are merged, which takes time proportional to n. Hence we can write the recurrence relation as
T(n) = 2T(n/2) + cn for n > 1, T(1) = 0.
• We can obtain the time complexity of merge sort using two methods
– Master Theorem
– Substitution method
• Master Theorem
Master Theorem
Substitution Method
Selection Sort-brute force technique
•We start selection sort by scanning the entire given list to find its smallest element
and exchange it with the first element, putting the smallest element in its final
position in the sorted list.
•Then we scan the list, starting with the second element, to find the smallest among
the last n − 1 elements and exchange it with the second element, putting the second
smallest element in its final position.
• Generally, on the ith pass through the list, which we number from 0 to n − 2, the algorithm searches for the smallest item among the last n − i elements and swaps it with A[i]. After n − 1 passes, the list is sorted.
Eg. An array A[5]={64,25,12,22,11}
0 , 1 , 2, 3, 4
PASS 1:
i=0 //range of i={0,1,2,3} //for i=0 to n-2
min←0 //min←i
j←1 //range of j={1,2,3,4} //for j=i+1 to n-1
25<64, min←1 //if A[j]<A[min] then min←j
j←2 , 12<25, min←2
j←3 , 22<12 fails
j←4 , 11<12, min←4 //end of j
Swap 64 and 11 //swap A[i] and A[min]
// Now list is 11,25,12,22,64
PASS 2:
i=1, min←1
j←2 , 12<25, min←2
j←3 , 22<12 fails
j←4 , 64<12 fails //end of j
Swap 25 and 12
// Now list is 11,12,25,22,64
PASS 3:
i=2, min←2
j←3 , 22<25, min←3
j←4 , 64<22 fails //end of j
Swap 25 and 22
// Now list is 11,12,22,25,64
PASS 4:
i=3, min←3
j←4 , 64<25 fails //end of j
min = i, so A[3] stays in place (swap with itself)
// List remains 11,12,22,25,64
Output array A[5]={11,12,22,25,64}
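The pass-by-pass trace above corresponds to the following Python sketch (illustrative, not from the slides):

```python
def selection_sort(a):
    """In-place selection sort: on pass i, put the ith smallest element at A[i]."""
    n = len(a)
    for i in range(n - 1):             # passes i = 0 .. n-2
        m = i                          # index of the current minimum
        for j in range(i + 1, n):      # scan the unsorted tail
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]        # swap A[i] and A[min]
    return a
```

Running it on the slide's array `[64, 25, 12, 22, 11]` yields `[11, 12, 22, 25, 64]`.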
Analysis
The input size is given by the number of elements n.
The basic operation is the key comparison A[j] < A[min].
The number of times it is executed depends only on the array size and is given
by the sum C(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = n(n − 1)/2.
Thus, selection sort is a Θ(n²) algorithm on all inputs.
Bubble sort
• brute-force application to the sorting problem is to compare
adjacent elements of the list and exchange them if they are out of
order.
• By doing it repeatedly, we end up “bubbling up” the largest element
to the last position on the list.
• The next pass bubbles up the second largest element, and so on,
until after n − 1 passes the list is sorted.
Eg. an array A[5]={5,1,4,2,8}
0,1,2,3,4
PASS 1:
i=0 //range of i={0,1,2,3} //for i=0 to n-2
j=0 //range of j={0,1,2,3} //for j=0 to n-2-i
1<5 //A[j+1]<A[j]
Swap 1 and 5, //swap A[j] and A[j+1]
//List becomes 1,5,4,2,8
j=1 , 4<5 ,swap 4 and 5 // list becomes 1,4,5,2,8
j=2 , 2<5 ,swap 2 and 5 // list becomes 1,4,2,5,8
j=3, 8<5 // list remains 1,4,2,5,8
Now the list is 1,4,2,5,8
PASS 2:
i=1
j=0 //range of j={0,1,2} //for j=0 to n-2-i
4<1 fails //A[j+1]<A[j] is false, no swap
//List remains 1,4,2,5,8
j=1 , 2<4 , swap 2 and 4 // list becomes 1,2,4,5,8
j=2 , 5<4 fails // list remains 1,2,4,5,8
PASS 3:
Now the list is 1,2,4,5,8
i=2
j=0 //range of j={0,1} //for j=0 to n-2-i
2<1 fails // list remains 1,2,4,5,8
j=1 , 4<2 fails // list remains 1,2,4,5,8
PASS 4:
Now the list is 1,2,4,5,8
i=3
j=0 //range of j={0} //for j=0 to n-2-i
2<1 fails // list remains 1,2,4,5,8
Output array A[5]={1,2,4,5,8}
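The adjacent-swap passes above can be sketched in Python (illustrative, not from the slides):

```python
def bubble_sort(a):
    """In-place bubble sort: pass i bubbles the ith largest element to the end."""
    n = len(a)
    for i in range(n - 1):                 # passes i = 0 .. n-2
        for j in range(n - 1 - i):         # j = 0 .. n-2-i
            if a[j + 1] < a[j]:            # adjacent pair out of order
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```

On the slide's array `[5, 1, 4, 2, 8]` it produces `[1, 2, 4, 5, 8]`.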
Analysis:
• The number of key comparisons for bubble sort is the same for all arrays of size n;
• it is obtained by a sum almost identical to that for selection sort:
C(n) = Σ_{i=0}^{n−2} Σ_{j=0}^{n−2−i} 1 = n(n − 1)/2 ∈ Θ(n²).
sequential search
• This is a straightforward algorithm that searches for a given
item (some search key K) in a list of n elements by checking
successive elements of the list until either a match with the
search key is found or the list is exhausted.
• In the worst case, when there are no matching elements or the first
matching element happens to be the last one on the list, the
algorithm makes the largest number of key comparisons among all
possible inputs of size n. Cworst(n) = n
• The best-case efficiency of an algorithm is its efficiency for the
best-case input of size n, which is an input (or inputs) of size n for
which the algorithm runs the fastest among all possible inputs of
that size.
Cbest(n) = 1
Average case:
• For example, if p = 1 (the search must be successful), the
average number of key comparisons made by sequential search
is (n + 1)/2; that is, the algorithm will inspect, on average,
about half of the list’s elements.
• If p = 0 (the search must be unsuccessful), the average number
of key comparisons will be n because the algorithm will
inspect all n elements on all such inputs.
• In general, Cavg(n) = p(n + 1)/2 + n(1 − p), where p is the probability of a successful search.
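The algorithm described above is a one-liner loop; a Python sketch (illustrative, not from the slides):

```python
def sequential_search(a, key):
    """Check successive elements until key is found (return its index) or the
    list is exhausted (return -1)."""
    for i, x in enumerate(a):
        if x == key:          # match with the search key
            return i
    return -1                 # list exhausted: worst case, n comparisons
```

The best case returns after 1 comparison (key at the front); the worst case scans all n elements.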
Closest-Pair Problem And Convex-Hull Problems
Step 1 : Divide the points given into two subsets S1 and S2 by a vertical line x = c
so that half the points lie to the left or on the line and half the points lie to
the right or on the line.
Step 2 : Find recursively the closest pairs for the left and right subsets; let d1 and d2 be the corresponding smallest distances.
Step 3 : Set d = min{d1, d2}.We can limit our attention to the points in the
symmetric vertical strip of width 2d as possible closest pair. Let C1 and C2
be the subsets of points in the left subset S1 and of the right subset S2,
respectively, that lie in this vertical strip. The points in C1 and C2 are stored
in increasing order of their y coordinates, which is maintained by
merging during the execution of the next step.
Step 4 : For every point P(x,y) in C1, we inspect points in C2 that may be closer to
P than d. There can be no more than 6 such points (because d ≤ d2)!
Quick Sort
• Select a pivot (partitioning element) – here, the first element
• Rearrange the list so that all the elements in the first s
positions are smaller than or equal to the pivot and all the
elements in the remaining n-s positions are larger than or
equal to the pivot (see next slide for an algorithm)
• Exchange the pivot with the last element in the first (i.e., ≤) subarray;
the pivot is now in its final position
• Sort the two subarrays recursively.
A[i] ≤ p    p    A[i] ≥ p
Quick Sort
• Algorithm Quick(A[0……n-1],low,high)
• Pblm: sorting of array A[0…n-1]
• i/p: An array A[0….n-1]in which unsorted elmts are
given.low indicates leftmost elmt in the list and high
indicates the rightmost elmt in the list.
• o/p: sorted in ascending order.
• If(low<high)then
• //split the array into two sub arrays
• m partition(A[low…high])//m is mid of the array.
• Quick(A[low…m-1])
• Quick(A[mid+1….high])
Quick Sort
• Algorithm partition(A[low…high])
• Pblm: partition the subarray using the first element as
pivot element.
• i/p:subarray A with low as lower most index of the
array and high as higher most index of the array.
• o/p: partitioning of array A is done, the pivot
occupies its proper position, and the split position
(the pivot’s final index) is returned.
pivot ← A[low]
i ← low
j ← high+1
While(i<=j)do
{
While(A[i]<=pivot) do
i ← i+1
While(A[j]>=pivot) do
j ← j-1
If(i<=j)then
Swap(A[i],A[j]) //swap A[i] and A[j]
}
Swap(A[low],A[j]) //when i crosses j, swap A[low] and
A[j]
Return j //split position (pivot’s final index)
• o/p array is 1 2 3 4 5 7 8 9
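The partition sketch above omits bounds checks; a cleaned-up, runnable Hoare-style version in Python (an illustrative sketch, not the slides' exact pseudocode):

```python
def partition(a, low, high):
    """Partition a[low..high] around pivot a[low]; return the pivot's final index."""
    pivot = a[low]
    i, j = low, high + 1
    while True:
        i += 1
        while i <= high and a[i] < pivot:   # left-to-right scan: find element >= pivot
            i += 1
        j -= 1
        while a[j] > pivot:                 # right-to-left scan: find element <= pivot
            j -= 1
        if i >= j:                          # scans have crossed
            break
        a[i], a[j] = a[j], a[i]
    a[low], a[j] = a[j], a[low]             # place pivot in its final position
    return j

def quick_sort(a, low=0, high=None):
    """Sort a[low..high] in place by recursive partitioning."""
    if high is None:
        high = len(a) - 1
    if low < high:
        m = partition(a, low, high)
        quick_sort(a, low, m - 1)
        quick_sort(a, m + 1, high)
    return a
```

Calling `quick_sort` on a list sorts it in ascending order in place.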
Best Case-split in the middle
Recurrence relation
C(n) = C(n/2) + C(n/2) + n for n > 1
C(n/2): time required to sort each half.
n: time required for partitioning the subarray.
With initial condition: C(1)=0
Analysis
• In the worst case, one of the two subarrays will
be empty arrays, i.e., for inputs for which the
problem is already solved
• if A[0..n − 1] is a strictly increasing array and we
use A[0] as the pivot, the left-to-right scan will
stop on A[1] while the right-to-left scan will go all
the way to reach A[0], indicating the split at
position 0.
• So, after making n + 1 comparisons to get to this
partition and exchanging the pivot A[0] with
itself, the algorithm will be left with the strictly
increasing array A[1..n − 1] to sort.
• The total number of key comparisons made
will be equal to
• Cworst(n) = (n+1) + n + (n-1) + (n-2) + … + 2 + 1 ∈ Θ(n²)
• Let Cavg(n) be the average number of key
comparisons made by quicksort on a randomly
ordered array of size n.
• A partition can happen in any position s (0 ≤ s ≤
n−1) after n+1 comparisons are made to achieve
the partition.
• After the partition, the left and right subarrays
will have s and n − 1 − s elements, respectively.
• Assuming that the partition split can happen in
each position s with the same probability 1/n,
we get the following recurrence relation:
Cavg(n) = (1/n) Σ_{s=0}^{n−1} [(n + 1) + Cavg(s) + Cavg(n − 1 − s)] for n > 1,
with Cavg(0) = 0 and Cavg(1) = 0; its solution is Cavg(n) ≈ 2n ln n ≈ 1.39 n log2 n.
Dynamic programming
• Invented by a U.S. mathematician, Richard Bellman in the 1950s.
• Dynamic programming is a technique for solving problems with
overlapping subproblems.
• It relates a given problem’s solution to solutions of its smaller subproblems.
• Rather than solving overlapping subproblems again and again,
• dynamic programming suggests solving each of the smaller
subproblems only once and recording the results in a table from
which a solution to the original problem can then be obtained.
• Eg: The Fibonacci numbers are the elements of the sequence 0, 1,
1, 2, 3, 5, 8, 13, 21, 34, . . . ,
• which can be defined by the simple recurrence
• F(n) = F(n − 1) + F(n − 2) for n > 1
• Two initial conditions are F(0)=0, F(1)=1
• dynamic programming is a method for solving a complex problem by
breaking it down into a collection of simpler subproblems,
• solving each of those subproblems just once, and storing their solutions -
using a memory-based data structure.
• The next time the same subproblem occurs, instead of recomputing its
solution, one simply looks up the previously computed solution, thereby
saving computation time at the expense of a modest expenditure in
storage space.
• The technique of storing solutions to subproblems instead of
recomputing them is called “memoization”.
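The Fibonacci example above can be memoized in Python (an illustrative sketch, not from the slides):

```python
def fib(n, memo=None):
    """F(n) via dynamic programming: each subproblem is solved once and its
    result is recorded in a table (dictionary) for later lookups."""
    if memo is None:
        memo = {0: 0, 1: 1}           # initial conditions F(0)=0, F(1)=1
    if n not in memo:                 # not yet computed: apply the recurrence
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]                    # otherwise just look it up
```

Without the table the plain recursion recomputes the same F(k) exponentially many times; with it, each F(k) is computed exactly once.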
DIFFERENCE BETWEEN DC & DP
DIVIDE AND CONQUER (DC) | DYNAMIC PROGRAMMING (DP)
1. DC: Divide the given problem into many subproblems, find the individual
solutions, and combine them to get the solution for the main problem.
DP: Many decisions and sequences are examined, and all the overlapping
subinstances are considered.
2. DC: Follows a top-down technique (recursive).
DP: Follows a bottom-up technique (iterative).
3. DC: Splits the input only at specific points (midpoint).
DP: Splits the input at every possible point rather than at one particular
point; after trying all split points it determines which split point is optimal.
4. DC: Each subproblem is independent.
DP: Subproblems are dependent, since they overlap.
DIFFERENCE BETWEEN DC & DP
DIVIDE AND CONQUER (DC) | DYNAMIC PROGRAMMING (DP)
DC: Less efficient because of rework on solutions; duplicate sub-solutions
may be obtained (the duplications are simply recomputed).
DP: More efficient than divide & conquer; duplication in solutions is
avoided totally.
principle of optimality
• an optimal solution to any instance of an optimization problem is
composed of optimal solutions to its subinstances.
• an optimization problem is the problem of finding the best solution
from all feasible solutions. Two major types of optimization problems:
minimization or maximization
Floyd’ algorithm
• Floyd’s algorithm is for finding the shortest path
between every pair of vertices of a graph. The
algorithm works for both directed and undirected
graphs.
• The graph may contain negative edges but it should not
contain negative cycles.
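The all-pairs computation can be sketched in Python (an illustrative version; the 4-vertex weight matrix in the test is hypothetical):

```python
INF = float('inf')

def floyd(w):
    """Floyd's algorithm: all-pairs shortest paths.
    w is an n x n weight matrix with INF for missing edges; returns the
    distance matrix."""
    n = len(w)
    d = [row[:] for row in w]                  # work on a copy of W
    for k in range(n):                         # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

The triple loop gives Θ(n³) time; negative edges are fine as long as there is no negative cycle.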
Knapsack Problem and Memory Functions
• dynamic programming algorithm for the knapsack
problem:
• given n items of known weights w1, . . . , wn and
values v1, . . . , vn and a knapsack of capacity W, find
the most valuable subset of the items that fit into the
knapsack.
• Let us consider an instance defined by the first i items,
1≤ i ≤ n, with weights w1, . . . , wi, values v1, . . . , vi ,
and knapsack capacity j, 1 ≤ j ≤ W.
• Let F(i, j) be the value of an optimal solution to this
instance, i.e., the value of the most valuable subset of
the first i items that fit into the knapsack of capacity j.
We can divide all the subsets of the first i items that fit the knapsack of
capacity j into two categories: those that do not include the ith item
and those that do.
Note the following:
1. Among the subsets that do not include the ith item, the value of
an optimal subset is, by definition, F(i − 1, j).
2. Among the subsets that do include the ith item (hence, j − wi ≥ 0),
an optimal subset is made up of this item and an optimal subset of
the first i − 1 items that fits into the knapsack of capacity j − wi. The
value of such an optimal subset is vi + F(i − 1, j − wi).
Thus F(i, j) = max{F(i − 1, j), vi + F(i − 1, j − wi)} if j − wi ≥ 0, and
F(i, j) = F(i − 1, j) if j − wi < 0, with F(0, j) = 0 and F(i, 0) = 0.
  j→  0  1  2  3  4  5
i=0   0  0  0  0  0  0
i=1   0
i=2   0
i=3   0
i=4   0
(row 0 and column 0 hold the initial conditions F(0, j) = F(i, 0) = 0)
Knapsack Problem and Memory
Functions
Steps to select actual knapsack item:
Let i = n and k = W then
while (i>0 and k>0)
{
if (table[i,k] ≠ table[i-1,k]) then
mark ith item as in knapsack
i = i-1 and k=k-wi // selection of ith item.
else
i = i-1 // do not select of ith item.
}
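The table-filling recurrence and the item-selection back-trace above can be sketched in Python (an illustrative version; the instance in the test is hypothetical):

```python
def knapsack(weights, values, W):
    """Bottom-up knapsack table F[i][j]; returns (max value, chosen item numbers)."""
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]      # F(0,j) = F(i,0) = 0
    for i in range(1, n + 1):
        for j in range(W + 1):
            F[i][j] = F[i - 1][j]                  # category 1: skip item i
            if weights[i - 1] <= j:                # category 2: item i fits
                F[i][j] = max(F[i][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
    # back-trace: item i is in the solution iff the table value changed
    items, i, k = [], n, W
    while i > 0 and k > 0:
        if F[i][k] != F[i - 1][k]:
            items.append(i)                        # 1-based item number
            k -= weights[i - 1]
        i -= 1
    return F[n][W], sorted(items)
```

For example, four items with weights (2, 1, 3, 2), values (12, 10, 20, 15) and W = 5 give an optimal value of 37 using items {1, 2, 4}.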
Knapsack Problem and Memory
Functions (Cont.…)
Example:
Greedy Technique
• The greedy method uses the subset paradigm or ordering
paradigm to obtain the solution.
• In subset paradigm, at each stage the decision is made based on
whether a particular input is in optimal solution or not.
• Prim’s Algorithm
Minimum Spanning Tree:
• A minimum spanning tree of a weighted connected graph G is
a spanning tree of G with the minimum (smallest) total weight.
Spanning tree:
• A spanning tree of a graph G is a subgraph that is a tree: it
contains all the vertices of G and no circuit.
Prim’s Algorithm (Cont.….)
• ALGORITHM Prim (G)
//Prim’s algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree
of G
VT← {v0} //the set of tree vertices can be initialized with any vertex
ET←∅
for i ←1 to |V| − 1 do
find a minimum-weight edge e∗ = (v∗, u∗) among all the edges (v, u)
such that v is in VT and u is in V − VT
VT←VT∪ {u∗}
ET←ET∪ {e∗}
return ET
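The "minimum-weight crossing edge" step above is conveniently done with a priority queue; a Python sketch (illustrative, with a hypothetical 3-vertex graph in the test):

```python
import heapq

def prim(graph, start):
    """graph: {vertex: {neighbor: weight}}; returns the MST as (v, u, w) edges."""
    visited = {start}                               # VT: tree vertices
    heap = [(w, start, u) for u, w in graph[start].items()]
    heapq.heapify(heap)
    mst = []                                        # ET: tree edges
    while heap and len(visited) < len(graph):
        w, v, u = heapq.heappop(heap)               # minimum-weight crossing edge
        if u in visited:
            continue                                # stale edge, both ends in VT
        visited.add(u)
        mst.append((v, u, w))
        for x, wx in graph[u].items():              # new crossing edges from u
            if x not in visited:
                heapq.heappush(heap, (wx, u, x))
    return mst
```

With a binary heap this runs in O(|E| log |V|) time.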
Kruskal’s Algorithm
Kruskal’s algorithm looks at a minimum spanning tree of a weighted
connected graph G = (V, E) as an acyclic sub graph with |V| − 1 edges
for which the sum of the edge weights is the smallest.
Kruskal’s Algorithm vs Prim’s Algorithm
 Prim’s algorithm obtains the minimum spanning
tree by always selecting vertices adjacent to the
already selected vertices.
 Kruskal’s algorithm also obtains the minimum
spanning tree, but it is not necessary to choose
edges adjacent to the already selected vertices.
Dijkstra’s Algorithm
• Dijkstra’s algorithm is used to find shortest path. This algorithm is
also known as single source shortest path algorithm. In this
algorithm, for a given vertex called source the shortest path to all
other vertices is obtained.
• In this algorithm the main focus is not to find only one single path
but to find the shortest paths from any vertex to all other remaining
vertices. This algorithm is applicable to graphs with non-negative
weights only.
Algorithm DagShortestPaths (G, s)
//Solves the single-source shortest paths problem for a dag
//Input: A weighted dag G = [V,E] and its vertex s
//Output: The length dv of a shortest path from s to v and
// the penultimate vertex pv of such a path, for every vertex v in V
topologically sort the vertices of G
Dijkstra’s Algorithm (Cont…)
for every vertex v do
dv ←∞; pv ← null
ds ← 0
for every vertex v taken in topological order do
for every vertex u adjacent to v do
if dv + w(v, u) < du
du ← dv + w(v, u); pu ← v
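The relaxation loop above can be sketched in Python (illustrative; the small dag in the test is hypothetical):

```python
def dag_shortest_paths(vertices, adj, s):
    """Single-source shortest paths in a dag.
    vertices: list in topological order; adj: {v: [(u, weight), ...]}.
    Returns (d, p): distances from s and penultimate vertices."""
    INF = float('inf')
    d = {v: INF for v in vertices}
    p = {v: None for v in vertices}
    d[s] = 0
    for v in vertices:                     # vertices in topological order
        for u, w in adj.get(v, []):
            if d[v] + w < d[u]:            # relax edge (v, u)
                d[u] = d[v] + w
                p[u] = v
    return d, p
```

Because each edge is relaxed exactly once, the running time is linear in the size of the dag.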
Graph coloring
• Given an undirected graph and a number m, determine if the
graph can be colored with at most m colors such that no two
adjacent vertices of the graph are colored with same color.
• Here coloring of a graph means assignment of colors to all
vertices or vertex coloring.
• The smallest number of colors needed to color a graph G is called
its chromatic number.
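The m-coloring decision problem described above is typically solved by backtracking; a Python sketch (illustrative, not from the slides):

```python
def m_coloring(adj, m):
    """adj: {vertex: set of neighbors}. Returns a valid coloring {vertex: color}
    using at most m colors, or None if none exists."""
    vertices = list(adj)
    color = {}

    def ok(v, c):
        # no neighbor of v may already carry color c
        return all(color.get(u) != c for u in adj[v])

    def solve(i):
        if i == len(vertices):             # all vertices colored
            return True
        v = vertices[i]
        for c in range(m):                 # try each of the m colors
            if ok(v, c):
                color[v] = c
                if solve(i + 1):
                    return True
                del color[v]               # backtrack
        return False

    return color if solve(0) else None
```

A triangle, for instance, admits no 2-coloring but does admit a 3-coloring (its chromatic number is 3).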
Limitations of Algorithm Power
• we studied dozens of algorithms for solving a variety of different
problems.
• But the power of algorithms is limited, for the following reasons:
• There are some problems that cannot be solved by any algorithm.
• Other problems can be solved algorithmically but not in polynomial
time.
• Even when a problem can be solved in polynomial time, there are
usually lower bounds on the efficiency of the algorithms solving it.
Lower bound Arguments
• Lower bound- Estimating minimum amount of work needed to solve
a given problem.
Examples:
• number of comparisons needed to find the largest element in a set of
n numbers
• number of comparisons needed to sort an array of size n
• number of comparisons necessary for searching in a sorted array
• number of multiplications needed to multiply two n-by-n matrices
• To judge the efficiency of a particular algorithm, there are two ways:
• Establish the asymptotic efficiency class of the problem itself (its lower
bound): check whether the problem lies in the linear, quadratic,
logarithmic, or exponential category of efficiency classes.
• Compare the algorithm with other algorithms that are used to solve the
same type of problems.
• Eg. sorting of elements:
• Insertion sort → efficiency n² (slower algorithm)
• Quick sort, heap sort → n log n (faster algorithms)
Contd..
• Lower bound can be
– an exact count
– an efficiency class ()
Tight lower bound: there exists an algorithm with the same
efficiency as the lower bound.
Problem                              Lower bound    Tightness
sorting (comparison-based)           Ω(n log n)     yes
searching in a sorted array          Ω(log n)       yes
element uniqueness                   Ω(n log n)     yes
n-digit integer multiplication       Ω(n)           unknown
multiplication of n-by-n matrices    Ω(n²)          unknown
Methods for Establishing Lower Bounds
• trivial lower bounds
• information-theoretic arguments (decision trees)
• adversary arguments
• problem reduction
Trivial Lower Bounds
Trivial lower bounds: based on counting the number of items that must
be processed in input and generated as output.
Examples:
• The trivial lower bound for generating all permutations of n numbers is
Ω(n!), since the size of the output is n!.
• For the product of two n × n matrices, the trivial lower bound is Ω(n²): 2n²
input elements must be processed and n² elements are produced as
an output.
• TSP: the trivial lower bound is Ω(n²),
since there are n(n−1)/2 distances as input and n+1 cities as output.
• Binary search.
Conclusions
• may or may not be useful
• be careful in deciding how many elements must be processed.
Information-theoretic arguments(decision trees)
Decision tree — a convenient model of algorithms involving comparisons in
which:
• internal nodes represent comparisons
• leaves represent outcomes (or input cases)
Decision tree for 3-element insertion sort
(Figure: the root compares a < b; its children compare b < c and a < c;
further comparisons of a < c and b < c lead to the six leaves, one per
ordering: a < b < c, a < c < b, c < a < b, b < a < c, b < c < a, c < b < a.)
Decision Trees and Sorting Algorithms
• Any comparison-based sorting algorithm can be
represented by a decision tree (for each fixed n)
• Number of leaves (outcomes) ≥ n!
• Height of a binary tree with n! leaves ≥ ⌈log2 n!⌉
• Minimum number of comparisons in the worst case
≥ ⌈log2 n!⌉ for any comparison-based sorting
algorithm, since the longest path represents the
worst case and its length is the height.
• ⌈log2 n!⌉ ≈ n log2 n (by Stirling’s approximation)
Decision Trees and Sorting Algorithms
• This lower bound is tight (mergesort or heapsort)
Ex. Prove that 5 (or 7) comparisons are necessary and sufficient
for sorting 4 keys (or 5 keys, respectively).
Adversary Arguments
It’s a game between the adversary and the unknown algorithm.
The adversary has the input and the algorithm asks questions to
the adversary about the input. The adversary tries to make the
algorithm work the hardest by adjusting the input
(consistently). It wins the “game”, proving the lower bound, if
before the claimed number of comparisons it can still exhibit two
different inputs consistent with all the answers given.
Example 1: “Guessing” a number between 1 and n using yes/no
questions (Is it larger than x?)
Adversary: Puts the number in the larger of the two subsets
generated by the last question.
Example 2: Merging two sorted lists of size n
a1 < a2 < … < an and b1 < b2 < … < bn
Adversary: Keep the ordering b1 < a1 < b2 < a2 < … < bn < an in mind
and answer comparisons consistently.
Claim: Any algorithm requires at least 2n-1 comparisons to output the
above ordering (because it has to compare each pair of adjacent
elements in the ordering)
Ex: Design an adversary to prove that finding the smallest element in
a set of n elements requires at least n-1 comparisons.
Problem Reduction
• A is a difficult unsolvable pblm is to be reduced to another solvable
pblm B with known algm.
• If pblm A is at least as hard as pblm as pblm B, then a LB for B is also
lower bound for A.
• Hence we have to find pblm B with a known LB which is further used
for finding LB of pblm A. This technique is called as Reduction.
Strassen’s Matrix Multiplication
• An algorithm was published by V. Strassen in 1969.
• The principle of the algorithm lies in the discovery that we can find the
product C of two 2 × 2 matrices A and B with just seven multiplications,
as opposed to the eight required by the brute-force algorithm.
• This is accomplished by using the following formulas:
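The formulas referred to above are Strassen's seven products M1..M7 and the four sums that rebuild C from them; a Python sketch that writes them out explicitly (an illustrative version, not code from the slides):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen's formulas)."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)      # the seven products
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    # reassemble C from the products: 18 additions/subtractions in total
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 + m3 - m2 + m6]]
```

Expanding the products confirms, e.g., C00 = m1 + m4 − m5 + m7 = a00·b00 + a01·b10, exactly the brute-force entry.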
• Thus, to multiply two 2 × 2 matrices, Strassen’s algorithm
makes seven multiplications and 18 additions/subtractions,
• whereas the brute-force algorithm requires eight multiplications
and four additions.
• These numbers should not, by themselves, lead us to multiplying 2 × 2
matrices by Strassen’s algorithm; its payoff is asymptotic, for large n.
• Let A and B be two n × n matrices where n is a power of 2. (If n
is not a power of 2, matrices can be padded with rows and
columns of zeros.)
• We can divide A,B, and their product C into four n/2 × n/2 submatrices
each as follows:
For example, C00 can be computed either as
A00 ∗ B00 +A01 ∗ B10 or as M1 + M4 − M5 + M7 where M1, M4, M5,
and M7 are found by Strassen’s formulas, with the numbers replaced by
the corresponding submatrices.
If the seven products of n/2 × n/2 matrices are computed recursively by
the same method, we have Strassen’s algorithm for matrix multiplication.
Asymptotic efficiency of this algorithm:
If M(n) is the number of multiplications made by Strassen’s algorithm
in multiplying two n × n matrices (where n is a power of 2), we get the
following recurrence relation for it:
M(n) = 7M(n/2) for n > 1, M(1) = 1,
whose solution is M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807.
• This savings in the number of multiplications
was achieved at the expense of making extra
additions.
• To multiply two matrices of order n>1, the algorithm
needs to multiply seven, matrices of order n/2 and
make 18 additions/subtractions of matrices of size n/2;
when n = 1, no additions are made since two
numbers are simply multiplied.
• Strassen’s algorithm efficiency is Θ(n^(log2 7)) ≈ Θ(n^2.807), which is a
better efficiency class than the Θ(n³) of the brute-force
method.
Travelling Salesman Problem Using Branch and Bound
• TSP can be stated as follows, consider that there are n cities and
travelling salesman has to visit each city exactly once and has to
return to the city from where he has started.
• Finding shortest hamiltonian circuit of a graph.
• Computing lower bounds using the formula:
LB = Σ_{v∈V} (sum of the costs of the two least-cost edges
adjacent to v) / 2
• Example: five cities a, b, c, d, e (distance matrix given on the slide).
LB = [(1+ 3) + (3 + 6) + (1+ 2) + (3 + 4) + (2 + 3)]/2
= 28/2=14.(this is the root of the state space tree)
• Compute the distances at level 1 → a-b, a-c, a-d, a-e.
• Compute the distances at level 2 → a-b-c, a-b-d,
a-b-e.
• Compute the distances at level 3 → a-b-c-d, a-b-c-e
and a-b-d-c, a-b-d-e.
• Thus the state space tree can be
LB = [(1+ 3) + (3 + 6) + (1+ 2) + (3 + 4) + (2 + 3)]/2
= 28/2=14.(this is the root of the state space tree)
Node 1: the tour starts with edge a-b. For each vertex v, take the two
least-cost edges adjacent to v, forcing edge a-b where it is incident:
a = a-b + a-c = 3+1 = 4 (a-b forced here)
b = a-b + b-c = 3+6 = 9 (a-b forced here)
c = a-c + c-e = 1+2 = 3
d = d-e + c-d = 3+4 = 7
e = c-e + d-e = 2+3 = 5
LB = (4+9+3+7+5)/2 = 28/2 = 14
Node 2: the tour starts with edge a-c. For each vertex v, take the two
least-cost edges adjacent to v, forcing edge a-c where it is incident:
a = a-b + a-c = 3+1 = 4 (a-c forced here)
b = a-b + b-c = 3+6 = 9
c = a-c + c-e = 1+2 = 3 (a-c forced here)
d = d-e + c-d = 3+4 = 7
e = c-e + d-e = 2+3 = 5
LB = (4+9+3+7+5)/2 = 28/2 = 14
Node 3: the tour starts with edge a-d. For each vertex v, take the two
least-cost edges adjacent to v, forcing edge a-d where it is incident:
a = a-c + a-d = 1+5 = 6 (a-d forced here)
b = a-b + b-c = 3+6 = 9
c = a-c + c-e = 1+2 = 3
d = d-e + a-d = 3+5 = 8 (a-d forced here)
e = c-e + d-e = 2+3 = 5
LB = (6+9+3+8+5)/2 = 31/2 = 15.5, rounded up to 16
Node 4: the tour starts with edge a-e. For each vertex v, take the
two least-cost edges adjacent to v, forcing edge a-e where it is
incident:
a = a-c + a-e = 1+8 = 9 (a-e forced here)
b = a-b + b-c = 3+6 = 9
c = a-c + c-e = 1+2 = 3
d = c-d + d-e = 4+3 = 7
e = c-e + a-e = 2+8 = 10 (a-e forced here)
LB = (9+9+3+7+10)/2 = 38/2 = 19
Node 5: tour prefix a-b-c, so edges (a-b) and (b-c) are included
wherever possible. For each vertex v, take the two least-cost
edges adjacent to v, forcing these edges where incident:
a = a-b + a-c = 3+1 = 4 (a-b forced here)
b = a-b + b-c = 3+6 = 9 (both forced here)
c = a-c + b-c = 1+6 = 7 (b-c forced here)
d = c-d + d-e = 4+3 = 7
e = c-e + d-e = 2+3 = 5
LB = (4+9+7+7+5)/2 = 32/2 = 16
Similarly for node 6 (a-b-d) and node 7 (a-b-e).
Node 8: complete tour a-b-c-d-(e-a), so edges (a-b), (b-c), (c-d),
(d-e), (e-a) are all included. For each vertex v, take the two
incident tour edges:
a = a-b + a-e = 3+8 = 11 (a-b and e-a forced here)
b = a-b + b-c = 3+6 = 9 (a-b and b-c forced here)
c = b-c + c-d = 6+4 = 10 (b-c and c-d forced here)
d = c-d + d-e = 4+3 = 7 (c-d and d-e forced here)
e = a-e + d-e = 8+3 = 11 (e-a and d-e forced here)
LB = (11+9+10+7+11)/2 = 48/2 = 24
• Similarly node 9: a-b-c-e-(d-a)
• Similarly node 10: a-b-d-c-(e-a)
Node 11: complete tour a-b-d-e-(c-a), so edges (a-b), (b-d), (d-e),
(e-c), (c-a) are all included. For each vertex v, take the two
incident tour edges:
a = a-b + a-c = 3+1 = 4 (a-b and c-a forced here)
b = a-b + b-d = 3+7 = 10 (a-b and b-d forced here)
c = a-c + c-e = 1+2 = 3 (c-a and e-c forced here)
d = b-d + d-e = 7+3 = 10 (b-d and d-e forced here)
e = c-e + d-e = 2+3 = 5 (e-c and d-e forced here)
LB = (4+10+3+10+5)/2 = 32/2 = 16
• At node 11 we get the optimum tour.
• Hence the optimal tour of the TSP is a-b-d-e-c-a with
cost 16.
Approximation Algorithm for NP-Hard Problems
• If an NP-hard problem could be solved in polynomial time, then all NP-
complete problems could also be solved in polynomial time.
• All NP-complete problems are NP-hard, but not all NP-hard problems are
NP-complete.
• A problem is NP-complete if it belongs to the class NP and every problem
in NP is reducible to it in polynomial time.
• The NP-class problems are the decision problems that can be solved by non-
deterministic polynomial algorithms.
• Non-deterministic: no particular rule is defined for reaching the solution.
• Approximation Algorithms for TSP
The decision version of this problem is NP-complete; the
optimization version is NP-hard.
Two approximation algorithms are commonly used for TSP:
• Nearest Neighbour algorithm.
• Twice-around-the-tree algorithm.
Nearest Neighbour algm
• Idea-choose nearest neighbour while traversing from one city to another.
Q1.using nearest neighbour algm,obtain the optimal solution for given
travelling salesman pblm and also find the accuracy ratio for the same.
Accuracy ratio r(Sa) = f(Sa)/f(S*)
where,
• Sa — the approximate solution
• f(Sa) — value of the objective function for the solution given by the
approximation algorithm
• f(S*) — value of the objective function for the exact (optimal) solution S*
• Generally r(Sa) >= 1; the closer r(Sa) is to 1, the better the approximate
solution.
Nearest Neighbour algm
• We apply the Nearest Neighbour algorithm to the given TSP instance. The
output tour is a-b-c-d-a, with f(Sa) = 2+3+2+7 = 14.
• The optimal tour is a-b-d-c-a with f(S*) = 2+4+2+4 = 12.
• Accuracy ratio r(Sa) = f(Sa)/f(S*) = 14/12 ≈ 1.16.
• Hence the tour Sa is about 16% longer than the optimal tour S*.
• Drawback: one long edge can dominate the tour. The critical distance here
is d-a = 7.
• r(Sa) = f(Sa)/f(S*) = (7 + w)/12, where w is the weight of edge d-a.
• As w increases, r(Sa) increases; as w grows without bound, r(Sa) tends to
infinity. Hence this approximation algorithm can be arbitrarily far from
the optimal solution.
Twice around the tree algorithm-Algm:
• Compute MST from the graph.
• Start at any arbitrary city and walk around the tree and record nodes
visited.
• Eliminate duplicates from the generated node list
Christofides Algorithm
• There is an approximation algorithm with a better
performance ratio for the Euclidean traveling salesman
problem. The tour yields a − b − c − e − d − a of length 37.
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girlsssuser7cb4ff
 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx959SahilShah
 
Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfme23b1001
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxPoojaBan
 
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)Dr SOUNDIRARAJ N
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
Introduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHIntroduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHC Sai Kiran
 
EduAI - E learning Platform integrated with AI
EduAI - E learning Platform integrated with AIEduAI - E learning Platform integrated with AI
EduAI - E learning Platform integrated with AIkoyaldeepu123
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptSAURABHKUMAR892774
 
Effects of rheological properties on mixing
Effects of rheological properties on mixingEffects of rheological properties on mixing
Effects of rheological properties on mixingviprabot1
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.eptoze12
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 

Recently uploaded (20)

Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
POWER SYSTEMS-1 Complete notes examples
POWER SYSTEMS-1 Complete notes  examplesPOWER SYSTEMS-1 Complete notes  examples
POWER SYSTEMS-1 Complete notes examples
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girls
 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx
 
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxExploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
 
Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdf
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptx
 
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
Introduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHIntroduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECH
 
EduAI - E learning Platform integrated with AI
EduAI - E learning Platform integrated with AIEduAI - E learning Platform integrated with AI
EduAI - E learning Platform integrated with AI
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.ppt
 
Effects of rheological properties on mixing
Effects of rheological properties on mixingEffects of rheological properties on mixing
Effects of rheological properties on mixing
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
Design and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdfDesign and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdf
 

Design Analysis of Alogorithm 1 ppt 2024.pptx

- Algorithm Design Techniques
- Designing an Algorithm and Data Structures

Sequential and parallel machines
• Instructions are executed one after another, one operation at a time. Algorithms designed to be executed on such machines are called sequential algorithms (the RAM model).
• Instructions are executed in parallel, several operations at a time. Algorithms designed to be executed on such machines are called parallel algorithms.
Fundamentals of Algorithmic Problem Solving (Cont..)
Choosing between Exact and Approximate Problem Solving
• The next principal decision is to choose between solving the problem exactly (an exact algorithm) or solving it approximately (an approximation algorithm).
• Some problems cannot be solved exactly, e.g., extracting square roots, solving nonlinear equations, and evaluating definite integrals; these call for approximation algorithms.
• Other problems, such as Vertex Cover and the Traveling Salesman Problem, are solved approximately because exact algorithms are too slow.
Algorithm Design Techniques
• An algorithm design technique (or "strategy" or "paradigm") is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing.
• E.g., brute force, divide and conquer, greedy technique, dynamic programming, backtracking.
Fundamentals of Algorithmic Problem Solving (Cont..)
Proving an Algorithm's Correctness (verification)
Analyzing an Algorithm
Two kinds of algorithm efficiency are analyzed:
• Time efficiency - how fast the algorithm runs.
• Space efficiency - how much extra memory it uses.
Another desirable characteristic of an algorithm is:
• the range of inputs it can handle.
If these factors are not satisfactory, redesign the algorithm.
Fundamentals of Algorithmic Problem Solving (Cont..)
Coding an Algorithm (implementation)
• An algorithm is implemented in a suitable programming language.
• Optimality: the program code should be written optimally; this reduces the burden on the compiler.
Important problem types
Computing problems can be classified as:
• Sorting
• Searching
• Numerical
• Geometric
• Combinatorial
• Graph
• String processing
Sorting
• Rearrange the items of a given list in increasing or decreasing order.
• E.g., sort a list of numbers, characters from an alphabet, character strings, etc.
Why do we want sorting?
• It makes the list easier to query.
• It is useful in searching (e.g., telephone books, dictionaries).
Properties of sorting
• In-place: does not require extra memory while sorting the list.
• Not in-place: requires extra memory while sorting the list.
Searching
• Find a desired element (the search key) in the list.
How do we choose the best searching algorithm?
• Some algorithms work faster but require more memory.
• Some algorithms work better if the list is almost sorted.
• Thus the efficiency of an algorithm varies with the situation.
Numerical problems
• Mathematical problems such as systems of equations, computing definite integrals, and evaluating functions.
• These problems are typically solved by approximation algorithms.
Geometric problems
• Deal with geometric objects such as points, lines, and polygons.
• Used in computer graphics, robotics, etc. E.g., closest pair, convex hull.
Combinatorial problems
• Problems involving permutations and combinations.
• There are many possible solutions; the task is to find an optimal one.
• As the problem size grows, the complexity becomes very high.
• E.g., graph coloring, the traveling salesman problem (TSP).
Graph problems
• A graph is a collection of vertices and edges.
• E.g., shortest paths, graph traversals, TSP.
String processing problems
• Operate on collections of characters.
• Pattern matching algorithms search for a particular word in a text.
Fundamentals of the Analysis of Algorithm Efficiency / Analysis Framework
Measure the performance of an algorithm by computing two factors:
• Space complexity
• Time complexity
The framework also considers:
• Measuring an input's size
• Measuring running time
• Order of growth
Space complexity
The space complexity of an algorithm is the amount of memory it requires. By computing space complexity, we can analyze whether an algorithm requires more or less space.
Memory space S(P) needed by a program P consists of two components:
• A fixed part: instruction space (byte code), simple variable space, constants space, etc. → c
• A variable part: dependent on a particular instance of input and output data → Sp(instance)
S(P) = c + Sp(instance)

Algorithm add(a, b, c)
{
    return a + b + c;
}
For every instance, 3 computer words are required to store the variables a, b, and c (assuming each occupies one word). Therefore S(P) = 3.
Space complexity
Algorithm Sum(a[], n)
{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}
Every instance needs to store the array a[] and n:
• Space needed to store n = 1 word.
• Space needed to store a[] = n floating-point words (or at least n words).
• Space needed to store i and s = 2 words.
Hence S(P) = n + 3 (i.e., 1 + n + 2 = n + 3).
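As a quick sketch, the pseudocode above translates directly to Python; the function name `array_sum` is my own, and the word counts in the comment simply restate the S(P) = n + 3 tally:

```python
def array_sum(a):
    """Python rendering of Algorithm Sum(a[], n).

    Space tally: n words for a[], 1 word for n (here len(a)),
    and 1 word each for s and i, giving S(P) = n + 3.
    """
    s = 0.0
    for i in range(len(a)):
        s += a[i]
    return s
```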
Time complexity
• The time complexity of an algorithm is the amount of time it takes. By computing time complexity, we can analyze whether an algorithm is fast or slow.
• Physically clocked time is difficult to use, because execution time depends on many factors:
  - system load
  - number of other programs running
  - instruction set
  - speed of the hardware
• Time complexity is therefore given in terms of frequency count.
• The frequency count of a statement is the number of times it executes.
Time complexity: matrix addition
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        c[i][j] = a[i][j] + b[i][j];
    }
}
The frequency count (FC) can be computed as follows.
for (i = 0; i < n; i++)
• i = 0 executes once: FC = 1
• i < n executes n + 1 times
• i++ executes n times
for (j = 0; j < n; j++)
• j = 0 executes n * 1 = n times
• j < n executes n * (n + 1) = n² + n times
• j++ executes n * n = n² times
c[i][j] = a[i][j] + b[i][j] executes n * n = n² times
Total: 3n² + 4n + 2
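The frequency count above can be verified by instrumenting the loops. The sketch below (the helper name `matrix_add_count` is my own) increments a counter once per executed statement, mirroring the for-loop semantics:

```python
def matrix_add_count(a, b, n):
    """Add two n x n matrices, counting every statement execution."""
    c = [[0] * n for _ in range(n)]
    fc = 0
    fc += 1                      # i = 0
    i = 0
    while True:
        fc += 1                  # i < n test (runs n + 1 times)
        if i >= n:
            break
        fc += 1                  # j = 0
        j = 0
        while True:
            fc += 1              # j < n test (runs n^2 + n times total)
            if j >= n:
                break
            c[i][j] = a[i][j] + b[i][j]
            fc += 1              # the addition statement
            fc += 1              # j++
            j += 1
        fc += 1                  # i++
        i += 1
    return c, fc
```

For n = 2 this yields fc = 3·4 + 4·2 + 2 = 22, matching the formula.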
Measuring an input's size
If the input size is larger, the algorithm usually runs longer. Hence we express the efficiency of an algorithm as a function of a parameter n representing the input size.
E.g., to multiply two matrices we must know the order of the matrices; only then can we enter their elements.
Measuring running time
• Time complexity is measured in terms of frequency count.
• We first identify the algorithm's basic operation: the most time-consuming operation, usually located in the innermost loop.
T(n) ≈ cop · C(n)
• cop: execution time of the algorithm's basic operation on a particular computer.
• C(n): number of times this operation needs to be executed for this algorithm.
• T(n): running time of the algorithm.
Measuring running time
E.g., in sorting, comparing the elements and then placing them in the correct locations is the basic operation.

Problem                            Input size                    Basic operation
Searching a list of n elements     List of n elements            Comparison of the key with each list element
Matrix multiplication              Two matrices of order n × n   Multiplication of the elements in the matrices
GCD of two numbers                 Two numbers                   Division
Order of growth
• Measuring the performance of an algorithm in relation to the input size n is called its order of growth.
• For example, common orders of growth for increasing input size n are 1, log n, n, n log n, n², n³, 2ⁿ, and n!.
Worst-Case, Best-Case, and Average-Case Efficiencies
Worst-case time complexity
• Occurs when there are no matching elements or the first matching element happens to be the last one on the list.
• The algorithm makes the largest number of key comparisons among all possible inputs of size n: Cworst(n) = n.
• The algorithm runs the longest among all possible inputs of that size.
• Time complexity of linear search in the worst case = O(n).
Best-case time complexity
• The algorithm runs the fastest among all possible inputs of that size.
• For example, the best-case inputs for sequential search are lists of size n whose first element equals the search key: Cbest(n) = 1.
• Time complexity of linear search in the best case = O(1).
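The worst and best cases of sequential search are easy to see in code. A minimal sketch (the name `sequential_search` is my own) counts key comparisons alongside the search:

```python
def sequential_search(a, key):
    """Return (index, comparisons); index is -1 if key is absent."""
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1          # the basic operation: key comparison
        if x == key:
            return i, comparisons
    return -1, comparisons
```

An absent key (worst case) forces n comparisons; a key in the first position (best case) takes exactly one.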
Average-case time complexity
• This type of complexity describes the behavior of an algorithm on a typical (random) input.
• Let p be the probability of a successful search and n the total number of elements in the list.
• If the first match occurs at the ith position, the probability of that event is p/n for each i, and i comparisons are made.
• The probability of an unsuccessful search is (1 − p), in which case all n elements are compared, contributing n · (1 − p).
Cavg(n) = [1 · p/n + 2 · p/n + ... + n · p/n] + n · (1 − p)
        = p(n + 1)/2 + n(1 − p)
• For a guaranteed successful search (p = 1), Cavg(n) = (n + 1)/2.
• Time complexity of linear search in the average case = O(n).
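The average-case formula can be checked by summing over the cases directly. This sketch (the name `c_avg` is my own) enumerates the n successful positions and the unsuccessful case:

```python
def c_avg(n, p):
    """Average comparisons of sequential search.

    Successful search with probability p, match equally likely at each
    of the n positions (i comparisons for position i); unsuccessful
    with probability 1 - p (all n elements compared).
    """
    successful = sum(i * (p / n) for i in range(1, n + 1))
    unsuccessful = n * (1 - p)
    return successful + unsuccessful
```

With p = 1 this gives (n + 1)/2, and with p = 0 it degenerates to n, as expected.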
Amortized analysis
Amortized analysis is a method for analyzing an algorithm's complexity: how much of a resource, especially time or memory in the context of computer programs, it takes to execute, averaged over a sequence of operations.
Fundamentals of the Analysis of Algorithm Efficiency (Cont..)
Properties of order of growth:
• If f1(n) is of order g1(n) and f2(n) is of order g2(n), then f1(n) + f2(n) is of order max{g1(n), g2(n)}.
• For a polynomial of degree m, only the maximum degree is considered: the order of growth is n^m.
Asymptotic Notations and its Properties
• The efficiency analysis framework concentrates on the order of growth of an algorithm's basic operation count as the principal indicator of the algorithm's efficiency.
• To compare and rank such orders of growth, computer scientists use three notations: O (big oh), Ω (big omega), and Θ (big theta).
• t(n) and g(n) can be any nonnegative functions defined on the set of natural numbers.
• t(n) will be an algorithm's running time (usually indicated by its basic operation count C(n)), and
• g(n) will be some simple function to compare the count with.
• To choose the best algorithm, we need to check the efficiency of each algorithm, which can be measured by computing its time complexity.
• Asymptotic notation is a shorthand way to represent time complexity. Using asymptotic notations we can characterize running time as "fastest possible", "slowest possible", or "average".
Asymptotic Notations and its Properties (Cont..)
Big oh notation (O)
The big oh notation is denoted by O. It is a method of representing an upper bound on an algorithm's running time. Using big oh notation we can give the longest amount of time the algorithm may take.
DEFINITION A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0.
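The definition can be checked numerically over a finite range of n. As an illustration (the helper `holds_big_o` is my own), the count 3n² + 4n + 2 from the matrix-addition example is in O(n²) with witnesses c = 9 and n0 = 1:

```python
def holds_big_o(t, g, c, n0, n_max=10_000):
    """Empirically check t(n) <= c * g(n) for n0 <= n < n_max.

    This is only evidence over a finite range, not a proof.
    """
    return all(t(n) <= c * g(n) for n in range(n0, n_max))
```

By contrast, n² is not in O(n) for any fixed constant, so the check fails for, say, c = 100.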
Asymptotic Notations and its Properties (Cont..)
Omega notation (Ω)
Omega notation is denoted by Ω. This notation is used to represent a lower bound on an algorithm's running time. Using omega notation we can denote the shortest amount of time taken by the algorithm.
DEFINITION A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0.
Asymptotic Notations and its Properties (Cont..)
Theta notation (Θ)
Theta notation is denoted by Θ. With this notation the running time is bounded both above and below.
DEFINITION A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that
c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.
Asymptotic Notations and its Properties (Cont..)
• For a polynomial of degree m, the maximum degree determines the order. E.g., for 3n³ + 2n² + 10, the time complexity is O(n³).
• Any constant value leads to O(1) time complexity: if f(n) = c, then f(n) ∈ O(1).
Basic Efficiency Classes
The time efficiencies of a large number of algorithms fall into only a few classes:
1 (constant), log n (logarithmic), n (linear), n log n (linearithmic), n² (quadratic), n³ (cubic), 2ⁿ (exponential), n! (factorial).
Mathematical Analysis for Recursive and Non-Recursive Algorithms
Mathematical Analysis for Recursive and Non-Recursive Algorithms (Cont..)
Methods for solving recurrence equations:
• Substitution method
  - Forward substitution
  - Backward substitution
• Master method
• Recursion tree method
In the substitution method, a guess for the solution is made. There are two types of substitution:
• Forward substitution: makes use of the initial condition in the initial term, and the value of the next term is generated from it. This process is continued until some formula is guessed.
• Backward substitution: values are substituted recursively in order to derive a formula.
Forward substitution
T(n) = T(n − 1) + n, with initial condition T(0) = 0.
If n = 1: T(1) = T(0) + 1 = 0 + 1 = 1
If n = 2: T(2) = T(1) + 2 = 1 + 2 = 3
If n = 3: T(3) = T(2) + 3 = 3 + 3 = 6
We can guess the formula T(n) = n(n + 1)/2 = n²/2 + n/2 = O(n²).

Backward substitution
T(n) = T(n − 1) + n   ---- (1), with initial condition T(0) = 0.
Let n = n − 1: T(n − 1) = T(n − 2) + (n − 1)   ---- (2)
Substituting (2) in (1): T(n) = T(n − 2) + (n − 1) + n   ---- (3)
Let n = n − 2: T(n − 2) = T(n − 3) + (n − 2)   ---- (4)
Substituting (4) in (3): T(n) = T(n − 3) + (n − 2) + (n − 1) + n
Backward substitution (Cont..)
After k substitutions: T(n) = T(n − k) + (n − k + 1) + (n − k + 2) + ... + n
If k = n, then T(n) = T(0) + 1 + 2 + ... + n, and since T(0) = 0,
T(n) = 0 + 1 + 2 + ... + n = n(n + 1)/2 = n²/2 + n/2
T(n) = O(n²)
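The closed form derived above can be cross-checked by unrolling the recurrence iteratively (the name `t_recurrence` is my own):

```python
def t_recurrence(n):
    """Compute T(n) = T(n-1) + n with T(0) = 0 by direct iteration."""
    t = 0                        # T(0) = 0
    for k in range(1, n + 1):
        t += k                   # T(k) = T(k-1) + k
    return t
```

The values agree with the guessed formula n(n + 1)/2 for every n tried.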
Mathematical Analysis for Recursive and Non-Recursive Algorithms (Cont..)
For recursive algorithms:
• Decide the input size based on a parameter n.
• Identify the algorithm's basic operation.
• Check how many times the basic operation is executed.
• Set up a recurrence relation, with some initial condition, expressing the basic operation count.
• Solve the recurrence, or at least determine the order of growth of its solution.
General plan for non-recursive algorithms:
• Decide on a parameter n indicating input size.
• Identify the algorithm's basic operation.
• Determine worst, average, and best cases for input of size n.
• Set up a sum for the number of times the basic operation is executed.
• Simplify the sum using standard formulas and rules.
Mathematical Analysis for Recursive and Non-Recursive Algorithms (Cont..)
EXAMPLE 1 Consider the problem of finding the value of the largest element in a list of n numbers. For simplicity, we assume that the list is implemented as an array. The following is pseudocode of a standard algorithm for solving the problem.

ALGORITHM MaxElement(A[0..n − 1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n − 1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n − 1 do
    if A[i] > maxval
        maxval ← A[i]   //if any value is larger than the current maxval, it becomes the new maxval
return maxval
Mathematical Analysis for Recursive and Non-Recursive Algorithms (Cont..)
• The measure of an input's size here is the number of elements in the array, i.e., n.
• There are two operations in the loop's body:
  - the comparison A[i] > maxval, and
  - the assignment maxval ← A[i].
• Which of these two operations should we consider basic? Since the comparison is executed on each repetition of the loop and the assignment is not, we should consider the comparison to be the algorithm's basic operation.
• Note: the number of comparisons is the same for every input of size n, so there is no need for worst-, average-, and best-case analysis here.
• Let C(n) be the number of times this comparison is executed. The algorithm makes one comparison on each execution of the loop, which is repeated for each value of the loop variable i within the bounds 1 and n − 1, inclusive.
Mathematical Analysis for Recursive and Non-Recursive Algorithms (Cont..)
C(n) = Σ (i = 1 to n − 1) 1 = n − 1 ∈ Θ(n)
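The count of n − 1 comparisons for MaxElement can be confirmed with a small instrumented sketch (the Python name `max_element` is my own):

```python
def max_element(a):
    """MaxElement with a counter for the basic operation (comparison)."""
    maxval = a[0]
    comparisons = 0
    for i in range(1, len(a)):
        comparisons += 1         # the comparison A[i] > maxval
        if a[i] > maxval:
            maxval = a[i]
    return maxval, comparisons
```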
Mathematical Analysis for Recursive and Non-Recursive Algorithms (Cont..)
EXAMPLE 2 Consider the element uniqueness problem: check whether all the elements in a given array of n elements are distinct. This problem can be solved by the following straightforward algorithm.

ALGORITHM UniqueElements(A[0..n − 1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n − 1]
//Output: Returns "true" if all the elements in A are distinct
//        and "false" otherwise
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] = A[j] return false   //if any two elements are equal, the array elements are not distinct
return true
Mathematical Analysis for Recursive and Non-Recursive Algorithms (Cont..)
• Input size: n, the number of elements in the array.
• Basic operation: the innermost loop contains a single operation, the comparison of two elements.
• The worst-case input is an array for which the number of element comparisons Cworst(n) is the largest among all arrays of size n.
• There are two kinds of worst-case inputs:
  1) arrays with no equal elements, and
  2) arrays in which the last two elements are the only pair of equal elements.
• For such inputs, one comparison is made for each repetition of the innermost loop, i.e., for each value of the loop variable j between its limits i + 1 and n − 1; this is repeated for each value of the outer loop, i.e., for each value of the loop variable i between its limits 0 and n − 2.
Mathematical Analysis for Recursive and Non-Recursive Algorithms (Cont..)
Cworst(n) = Σ (i = 0 to n − 2) Σ (j = i + 1 to n − 1) 1 = Σ (i = 0 to n − 2) (n − 1 − i) = n(n − 1)/2 ∈ Θ(n²)
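The quadratic worst-case count n(n − 1)/2 for UniqueElements can be checked empirically. In this sketch (the name `unique_elements` is my own), a distinct array exercises the worst case:

```python
def unique_elements(a):
    """UniqueElements with a counter; returns (all_distinct, comparisons)."""
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1     # the basic operation: element comparison
            if a[i] == a[j]:
                return False, comparisons
    return True, comparisons
```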
Mathematical Analysis for Recursive and Non-Recursive Algorithms (Cont..)
ALGORITHM MatrixMultiplication(A[0..n − 1, 0..n − 1], B[0..n − 1, 0..n − 1])
//Multiplies two square matrices of order n by the definition-based algorithm
//Input: Two n × n matrices A and B
//Output: Matrix C = AB
for i ← 0 to n − 1 do
    for j ← 0 to n − 1 do
        C[i, j] ← 0.0
        for k ← 0 to n − 1 do
            C[i, j] ← C[i, j] + A[i, k] ∗ B[k, j]
return C
Mathematical Analysis for Recursive and Non-Recursive Algorithms (Cont..)
• Input size: the matrix order n.
• Basic operation: multiplication.
• We set up a sum for the total number of multiplications M(n) executed by the algorithm.
• One multiplication is executed on each repetition of the algorithm's innermost loop, which is governed by the variable k ranging from the lower bound 0 to the upper bound n − 1.
• Therefore, the number of multiplications made for every pair of specific values of variables i and j is Σ (k = 0 to n − 1) 1 = n.
Mathematical Analysis for Recursive and Non-Recursive Algorithms (Cont..)
• The total number of multiplications M(n) is expressed by the following triple sum:
M(n) = Σ (i = 0 to n − 1) Σ (j = 0 to n − 1) Σ (k = 0 to n − 1) 1 = n³
Time complexity = Θ(n³)
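The n³ multiplication count is easy to verify by instrumenting the definition-based algorithm (the Python name `matrix_multiply` is my own):

```python
def matrix_multiply(a, b, n):
    """Definition-based n x n multiplication, counting the basic operation."""
    c = [[0.0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
                mults += 1       # one multiplication per innermost iteration
    return c, mults
```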
Mathematical Analysis for Non-Recursive Algorithms (Cont..)
EXAMPLE 4 The following algorithm finds the number of binary digits in the binary representation of a positive decimal integer.

ALGORITHM Binary(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
count ← 1
while n > 1 do
    count ← count + 1
    n ← floor(n/2)
return count
Mathematical Analysis for Non-Recursive Algorithms (Cont..)
Trace of the while loop for n = 5, count = 1:
1) n > 1? 5 > 1, so count = 1 + 1 = 2; n = floor(5/2) = 2
2) n > 1? 2 > 1, so count = 2 + 1 = 3; n = floor(2/2) = 1
3) n > 1? 1 > 1 is false, so the loop stops; return count = 3
Analysis
• Input size: n.
• Basic operation: the while loop's comparison n > 1.
• The value of n is about halved on each repetition of the loop, so the number of repetitions is about log₂ n. E.g., floor(log₂ 5) = floor(2.32...) = 2.
• The comparison executes floor(log₂ n) + 1 times; the extra 1 accounts for the final, false comparison.
• Time complexity = Θ(log n)
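Binary(n) translates directly to Python (the name `binary_digit_count` is my own); the returned count equals floor(log₂ n) + 1, which is exactly the bit length of n:

```python
def binary_digit_count(n):
    """Number of binary digits of a positive integer n (ALGORITHM Binary)."""
    count = 1
    while n > 1:                 # the basic operation: comparison n > 1
        count += 1
        n //= 2                  # floor(n/2)
    return count
```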
Mathematical Analysis of Recursive Algorithms
General Plan for Analyzing the Time Efficiency of Recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation.
3. Check whether the number of times the basic operation is executed can vary on different inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.
Mathematical Analysis of Recursive Algorithms (Cont..)
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) ∗ n

• Input size: n.
• The basic operation of the algorithm is multiplication, whose number of executions we denote M(n).
• Since the function F(n) is computed according to the formula F(n) = F(n − 1) ∗ n for n > 0, the number of multiplications M(n) needed to compute it must satisfy the equality
M(n) = M(n − 1) + 1 for n > 0
where M(n − 1) multiplications compute F(n − 1), and the final 1 is the multiplication of F(n − 1) by n.
• 52. Mathematical Analysis of Recursive Algorithms
• M(n − 1) multiplications are spent to compute F(n − 1). E.g., for n = 5, computing F(4) = 1 · 1 · 2 · 3 · 4 takes n − 1 = 4 multiplications.
• One more multiplication is needed to multiply that result by n: F(5) = (1 · 1 · 2 · 3 · 4) · 5.
• When n = 0 the algorithm returns 1 and performs no multiplications. Therefore the initial condition is M(0) = 0: the recursive calls stop when n = 0, and no multiplications are made there.
• The recurrence relation and initial condition for the algorithm's number of multiplications:
M(n) = M(n − 1) + 1 for n > 0, M(0) = 0.
Backward substitutions:
M(n) = M(n − 1) + 1 (substitute M(n − 1) = M(n − 2) + 1)
= [M(n − 2) + 1] + 1 = M(n − 2) + 2 (substitute M(n − 2) = M(n − 3) + 1)
= [M(n − 3) + 1] + 2 = M(n − 3) + 3.
General formula for the pattern: M(n) = M(n − i) + i.
• Substitute i = n in the pattern's formula to get the ultimate result of the backward substitutions:
M(n) = M(n − 1) + 1 = . . . = M(n − i) + i = . . . = M(n − n) + n = n.
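The recurrence M(n) = M(n − 1) + 1 can be observed by instrumenting the recursive factorial (a sketch; names are ours):

```python
def factorial(n):
    """Return (n!, M(n)), where M(n) counts the multiplications performed."""
    if n == 0:
        return 1, 0                  # M(0) = 0: no multiplications
    value, mults = factorial(n - 1)  # M(n - 1) multiplications for F(n - 1)
    return value * n, mults + 1      # one more to multiply F(n - 1) by n
```

factorial(5) reports 5 multiplications, in agreement with M(n) = n.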
  • 53. Tower of Hanoi puzzle • n disks of different sizes that can slide onto any of three pegs. • Initially, all the disks are on the first peg in order of size, the largest on the bottom and the smallest on top. • The goal is to move all the disks to the third peg, using the second one as an auxiliary, if necessary. • We can move only one disk at a time, and it is forbidden to place a larger disk on top of a smaller one. • To move n>1 disks from peg 1 to peg 3 (with peg 2 as auxiliary), • we first move recursively n − 1 disks from peg 1 to peg 2 (with peg 3 as auxiliary), • Then move the largest disk directly from peg 1 to peg 3, and, finally, move recursively n − 1 disks from peg 2 to peg 3 (using peg 1 as auxiliary). • Of course, if n = 1, we simply move the single disk directly from the source peg to the destination peg.
• 54. Tower of Hanoi puzzle
• Input size: the number of disks n
• Moving one disk is the algorithm's basic operation.
• The number of moves M(n) depends on n only, and we get the following recurrence equation for it:
M(n) = M(n − 1) + 1 + M(n − 1) = 2M(n − 1) + 1 for n > 1,
with initial condition M(1) = 1. Solving it by backward substitution gives M(n) = 2^n − 1.
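The recursive move plan described above can be sketched in Python (names are ours); the length of the returned move list realizes M(n) = 2M(n − 1) + 1:

```python
def hanoi(n, source=1, target=3, aux=2, moves=None):
    """Return the list of (from_peg, to_peg) moves for n disks."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))            # move the single disk directly
    else:
        hanoi(n - 1, source, aux, target, moves)  # n-1 disks: source -> auxiliary
        moves.append((source, target))            # largest disk: source -> target
        hanoi(n - 1, aux, target, source, moves)  # n-1 disks: auxiliary -> target
    return moves
```

For n = 3 the list has 7 = 2^3 − 1 moves.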
• 55. Divide-and-Conquer General Method The most well-known algorithm design strategy:
1. Divide an instance of the problem into two or more smaller instances.
2. Solve the smaller instances recursively.
3. Obtain the solution to the original (larger) instance by combining these solutions.
4. A control abstraction for divide and conquer is given below; a control abstraction describes the flow of control of the procedure while leaving its primitive operations unspecified.
• 56. Divide-and-Conquer General Method
• The computing time of the above divide-and-conquer procedure is given by a recurrence relation.
[Diagram: a problem of size n is split into two subproblems of size n/2; the solutions to subproblem 1 and subproblem 2 are combined into a solution to the original problem.]
• 57. Divide-and-Conquer General Method
• For example, if we want to compute the sum of n numbers, then by divide and conquer we can split the list into two halves, sum each half recursively, and add the two partial sums.
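A minimal sketch of that idea (the function name is ours): the range is split in half, each half is summed recursively, and the two partial sums are combined by one addition:

```python
def dc_sum(a, lo=0, hi=None):
    """Sum a[lo:hi] by divide and conquer."""
    if hi is None:
        hi = len(a)
    if hi - lo == 0:
        return 0
    if hi - lo == 1:        # a problem of size 1 is solved directly
        return a[lo]
    mid = (lo + hi) // 2    # divide into two instances of size about n/2
    return dc_sum(a, lo, mid) + dc_sum(a, mid, hi)  # combine the subsolutions
```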
• 58. Divide-and-Conquer General Method
• If we divide a problem of size n into subproblems of size n/b, taking f(n) time to divide the problem and combine the solutions, then the time T(n) for an instance of size n satisfies a recurrence of the form T(n) = aT(n/b) + f(n), where a is the number of subproblems solved.
• 59. General Divide-and-Conquer Recurrence
T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^d), d ≥ 0
Master Theorem:
If a < b^d, T(n) ∈ Θ(n^d)
If a = b^d, T(n) ∈ Θ(n^d log n)
If a > b^d, T(n) ∈ Θ(n^(log_b a))
Note: The same results hold with O instead of Θ.
Examples:
T(n) = 4T(n/2) + n ⇒ T(n) ∈ Θ(n^2) (a = 4 > b^d = 2^1)
T(n) = 4T(n/2) + n^2 ⇒ T(n) ∈ Θ(n^2 log n) (a = 4 = 2^2)
T(n) = 4T(n/2) + n^3 ⇒ T(n) ∈ Θ(n^3) (a = 4 < 2^3)
  • 60. Efficiency Analysis of divide and conquer
  • 61. Efficiency Analysis of divide and conquer (Cont…)
  • 62. Efficiency Analysis of divide and conquer (Cont…)
• 63. Binary Search
• A binary search looks for an item in a list using a divide-and-conquer strategy.
• It is a very efficient algorithm for searching in a sorted array: the algorithm assumes that the items in the array being searched are sorted.
– The algorithm begins at the middle of the array.
– If the item for which we are searching is less than the item in the middle, we know that the item cannot be in the second half of the array.
– We then examine the "middle" element of the remaining half.
– The process continues, with each comparison cutting in half the portion of the array where the item might be.
• The element to be searched for in the sorted array A[0…n−1] is called the KEY element.
• The middle index is computed as mid = (left + right) / 2 (integer division).
• 65. Example
• Consider a list of elements sorted in array A. The KEY element (i.e., the element to be searched) is 60.
• To obtain the middle element we apply the formula mid = (left + right) / 2, giving m = (0 + 6)/2 = 3.
• 66. Binary Search Yes, i.e., the number is present in the list. Thus we can search for the desired number in the list of elements.
• 68. Analysis
• The basic operation in binary search is the comparison of the search key (KEY) with an array element.
• To analyze the efficiency of binary search we must count the number of times the search key is compared with array elements.
• The comparison is also called a three-way comparison because the algorithm uses it to determine whether KEY is smaller than, equal to, or greater than A[m].
• In this algorithm, after one comparison the list of n elements is reduced to a sublist of about n/2 elements.
• 69. Analysis
• In the worst case the algorithm keeps comparing and halving until the search interval becomes empty, i.e., the desired element is found last or is absent.
• One comparison is made and, based on this comparison, an array of about n/2 elements remains to be searched each time.
• Hence, taking the rounded-down value when the array gets divided, the worst-case number of comparisons satisfies
Cworst(n) = Cworst(⌊n/2⌋) + 1 for n > 1, Cworst(1) = 1. ... (1)
• The recurrence can be solved further: assuming n = 2^k, equation (1) gives Cworst(2^k) = k + 1, i.e., Cworst(n) = ⌊log2 n⌋ + 1 ∈ Θ(log n).
• 73. Average Case
• To obtain the average-case efficiency of binary search, consider some sample inputs of size n.
• E.g., searching for key 44: A[2] = 33 is compared with the key, and since 33 < 44 we divide the list and continue in the right sublist, where A[4] = 44 matches the key. Thus a total of 3 comparisons are made to search for 44.
• 74. To summarize the above operations: observing the table given above, we can write
• 75.
• For instance, if n = 2 then …; similarly, if n = 8 then …. Thus we can write that Cavg(n) ≈ log2 n.
• Advantage of binary search
– Binary search is an optimal searching algorithm with which we can search for the desired element very efficiently.
• Disadvantage of binary search
– The algorithm requires the list to be sorted; only then is the method applicable.
• Applications of binary search
– An efficient searching method, used to search for a desired record in a database.
– Used for solving a nonlinear equation in one unknown (the bisection method).
• 76. Implementation of recursive Binary Search algorithm
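Since the implementation slides survive only as images, here is a sketch of the recursive binary search in Python (names are ours):

```python
def binary_search(a, key, left=0, right=None):
    """Return the index of key in the sorted list a, or -1 if it is absent."""
    if right is None:
        right = len(a) - 1
    if left > right:                  # empty range: key not present
        return -1
    mid = (left + right) // 2         # mid = (left + right) / 2
    if key == a[mid]:
        return mid
    if key < a[mid]:
        return binary_search(a, key, left, mid - 1)   # search the left half
    return binary_search(a, key, mid + 1, right)      # search the right half
```

E.g., searching for 60 in [10, 20, 30, 40, 50, 55, 60] returns index 6.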
  • 78. Merge sort • The Merge sort is a sorting algorithm that uses the divide and conquer strategy. In this method division is dynamically carried out. – Divide: divide the n-element sequence into two sub problems of n/2 elements each. – Conquer: sort the two subsequences recursively using merge sort. If the length of a sequence is 1, do nothing since it is already in order. – Combine: merge the two sorted subsequences to produce the sorted answer.
  • 79. Merge sort • Consider the elements as 70,20,30,40,10,50,60
  • 80. Merge sort • Sort the following elements using merge sort 10,5,7,6,1,4,8,3,2,9.
• 84. Logic Explanation First, make two sublists as follows:
• 85. Logic Explanation
• Suppose that at some instant we have two sublists, 20, 30, 40, 70 and 10, 50, 60; then they are merged as follows:
• 95. Analysis
• In the merge sort algorithm, two recursive calls are made, each focusing on n/2 elements of the list. After the two recursive calls, the merge step combines the two sublists, i.e., merges all n elements. Hence we can write the recurrence relation
T(n) = 2T(n/2) + Θ(n) for n > 1, T(1) = 0.
• We can obtain the time complexity of merge sort using two methods:
– Master Theorem
– Substitution method
• By the Master Theorem (a = 2, b = 2, d = 1, so a = b^d), T(n) ∈ Θ(n log n).
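The divide/conquer/combine steps above can be sketched as (names are ours):

```python
def merge_sort(a):
    """Return a new sorted list: divide into halves, sort recursively, merge."""
    if len(a) <= 1:               # a sequence of length 1 is already in order
        return a[:]
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # conquer: sort the left half
    right = merge_sort(a[mid:])   # conquer: sort the right half
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # combine: merge sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])       # append whichever half has leftovers
    merged.extend(right[j:])
    return merged
```

On the earlier example, merge_sort([70, 20, 30, 40, 10, 50, 60]) yields [10, 20, 30, 40, 50, 60, 70].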
• 100. Selection Sort (brute-force technique)
• We start selection sort by scanning the entire given list to find its smallest element and exchange it with the first element, putting the smallest element in its final position in the sorted list.
• Then we scan the list, starting with the second element, to find the smallest among the last n − 1 elements and exchange it with the second element, putting the second smallest element in its final position.
• Generally, on the ith pass through the list, which we number from 0 to n − 2, the algorithm searches for the smallest item among the last n − i elements and swaps it with A[i]. After n − 1 passes, the list is sorted.
• 101. Eg. An array A[5] = {64, 25, 12, 22, 11}, indices 0, 1, 2, 3, 4
PASS 1: i = 0 // range of i = {0, 1, 2, 3}, i.e., for i = 0 to n − 2
min = 0 // min = i
j = 1: 25 < 64, so min = 1 // if A[j] < A[min] then min = j; range of j = {1, 2, 3, 4}
j = 2: 12 < 25, so min = 2
j = 3: 22 < 12 is false
j = 4: 11 < 12, so min = 4 // end of j
Swap 64 and 11 // swap A[i] and A[min]; the list becomes 11, 25, 12, 22, 64
PASS 2: i = 1
min = 1
j = 2: 12 < 25, so min = 2
j = 3: 22 < 12 is false
• 102. j = 4: 64 < 12 is false // end of j
Swap 25 and 12 // the list becomes 11, 12, 25, 22, 64
PASS 3: i = 2
min = 2
j = 3: 22 < 25, so min = 3
j = 4: 64 < 22 is false // end of j
Swap 25 and 22 // the list becomes 11, 12, 22, 25, 64
PASS 4: i = 3
min = 3
j = 4: 64 < 25 is false // end of j; min = 3 = i, so no exchange is needed
Output array A[5] = {11, 12, 22, 25, 64}
• 103. Analysis
The input size is given by the number of elements n. The basic operation is the key comparison A[j] < A[min]. The number of times it is executed depends only on the array size and is given by the following sum:
C(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} (n − 1 − i) = n(n − 1)/2.
Thus, selection sort is a Θ(n^2) algorithm on all inputs.
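The passes traced above can be sketched in Python; the function also counts the key comparisons, which total n(n − 1)/2 (names are ours):

```python
def selection_sort(a):
    """Sort the list a in place; return the number of key comparisons made."""
    n = len(a)
    comparisons = 0
    for i in range(n - 1):             # passes i = 0 .. n-2
        min_idx = i
        for j in range(i + 1, n):      # find the smallest of the last n-i items
            comparisons += 1
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]   # put it in its final position
    return comparisons
```

For A = [64, 25, 12, 22, 11] this sorts the array with 10 = 5 · 4 / 2 comparisons.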
• 104. Bubble sort
• The brute-force approach to the sorting problem is to compare adjacent elements of the list and exchange them if they are out of order.
• By doing this repeatedly, we end up "bubbling up" the largest element to the last position in the list.
• The next pass bubbles up the second largest element, and so on, until after n − 1 passes the list is sorted.
• 105. Eg. an array A[5] = {5, 1, 4, 2, 8}, indices 0, 1, 2, 3, 4
PASS 1: i = 0 // range of i = {0, 1, 2, 3}, i.e., for i = 0 to n − 2
j = 0: 1 < 5, swap 1 and 5 // if A[j+1] < A[j] then swap A[j] and A[j+1]; list becomes 1, 5, 4, 2, 8
j = 1: 4 < 5, swap 4 and 5 // list becomes 1, 4, 5, 2, 8
j = 2: 2 < 5, swap 2 and 5 // list becomes 1, 4, 2, 5, 8
j = 3: 8 < 5 is false // list remains 1, 4, 2, 5, 8
• 106. Now the list is 1, 4, 2, 5, 8
PASS 2: i = 1 // range of j = {0, 1, 2}, i.e., for j = 0 to n − 2 − i
j = 0: 4 < 1 is false // list remains 1, 4, 2, 5, 8
j = 1: 2 < 4, swap 2 and 4 // list becomes 1, 2, 4, 5, 8
j = 2: 5 < 4 is false // list remains 1, 2, 4, 5, 8
PASS 3: now the list is 1, 2, 4, 5, 8
i = 2 // range of j = {0, 1}
j = 0: 2 < 1 is false // list remains 1, 2, 4, 5, 8
j = 1: 4 < 2 is false // list remains 1, 2, 4, 5, 8
• 107. PASS 4: now the list is 1, 2, 4, 5, 8
i = 3 // range of j = {0}
j = 0: 2 < 1 is false // list remains 1, 2, 4, 5, 8
Output array A[5] = {1, 2, 4, 5, 8}
Analysis:
• The number of key comparisons for bubble sort is the same for all arrays of size n;
• it is obtained by a sum that is almost identical to the sum for selection sort:
C(n) = Σ_{i=0}^{n−2} Σ_{j=0}^{n−2−i} 1 = Σ_{i=0}^{n−2} (n − 1 − i) = n(n − 1)/2 ∈ Θ(n^2).
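The passes above can be sketched as (names are ours); the comparison count again comes to n(n − 1)/2:

```python
def bubble_sort(a):
    """Sort the list a in place; return the number of key comparisons made."""
    n = len(a)
    comparisons = 0
    for i in range(n - 1):              # after pass i, the i+1 largest are in place
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j + 1] < a[j]:         # adjacent pair out of order
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons
```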
  • 108. sequential search • This is a straightforward algorithm that searches for a given item (some search key K) in a list of n elements by checking successive elements of the list until either a match with the search key is found or the list is exhausted.
• 109.
• In the worst case, when there are no matching elements or the first matching element happens to be the last one in the list, the algorithm makes the largest number of key comparisons among all possible inputs of size n: Cworst(n) = n.
• The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, i.e., an input (or inputs) of size n for which the algorithm runs fastest among all possible inputs of that size: Cbest(n) = 1.
• Average case: let p be the probability of a successful search (0 ≤ p ≤ 1), with the key equally likely to be in any of the n positions. Then
Cavg(n) = [1 + 2 + . . . + n] · (p/n) + n(1 − p) = p(n + 1)/2 + n(1 − p).
  • 110. • For example, if p = 1 (the search must be successful), the average number of key comparisons made by sequential search is (n + 1)/2; that is, the algorithm will inspect, on average, about half of the list’s elements. • If p = 0 (the search must be unsuccessful), the average number of key comparisons will be n because the algorithm will inspect all n elements on all such inputs. • Cavg(n)=n
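A sketch of sequential search with its two outcomes (the function name is ours):

```python
def sequential_search(a, key):
    """Return the index of the first match with key, or -1 if the list is exhausted."""
    for i, item in enumerate(a):   # check successive elements of the list
        if item == key:
            return i               # best case: 1 comparison if a[0] == key
    return -1                      # worst case: all n comparisons made
```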
• 111. Closest-Pair Problem And Convex-Hull Problems
Step 1: Divide the points given into two subsets S1 and S2 by a vertical line x = c so that half the points lie to the left of or on the line and half the points lie to the right of or on the line.
Step 2: Find recursively the closest pairs for the left and right subsets.
Step 3: Set d = min{d1, d2}. We can limit our attention to the points in the symmetric vertical strip of width 2d as the possible closest pair. Let C1 and C2 be the subsets of points in the left subset S1 and in the right subset S2, respectively, that lie in this vertical strip. The points in C1 and C2 are stored in increasing order of their y coordinates, which is maintained by merging during the execution of the next step.
Step 4: For every point P(x, y) in C1, we inspect points in C2 that may be closer to P than d. There can be no more than 6 such points (because the points in C2 are at least d apart from each other)!
• 112. Quick Sort
• Select a pivot (partitioning element): here, the first element.
• Rearrange the list so that all the elements in the first s positions are smaller than or equal to the pivot and all the elements in the remaining n − s positions are larger than or equal to the pivot (see the next slide for an algorithm).
• Exchange the pivot with the last element in the first (i.e., ≤) subarray; the pivot is now in its final position.
• Sort the two subarrays recursively.
[Partition: elements with A[i] ≤ p | p | elements with A[i] ≥ p]
• 113. Quick Sort
ALGORITHM Quick(A[0…n−1], low, high)
// Problem: sorting of array A[0…n−1]
// Input: an array A[0…n−1] of unsorted elements; low indicates the leftmost element and high the rightmost element of the list
// Output: the array sorted in ascending order
if (low < high) then
// split the array into two subarrays
m ← Partition(A[low…high]) // m is the split position of the array
Quick(A[low…m−1], low, m−1)
Quick(A[m+1…high], m+1, high)
• 114. Quick Sort
ALGORITHM Partition(A[low…high])
// Problem: partition the subarray using the first element as the pivot element.
// Input: subarray A with low as its lowest index and high as its highest index.
// Output: the subarray is partitioned so that the pivot occupies its proper position, and that position (the split index) is returned.
pivot ← A[low]
i ← low + 1; j ← high
• 115. repeat
while (i ≤ high and A[i] < pivot) do i ← i + 1 // left-to-right scan
while (A[j] > pivot) do j ← j − 1 // right-to-left scan
if (i ≥ j) then exit the loop // i has crossed j
swap(A[i], A[j]); i ← i + 1; j ← j − 1 // swap A[i] and A[j]
swap(A[low], A[j]) // when i crosses j, swap A[low] and A[j]
return j // the split position of the list
• 118. The output array is 1 2 3 4 5 7 8 9.
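A Python sketch of the scheme above, keeping the pivot-first Hoare-style partition (names are ours; explicit bound checks are added so the scans cannot run off the array):

```python
def quick_sort(a, low=0, high=None):
    """Sort a in place, using the first element of each subarray as the pivot."""
    if high is None:
        high = len(a) - 1
    if low < high:
        m = partition(a, low, high)   # pivot lands in its final position m
        quick_sort(a, low, m - 1)
        quick_sort(a, m + 1, high)
    return a

def partition(a, low, high):
    """Partition a[low..high] around pivot a[low]; return the split position."""
    pivot = a[low]
    i, j = low + 1, high
    while True:
        while i <= high and a[i] < pivot:   # left-to-right scan
            i += 1
        while a[j] > pivot:                 # right-to-left scan
            j -= 1
        if i >= j:                          # i has crossed j: stop
            break
        a[i], a[j] = a[j], a[i]
        i += 1
        j -= 1
    a[low], a[j] = a[j], a[low]             # place the pivot at the split
    return j
```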
• 119. Analysis
Best Case (split in the middle). Recurrence relation:
C(n) = C(n/2) + C(n/2) + n for n > 1, with initial condition C(1) = 0.
C(n/2) is the time required to sort each of the left and right subarrays; n is the time required for partitioning the subarray. By the Master Theorem (a = 2, b = 2, d = 1), Cbest(n) ∈ Θ(n log n).
• 120.
• In the worst case, one of the two subarrays will be empty, i.e., for inputs for which the problem is already solved:
• if A[0..n − 1] is a strictly increasing array and we use A[0] as the pivot, the left-to-right scan will stop on A[1] while the right-to-left scan will go all the way back to A[0], indicating a split at position 0.
• So, after making n + 1 comparisons to get to this partition and exchanging the pivot A[0] with itself, the algorithm is left with the strictly increasing array A[1..n − 1] to sort.
• 121.
• The total number of key comparisons made will be equal to
Cworst(n) = (n + 1) + n + (n − 1) + . . . + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n^2).
  • 122. • Let Cavg(n) be the average number of key comparisons made by quicksort on a randomly ordered array of size n. • A partition can happen in any position s (0 ≤ s ≤ n−1) after n+1comparisons are made to achieve the partition. • After the partition, the left and right subarrays will have s and n − 1− s elements, respectively. • Assuming that the partition split can happen in each position s with the same probability 1/n, we get the following recurrence relation:
• 123.
• We get the following recurrence relation:
Cavg(n) = (1/n) Σ_{s=0}^{n−1} [(n + 1) + Cavg(s) + Cavg(n − 1 − s)] for n > 1, with Cavg(0) = 0, Cavg(1) = 0.
Its solution turns out to be Cavg(n) ≈ 2n ln n ≈ 1.39 n log2 n.
• 124. Dynamic programming
• Invented by the U.S. mathematician Richard Bellman in the 1950s.
• Dynamic programming is a technique for solving problems with overlapping subproblems.
• It relates a given problem's solution to the solutions of its smaller subproblems.
• Rather than solving overlapping subproblems again and again, dynamic programming suggests solving each of the smaller subproblems only once and recording the results in a table, from which a solution to the original problem can then be obtained.
• Eg: The Fibonacci numbers are the elements of the sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, . . . ,
• which can be defined by the simple recurrence F(n) = F(n − 1) + F(n − 2) for n > 1
• with the two initial conditions F(0) = 0, F(1) = 1.
• 125.
• Dynamic programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems,
• solving each of those subproblems just once, and storing their solutions using a memory-based data structure.
• The next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a modest expenditure in storage space.
• The technique of storing solutions to subproblems instead of recomputing them is called "memoization".
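The Fibonacci example above, with memoization, can be sketched as (names are ours):

```python
def fib(n, table=None):
    """F(n) via dynamic programming: each subproblem is solved once and stored."""
    if table is None:
        table = {0: 0, 1: 1}      # the two initial conditions
    if n not in table:            # solve the subproblem only if not yet recorded
        table[n] = fib(n - 1, table) + fib(n - 2, table)
    return table[n]               # otherwise just look up the stored solution
```

Without the table, the plain recursion recomputes the same F(k) values exponentially often; with it, each F(k) is computed exactly once.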
• 126. DIFFERENCE BETWEEN DC & DP
Divide and Conquer:
1. Divides the given problem into many subproblems, finds the individual solutions, and combines them to get the solution for the main problem.
2. Follows a top-down technique (recursive).
3. Splits the input only at specific points (the midpoint).
4. Each subproblem is independent.
Dynamic Programming:
1. Many decisions and sequences are examined, and all the overlapping subinstances are considered.
2. Follows a bottom-up technique (iterative).
3. Splits the input at every possible point rather than at a particular point; after trying all split points it determines which split point is optimal.
4. Subproblems are dependent on the main problem.
• 127. DIFFERENCE BETWEEN DC & DP (Contd..)
Divide and Conquer: less efficient because of rework on solutions; duplicate subsolutions may be obtained (the duplications are neglected).
Dynamic Programming: more efficient than divide and conquer; duplication in solutions is avoided totally.
Principle of optimality
• An optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances.
• An optimization problem is the problem of finding the best solution among all feasible solutions. Two major types of optimization problems: minimization and maximization.
• 128. Floyd's algorithm
• Floyd's algorithm is for finding the shortest path between every pair of vertices of a graph. The algorithm works for both directed and undirected graphs.
• The graph may contain negative edges, but it should not contain negative cycles.
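Since the algorithm itself appears only on image slides, here is a sketch of Floyd's all-pairs shortest-path computation (names are ours; INF marks a missing edge):

```python
INF = float('inf')

def floyd(weights):
    """weights[i][j] = cost of edge i -> j (INF if absent, 0 on the diagonal).
    Returns the matrix of shortest-path distances between every pair of vertices."""
    n = len(weights)
    dist = [row[:] for row in weights]    # D(0): direct edge costs
    for k in range(n):                    # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```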
  • 129. Knapsack Problem and Memory Functions • dynamic programming algorithm for the knapsack problem: • given n items of known weights w1, . . . , wn and values v1, . . . , vn and a knapsack of capacity W, find the most valuable subset of the items that fit into the knapsack. • Let us consider an instance defined by the first i items, 1≤ i ≤ n, with weights w1, . . . , wi, values v1, . . . , vi , and knapsack capacity j, 1 ≤ j ≤ W. • Let F(i, j) be the value of an optimal solution to this instance, i.e., the value of the most valuable subset of the first i items that fit into the knapsack of capacity j.
• 130. We can divide all the subsets of the first i items that fit the knapsack of capacity j into two categories: those that do not include the ith item and those that do. Note the following:
1. Among the subsets that do not include the ith item, the value of an optimal subset is, by definition, F(i − 1, j).
2. Among the subsets that do include the ith item (hence, j − wi ≥ 0), an optimal subset is made up of this item and an optimal subset of the first i − 1 items that fits into the knapsack of capacity j − wi. The value of such an optimal subset is vi + F(i − 1, j − wi).
These observations lead to the recurrence
F(i, j) = max{F(i − 1, j), vi + F(i − 1, j − wi)} if j − wi ≥ 0, and F(i, j) = F(i − 1, j) if j − wi < 0,
with initial conditions F(0, j) = 0 for j ≥ 0 and F(i, 0) = 0 for i ≥ 0.
• 133. The table F(i, j) is initialized with zeros in row i = 0 and column j = 0:
i \ j: 0 1 2 3 4 5
0:     0 0 0 0 0 0
1:     0
2:     0
3:     0
4:     0
The remaining cells are filled row by row using the recurrence.
• 134. Knapsack Problem and Memory Functions
Steps to select the actual knapsack items:
let i = n and k = W
while (i > 0 and k > 0)
{
if (table[i, k] ≠ table[i − 1, k]) then
mark the ith item as in the knapsack,
i ← i − 1 and k ← k − wi // the ith item is selected
else
i ← i − 1 // the ith item is not selected
}
  • 136. Knapsack Problem and Memory Functions (Cont.…) Example:
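Since the worked table appears only as an image, here is a sketch of the table-filling recurrence together with the item-selection steps above (names and the sample instance are ours):

```python
def knapsack(weights, values, W):
    """Bottom-up table F[i][j]; returns (best value, indices of chosen items)."""
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]   # row i = 0 and column j = 0 are 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            F[i][j] = F[i - 1][j]               # subsets without the ith item
            if weights[i - 1] <= j:             # subsets with it, if it fits
                F[i][j] = max(F[i][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
    chosen, j = [], W                           # backtrack, as in the steps above:
    for i in range(n, 0, -1):                   # item i is in the knapsack exactly
        if F[i][j] != F[i - 1][j]:              # when the table value changed
            chosen.append(i - 1)
            j -= weights[i - 1]
    return F[n][W], sorted(chosen)
```

On a sample instance with weights (2, 1, 3, 2), values (12, 10, 20, 15) and W = 5, this returns value 37 with items 1, 2 and 4 (0-based indices 0, 1, 3).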
• 137. Greedy Technique
• The greedy method uses the subset paradigm or the ordering paradigm to obtain the solution.
• In the subset paradigm, at each stage the decision is made based on whether a particular input is in the optimal solution or not.
Prim's Algorithm
Minimum Spanning Tree:
• A minimum spanning tree of a weighted connected graph G is a spanning tree with the minimum (smallest) total weight.
Spanning tree:
• A spanning tree of a graph G is a subgraph that is a tree and contains all the vertices of G, with no circuit.
• 138. Prim's Algorithm (Cont.….)
ALGORITHM Prim(G)
//Prim's algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
VT ← {v0} //the set of tree vertices can be initialized with any vertex
ET ← ∅
for i ← 1 to |V| − 1 do
find a minimum-weight edge e∗ = (v∗, u∗) among all the edges (v, u) such that v is in VT and u is in V − VT
VT ← VT ∪ {u∗}
ET ← ET ∪ {e∗}
return ET
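A sketch of Prim's loop above on an adjacency-map representation (names are ours):

```python
def prim(graph, start):
    """graph: {vertex: {neighbor: weight}} (undirected, connected).
    Returns the set of MST edges as (v, u, weight) triples."""
    visited = {start}               # V_T, initialized with any vertex
    mst_edges = set()               # E_T
    while len(visited) < len(graph):
        # minimum-weight edge (v, u) with v in V_T and u in V - V_T
        w, v, u = min((graph[v][u], v, u)
                      for v in visited
                      for u in graph[v] if u not in visited)
        visited.add(u)              # V_T <- V_T U {u*}
        mst_edges.add((v, u, w))    # E_T <- E_T U {e*}
    return mst_edges
```

This linear scan for the minimum edge is the simplest realization; a priority queue would make the search faster.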
• 139. Kruskal's Algorithm
Kruskal's algorithm looks at a minimum spanning tree of a weighted connected graph G = (V, E) as an acyclic subgraph with |V| − 1 edges for which the sum of the edge weights is the smallest.
Prim's algorithm vs. Kruskal's algorithm:
• Prim's algorithm obtains the minimum spanning tree by always selecting an edge to a vertex adjacent to the already selected vertices.
• Kruskal's algorithm also obtains a minimum spanning tree, but it is not necessary for a newly chosen edge to be adjacent to the already selected vertices.
• 140. Dijkstra's Algorithm
• Dijkstra's algorithm is used to find shortest paths. It is also known as the single-source shortest-path algorithm: for a given vertex called the source, the shortest paths to all other vertices are obtained.
• The main focus is not to find only one single path but to find the shortest paths from the source to all other remaining vertices. The algorithm is applicable only to graphs with non-negative weights.
ALGORITHM DagShortestPaths(G, s)
//Solves the single-source shortest paths problem for a dag
//Input: A weighted dag G = (V, E) and its source vertex s
//Output: The length dv of a shortest path from s to v and
// its penultimate vertex pv for every vertex v in V
topologically sort the vertices of G
• 141. Dijkstra's Algorithm (Cont…)
for every vertex v do
dv ← ∞; pv ← null
ds ← 0
for every vertex v taken in topological order do
for every vertex u adjacent to v do
if dv + w(v, u) < du then
du ← dv + w(v, u); pu ← v
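The pseudocode above relaxes edges in topological order, which works for dags; for general graphs with non-negative weights, Dijkstra's algorithm performs the same relaxation driven by a priority queue. A sketch (names are ours):

```python
import heapq

def dijkstra(graph, s):
    """graph: {vertex: {neighbor: weight}} with non-negative weights.
    Returns (dist, pred): shortest distances from s and penultimate vertices."""
    dist = {v: float('inf') for v in graph}   # dv <- infinity
    pred = {v: None for v in graph}           # pv <- null
    dist[s] = 0                               # ds <- 0
    heap = [(0, s)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:                       # stale queue entry: skip it
            continue
        for u, w in graph[v].items():         # relax every edge (v, u)
            if dist[v] + w < dist[u]:
                dist[u] = dist[v] + w
                pred[u] = v
                heapq.heappush(heap, (dist[u], u))
    return dist, pred
```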
  • 145. Graph coloring • Given an undirected graph and a number m, determine if the graph can be colored with at most m colors such that no two adjacent vertices of the graph are colored with same color. • Here coloring of a graph means assignment of colors to all vertices or vertex coloring. • The smallest number of colors needed to color a graph G is called its chromatic number.
• 146. Limitations of Algorithm Power
• We have studied dozens of algorithms for solving a variety of different problems.
• But the power of algorithms is limited, for the following reasons:
• Some problems cannot be solved by any algorithm.
• Other problems can be solved algorithmically, but not in polynomial time.
• Even when a problem can be solved in polynomial time by some algorithm, there are usually lower bounds on its efficiency.
Lower-bound arguments
• Lower bound: an estimate of the minimum amount of work needed to solve a given problem. Examples:
• the number of comparisons needed to find the largest element in a set of n numbers
• the number of comparisons needed to sort an array of size n
• 147.
• the number of comparisons necessary for searching in a sorted array
• the number of multiplications needed to multiply two n-by-n matrices
• To obtain the efficiency of a particular algorithm there are two approaches:
• Establish the asymptotic efficiency class of the given problem: what is the asymptotic efficiency class of its lower bound? Check whether the problem lies in the linear, quadratic, logarithmic or exponential category of efficiency classes.
• Whenever we need to find the efficiency of an algorithm, it is better to compare that algorithm with other algorithms that solve the same type of problem.
• Eg. sorting of elements:
• Insertion sort: efficiency n^2 (slower algorithm)
• Quick sort, heap sort: n log n (faster algorithms)
• 148. Contd..
• A lower bound can be
– an exact count
– an efficiency class (Ω)
Tight lower bound: there exists an algorithm with the same efficiency as the lower bound.
Problem / Lower bound / Tightness:
sorting (comparison-based): Ω(n log n), yes
searching in a sorted array: Ω(log n), yes
element uniqueness: Ω(n log n), yes
n-digit integer multiplication: Ω(n), unknown
multiplication of n-by-n matrices: Ω(n^2), unknown
Methods for Establishing Lower Bounds
• trivial lower bounds
• information-theoretic arguments (decision trees)
• adversary arguments
• problem reduction
• 149. Trivial Lower Bounds
Trivial lower bounds: based on counting the number of items that must be processed in the input and generated as the output. Examples:
• A trivial lower bound for generating all permutations of n numbers is Ω(n!), since the size of the output is n!.
• For the product of two n × n matrices, the trivial lower bound is Ω(n^2): here 2n^2 input elements must be processed and n^2 output elements are produced.
• TSP: the trivial lower bound is Ω(n^2), since the input consists of n(n − 1)/2 distances and the output of n + 1 cities.
• Binary search.
Conclusions
• may or may not be useful
• be careful in deciding how many elements must be processed.
• 150. Information-theoretic arguments (decision trees)
Decision tree: a convenient model of algorithms involving comparisons, in which
• internal nodes represent comparisons
• leaves represent outcomes (or input cases)
[Figure: decision tree for 3-element insertion sort, with internal nodes a < b, b < c, a < c and leaves for the six orderings a < b < c, a < c < b, b < a < c, b < c < a, c < a < b, c < b < a.]
• 151. Decision Trees and Sorting Algorithms
• Any comparison-based sorting algorithm can be represented by a decision tree (for each fixed n).
• Number of leaves (outcomes) ≥ n!
• Height of a binary tree with n! leaves ≥ ⌈log2 n!⌉
• Minimum number of comparisons in the worst case ≥ ⌈log2 n!⌉ for any comparison-based sorting algorithm, since the longest path represents the worst case and its length is the height.
• log2 n! ≈ n log2 n (by Stirling's approximation)
• 152. Decision Trees and Sorting Algorithms
• This lower bound is tight (mergesort or heapsort achieve it).
Ex. Prove that 5 (or 7) comparisons are necessary and sufficient for sorting 4 keys (or 5 keys, respectively).
Adversary Arguments
It is a game between the adversary and the (unknown) algorithm. The adversary holds the input, and the algorithm asks the adversary questions about the input. The adversary tries to make the algorithm work the hardest by adjusting the input, always consistently with its previous answers. The lower bound is proven if, after the claimed number of questions, the adversary can still exhibit two different inputs consistent with all the answers given.
Example 1: "Guessing" a number between 1 and n using yes/no questions (Is it larger than x?)
Adversary strategy: put the number in the larger of the two subsets generated by the last question.
• 153. Example 2: Merging two sorted lists of size n, a1 < a2 < … < an and b1 < b2 < … < bn.
Adversary: keep the ordering b1 < a1 < b2 < a2 < … < bn < an in mind and answer comparisons consistently.
Claim: any algorithm requires at least 2n − 1 comparisons to output the above ordering (because it has to compare each pair of adjacent elements in the ordering).
Ex: Design an adversary to prove that finding the smallest element in a set of n elements requires at least n − 1 comparisons.
Problem Reduction
• To get a lower bound for a problem A that is hard to analyze directly, we relate it to a problem B whose lower bound is known.
• If problem A is at least as hard as problem B, then a lower bound for B is also a lower bound for A.
• Hence we have to find a problem B with a known lower bound that reduces to problem A. This technique is called reduction.
  • 154. Strassen’s Matrix Multiplication • An algorithm was published by V. Strassen in 1969. • The principal of the algorithm lies in the discovery that we can find the product C of two 2 × 2 matrices A and B with just seven multiplications as opposed to the eight required by the brute-force algorithm . • This is accomplished by using the following formulas:
• 155.
• Thus, to multiply two 2 × 2 matrices, Strassen's algorithm makes seven multiplications and 18 additions/subtractions,
• whereas the brute-force algorithm requires eight multiplications and four additions.
• By themselves, these numbers do not make Strassen's algorithm attractive for multiplying 2 × 2 matrices; its advantage comes from applying the scheme recursively to large matrices.
• Let A and B be two n × n matrices where n is a power of 2. (If n is not a power of 2, the matrices can be padded with rows and columns of zeros.)
  • 156. • We can divide A,B, and their product C into four n/2 × n/2 submatrices each as follows: For example, C00 can be computed either as A00 ∗ B00 +A01 ∗ B10 or as M1 + M4 − M5 + M7 where M1, M4, M5, and M7 are found by Strassen’s formulas, with the numbers replaced by the corresponding submatrices. If the seven products of n/2 × n/2 matrices are computed recursively by the same method, we have Strassen’s algorithm for matrix multiplication. asymptotic efficiency of this algorithm. If M(n) is the number of multiplications made by Strassen’s algorithm in multiplying two n × n matrices (where n is a power of 2), we get the following recurrence relation for it:
• 158.
• This saving in the number of multiplications is achieved at the expense of making extra additions.
• To multiply two matrices of order n > 1, the algorithm needs to multiply seven matrices of order n/2 and make 18 additions/subtractions of matrices of size n/2; when n = 1, no additions are made since two numbers are simply multiplied.
• Strassen's algorithm's efficiency is Θ(n^(log2 7)) ≈ Θ(n^2.807), which is a better efficiency class than the Θ(n^3) of the brute-force method.
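The seven products and the four output formulas can be checked directly for the 2 × 2 base case (a sketch; the function name is ours):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with Strassen's seven products M1..M7."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    return [[m1 + m4 - m5 + m7, m3 + m5],      # C00, C01
            [m2 + m4, m1 + m3 - m2 + m6]]      # C10, C11
```

In the full algorithm these scalar operations become operations on n/2 × n/2 submatrices, applied recursively.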
• 159. Travelling Salesman Problem Using Branch and Bound
• The TSP can be stated as follows: there are n cities, and a travelling salesman has to visit each city exactly once and return to the city from which he started.
• This amounts to finding the shortest Hamiltonian circuit of a graph.
• Lower bounds are computed using the formula
LB = Σ_{v ∈ V} (sum of the costs of the two least-cost edges adjacent to v) / 2
• 160. For the graph on the vertices a, b, c, d, e:
LB = [(1 + 3) + (3 + 6) + (1 + 2) + (3 + 4) + (2 + 3)]/2 = 28/2 = 14 (this is the bound at the root of the state-space tree).
• Compute the bounds at level 1: a-b, a-c, a-d, a-e.
• Compute the bounds at level 2: a-b-c, a-b-d, a-b-e.
• Compute the bounds at level 3: a-b-c-d, a-b-c-e and a-b-d-c, a-b-d-e.
• Thus the state-space tree can be built.
• 162. LB = [(1 + 3) + (3 + 6) + (1 + 2) + (3 + 4) + (2 + 3)]/2 = 28/2 = 14 (the bound at the root of the state-space tree).
Node 1: include the distance a-b in the computation for its endpoints, together with one other minimum-cost edge. For each vertex, find the two least-cost adjacent edges:
a = a-b + a-c = 3 + 1 = 4 (a-b included here)
b = a-b + b-c = 3 + 6 = 9 (a-b included here)
c = a-c + c-e = 1 + 2 = 3
d = d-e + c-d = 3 + 4 = 7
e = c-e + d-e = 2 + 3 = 5
LB = (4 + 9 + 3 + 7 + 5)/2 = 28/2 = 14
• 163. Node 2: include the distance a-c in the computation for its endpoints, together with one other minimum-cost edge. For each vertex, find the two least-cost adjacent edges:
a = a-b + a-c = 3 + 1 = 4 (a-c included here)
b = a-b + b-c = 3 + 6 = 9
c = a-c + c-e = 1 + 2 = 3 (a-c included here)
d = d-e + c-d = 3 + 4 = 7
e = c-e + d-e = 2 + 3 = 5
LB = (4 + 9 + 3 + 7 + 5)/2 = 28/2 = 14
• 164. Node 3: the tour starts with the edge a-d, so a-d must be counted for its endpoints together with their next-smallest edge; for every other vertex, take its two least-cost edges.
a = a-c + a-d = 1 + 5 = 6 (a-d included here)
b = a-b + b-c = 3 + 6 = 9
c = a-c + c-e = 1 + 2 = 3
d = d-e + a-d = 3 + 5 = 8 (a-d included here)
e = c-e + d-e = 2 + 3 = 5
LB = (6 + 9 + 3 + 8 + 5)/2 = 31/2 = 15.5, rounded up to 16
• 165. Node 4: the tour starts with the edge a-e, so a-e must be counted for its endpoints together with their next-smallest edge; for every other vertex, take its two least-cost edges.
a = a-c + a-e = 1 + 8 = 9 (a-e included here)
b = a-b + b-c = 3 + 6 = 9
c = a-c + c-e = 1 + 2 = 3
d = c-d + d-e = 4 + 3 = 7
e = c-e + a-e = 2 + 8 = 10 (a-e included here)
LB = (9 + 9 + 3 + 7 + 10)/2 = 38/2 = 19
• 166. Node 5: the partial tour a-b-c; include the edges (a-b) and (b-c) wherever possible; for the remaining vertices take the two least-cost edges.
a = a-b + a-c = 3 + 1 = 4 (a-b included here)
b = a-b + b-c = 3 + 6 = 9 (a-b and b-c included here)
c = a-c + b-c = 1 + 6 = 7 (b-c included here)
d = c-d + d-e = 4 + 3 = 7
e = c-e + d-e = 2 + 3 = 5
LB = (4 + 9 + 7 + 7 + 5)/2 = 32/2 = 16
• Nodes 6 (a-b-d) and 7 (a-b-e) are computed in the same way.
• 167. Node 8: the complete tour a-b-c-d closed by (e, a); include the edges (a-b), (b-c), (c-d), (d-e), (e-a) wherever possible.
a = a-b + a-e = 3 + 8 = 11 (a-b and e-a included here)
b = a-b + b-c = 3 + 6 = 9 (a-b and b-c included here)
c = b-c + c-d = 6 + 4 = 10 (b-c and c-d included here)
d = c-d + d-e = 4 + 3 = 7 (c-d and d-e included here)
e = a-e + d-e = 8 + 3 = 11 (d-e and e-a included here)
LB = (11 + 9 + 10 + 7 + 11)/2 = 48/2 = 24
• Nodes 9, a-b-c-e-(d, a), and 10, a-b-d-c-(e, a), are computed in the same way.
• 168. Node 11: the complete tour a-b-d-e closed by (c, a); include the edges (a-b), (b-d), (d-e), (e-c), (c-a) wherever possible.
a = a-b + a-c = 3 + 1 = 4 (a-b and c-a included here)
b = a-b + b-d = 3 + 7 = 10 (a-b and b-d included here)
c = a-c + c-e = 1 + 2 = 3 (e-c and c-a included here)
d = b-d + d-e = 7 + 3 = 10 (b-d and d-e included here)
e = c-e + d-e = 2 + 3 = 5 (d-e and e-c included here)
LB = (4 + 10 + 3 + 10 + 5)/2 = 32/2 = 16
• At node 11 we obtain the optimum tour. • Hence the optimal tour of the TSP is a-b-d-e-c-a with cost 16.
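The branch-and-bound answer can be cross-checked by brute force over all tours from a. This is a hedged sketch: the edge weights come from the node computations above, except b-e = 9, which never appears in the slides and is assumed here:

```python
from itertools import permutations

# Edge weights of the slides' example graph (b-e = 9 is assumed).
W = {('a','b'): 3, ('a','c'): 1, ('a','d'): 5, ('a','e'): 8,
     ('b','c'): 6, ('b','d'): 7, ('b','e'): 9,
     ('c','d'): 4, ('c','e'): 2, ('d','e'): 3}

def cost(u, v):
    return W[(u, v)] if (u, v) in W else W[(v, u)]

def tour_cost(tour):
    """Length of the closed tour visiting the vertices in order."""
    return sum(cost(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))

# Enumerate every tour starting at 'a' and keep the cheapest.
best = min((('a',) + p for p in permutations('bcde')), key=tour_cost)
print(best, tour_cost(best))   # ('a','b','d','e','c') with cost 16
```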
• 170. Approximation Algorithms for NP-Hard Problems • If any NP-hard problem can be solved in polynomial time, then all NP-complete problems can also be solved in polynomial time. • All NP-complete problems are NP-hard, but not all NP-hard problems are NP-complete. • A problem is NP-complete if it belongs to the class NP and every problem in NP can be reduced to it in polynomial time. • The class NP consists of the decision problems that can be solved by non-deterministic polynomial-time algorithms. • Non-deterministic: no particular rule determines the next step. • Approximation algorithms for TSP: the decision version of the problem is NP-complete, while the optimization version is NP-hard. Two approximation algorithms are commonly used for this NP-hard problem:
• 171. Approximation algorithms for TSP: • the Nearest Neighbour algorithm; • the Twice-Around-the-Tree algorithm. Nearest Neighbour algorithm • Idea: always travel next to the nearest unvisited city. Q1. Using the nearest neighbour algorithm, obtain an approximate solution for the given travelling salesman problem and find its accuracy ratio. Accuracy ratio: r(Sa) = f(Sa)/f(S*), where • Sa is the approximate solution, • f(Sa) is the value of the objective function for the solution produced by the approximation algorithm, and • f(S*) is the value of the objective function for the optimal solution S*. • In general r(Sa) ≥ 1. • The closer r(Sa) is to 1, the better the approximate solution.
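The heuristic can be sketched in a few lines. The graph below is an assumption: its edge weights are read off the worked costs on the next slide (a-b = 2, b-c = 3, c-d = 2, d-a = 7, b-d = 4, a-c = 4):

```python
# Assumed edge weights of the slides' four-city example.
W = {('a','b'): 2, ('a','c'): 4, ('a','d'): 7,
     ('b','c'): 3, ('b','d'): 4, ('c','d'): 2}

def cost(u, v):
    return W[(u, v)] if (u, v) in W else W[(v, u)]

def nearest_neighbour(start, vertices):
    """Greedily hop to the nearest unvisited city."""
    tour, unvisited = [start], set(vertices) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda v: cost(tour[-1], v))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbour('a', 'abcd')          # ['a', 'b', 'c', 'd']
f_sa = sum(cost(tour[i], tour[(i + 1) % 4]) for i in range(4))   # 14
print(tour, f_sa, f_sa / 12)   # accuracy ratio vs f(S*) = 12
```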
• 172. Nearest Neighbour algorithm • Applying the nearest neighbour algorithm to the given TSP produces the tour a-b-c-d-a, so f(Sa) = 2 + 3 + 2 + 7 = 14. • The optimal tour is a-b-d-c-a with f(S*) = 2 + 4 + 2 + 4 = 12. • Accuracy ratio: r(Sa) = f(Sa)/f(S*) = 14/12 = 1.16, so the tour Sa is about 16% longer than the optimal tour S*. • Drawback: the heuristic can be forced into an arbitrarily long tour. The decisive factor is the distance d-a: if w is the weight of the edge d-a, then r(Sa) = f(Sa)/f(S*) = (7 + w)/12. As w increases, r(Sa) increases; for large w, r(Sa) tends to infinity, so the approximation algorithm can be arbitrarily far from the optimal solution. Twice-Around-the-Tree algorithm: • Compute a minimum spanning tree of the graph. • Start at an arbitrary city, walk around the tree, and record the vertices visited. • Eliminate duplicates from the generated vertex list to obtain the tour.
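The three twice-around-the-tree steps can be sketched as follows (a hedged example, reusing the assumed four-city weights from the nearest-neighbour question; the MST is built with Prim's algorithm, and the "walk around the tree with duplicates removed" is a DFS preorder):

```python
# Assumed edge weights of the slides' four-city example.
W = {('a','b'): 2, ('a','c'): 4, ('a','d'): 7,
     ('b','c'): 3, ('b','d'): 4, ('c','d'): 2}

def cost(u, v):
    return W[(u, v)] if (u, v) in W else W[(v, u)]

def mst_prim(vertices):
    """Step 1: compute a minimum spanning tree (Prim's algorithm)."""
    vertices = list(vertices)
    in_tree, edges = {vertices[0]}, []
    while len(in_tree) < len(vertices):
        u, v = min(((u, v) for u in in_tree for v in vertices
                    if v not in in_tree), key=lambda e: cost(*e))
        edges.append((u, v))
        in_tree.add(v)
    return edges

def twice_around(vertices):
    adj = {}
    for u, v in mst_prim(vertices):
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    # Steps 2-3: walking around the tree and skipping already-recorded
    # vertices is exactly a DFS preorder of the MST.
    tour, seen = [], set()
    def dfs(u):
        seen.add(u)
        tour.append(u)
        for w in adj.get(u, []):
            if w not in seen:
                dfs(w)
    dfs(list(vertices)[0])
    return tour + [tour[0]]   # close the tour

print(twice_around('abcd'))   # ['a', 'b', 'c', 'd', 'a']
```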
• 174. Christofides Algorithm • There is an approximation algorithm with a better performance ratio for the Euclidean travelling salesman problem: the Christofides algorithm. For the example given, it yields the tour a − b − c − e − d − a of length 37.