Design & Analysis of
Algorithms
Syed Zaid Irshad
Lecturer, Department of Computer Science, MAJU
MS Software Engineering
BSc Computer System Engineering
Algorithm
• An Algorithm is a finite sequence of instructions, each of which has a clear meaning and can be
performed with a finite amount of effort in a finite length of time.
• No matter what the input values may be, an algorithm terminates after executing a finite number of
instructions.
• In addition, every algorithm must satisfy the following criteria:
• Input: there are zero or more quantities, which are externally supplied
• Output: at least one quantity is produced
• Definiteness: each instruction must be clear and unambiguous
• Finiteness: if we trace out the instructions of an algorithm, then for all cases the algorithm will
terminate after a finite number of steps
Algorithm
• Effectiveness: every instruction must be sufficiently basic that it can in principle be carried out by
a person using only pencil and paper.
Areas of Study of Algorithms
• How to devise algorithms?
• Techniques – Incremental, Divide & Conquer, Branch and Bound , Dynamic Programming, Greedy
Algorithms, Randomized Algorithm, Backtracking
• How to analyze algorithms?
• Analysis of Algorithms (performance analysis) refers to the task of determining how much computing
time & storage an algorithm requires
• How to test a program?
• Debugging - Debugging is the process of executing programs on sample data sets to determine
whether faulty results occur and, if so, to correct them.
• Profiling or performance measurement is the process of executing a correct program on data sets and
measuring the time and space it takes to compute the results
Areas of Study of Algorithms
• How to validate algorithms?
• First phase (algorithm validation): check that the algorithm computes the correct
answer for all possible legal inputs.
• Second phase (program proving or program verification): show that the program derived
from the algorithm is correct. The solution can be stated in two forms:
• First form: a program annotated with a set of assertions about the input and
output variables of the program, expressed in predicate calculus
• Second form: is called a specification
Performance of Programs
• The performance of a program is the amount of computer memory and
time needed to run a program.
• Time Complexity
• Space Complexity
Time Complexity
• The time needed by an algorithm expressed as a function of the size of a
problem is called the time complexity of the algorithm.
• The time complexity of a program is the amount of computer time it needs
to run to completion.
• The limiting behavior of the complexity as size increases is called the
asymptotic time complexity.
• It is the asymptotic complexity of an algorithm, which ultimately determines
the size of problems that can be solved by the algorithm.
Space Complexity
• The space complexity of a program is the amount of memory it needs to
run to completion.
• The space needed by a program has the following components:
• Instruction space: Instruction space is the space needed to store the compiled version
of the program instructions. It depends on:
• The compiler used to compile the program into machine code.
• The compiler options in effect at the time of compilation.
• The target computer.
Space Complexity
• The space needed by a program has the following components:
• Data space: Data space is the space needed to store all constant and variable values.
Data space has two components:
• Space needed by constants and simple variables in program.
• Space needed by dynamically allocated objects such as arrays and class instances.
• Environment stack space: The environment stack is used to save information needed to
resume execution of partially completed functions.
Algorithm Design Goals
• The three basic design goals that one should strive for in a program are:
• Try to save Time
• A program that runs faster is a better program, so saving time is an obvious goal.
• Try to save Space
• A program that saves space over a competing program is considered desirable.
• Try to save Face
• By preventing the program from locking up or generating reams of garbled data.
Classification of Algorithms
• If “n” is the number of data items to be processed or degree of polynomial or the
size of the file to be sorted or searched or the number of nodes in a graph etc.
• 1
• Log n
• n
• n log n
• n^2
• n^3
• 2^n
Classification of Algorithms
• 1 (Constant/Best case)
• Most instructions of a program are executed once or at most only a few times.
• If all the instructions of a program have this property,
• We say that its running time is a constant.
• Log n (Logarithmic/Divide ignore part)
• When the running time of a program is logarithmic, the program gets slightly slower as n grows.
• This running time commonly occurs in programs that solve a big problem by transforming it into a
smaller problem, cutting the size by some constant fraction.
• When n is a million, log n is only about 20. Whenever n doubles, log n increases by a constant, but log n
does not double until n increases to n^2.
Classification of Algorithms
• n (Linear/Examine each)
• When the running time of a program is linear, it is generally the case that a small
amount of processing is done on each input element.
• This is the optimal situation for an algorithm that must process n inputs.
• n log n (Linear logarithmic/Divide use all parts)
• This running time arises for algorithms that solve a problem by breaking it up into
smaller sub-problems, solving them independently, and then combining the solutions.
• When n doubles, the running time more than doubles.
Classification of Algorithms
• n^2 (Quadratic/Nested loops)
• When the running time of an algorithm is quadratic, it is practical for use only on relatively
small problems.
• Quadratic running times typically arise in algorithms that process all pairs of data items
(perhaps in a double nested loop) whenever n doubles, the running time increases four-fold.
• n^3 (Cubic/Nested loops)
• Similarly, an algorithm that process triples of data items (perhaps in a triple–nested loop) has
a cubic running time and is practical for use only on small problems.
• Whenever n doubles, the running time increases eight-fold.
Classification of Algorithms
• 2^n (Exponential/All subsets)
• Few algorithms with exponential running time are likely to be appropriate for practical
use; such algorithms arise naturally as “brute–force” solutions to problems.
• Whenever n doubles, the running time squares.
Complexity of Algorithms
• The complexity of an algorithm M is the function f(n) which gives the
running time and/or storage space requirement of the algorithm in terms of
the size “n” of the input data.
• Mostly, the storage space required by an algorithm is simply a multiple of the
data size “n”.
• Complexity shall refer to the running time of the algorithm.
Complexity of Algorithms
• The function f(n), which gives the running time of an algorithm, depends not only
on the size “n” of the input data but also on the particular data.
• The complexity function f(n) for certain cases are:
• Best Case : The minimum possible value of f(n) is called the best case.
• Average Case : The expected value of f(n).
• Worst Case : The maximum value of f(n) for any possible input.
Rate of Growth
• The following notations are commonly use notations in performance analysis
and used to characterize the complexity of an algorithm:
• Big–OH (O) (Upper Bound)
• The growth rate of f(n) is less than or equal to (≤) that of g(n).
• Big–OMEGA (Ω) (Lower Bound)
• The growth rate of f(n) is greater than or equal to (≥) that of g(n).
• Big–THETA (ϴ) (Same Order)
• The growth rate of f(n) equals (=) the growth rate of g(n).
Rate of Growth
• Little–OH (o)
• lim (n→∞) f(n)/g(n) = 0
• The growth rate of f(n) is less than that of g(n).
• Little-OMEGA (ω)
• lim (n→∞) f(n)/g(n) = ∞
• The growth rate of f(n) is greater than that of g(n).
Analyzing Algorithms
n    log n  n log n  n^2     n^3         2^n
1    0      0        1       1           2
2    1      2        4       8           4
4    2      8        16      64          16
8    3      24       64      512         256
16   4      64       256     4,096       65,536
32   5      160      1,024   32,768      4,294,967,296
64   6      384      4,096   262,144     ????????
128  7      896      16,384  2,097,152   ????????
256  8      2,048    65,536  16,777,216  ????????
Amortized Analysis
• In an amortized analysis, we average the time required to perform a sequence
of data structure operations over all the operations performed.
• With amortized analysis, we can show that the average cost of an operation
is small, if we average over a sequence of operations, even though a single
operation within the sequence might be expensive.
• Amortized analysis differs from average-case analysis in that probability is
not involved; an amortized analysis guarantees the average performance of
each operation in the worst case.
Amortized Analysis
• Three most common techniques used in amortized analysis:
• Aggregate Analysis
• Accounting method
• Potential method
Aggregate Analysis
• In which we determine an upper bound T(n) on the total cost of a sequence
of n operations.
• The average cost per operation is then T(n)/n.
• We take the average cost as the amortized cost of each operation.
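• As a concrete illustration (a sketch added here, not part of the original slides): appending n items to a doubling dynamic array. A single append may copy the whole array, yet the total cost T(n) over n appends stays below 3n, so the amortized cost T(n)/n is O(1).

    def total_append_cost(n):
        capacity, size, cost = 1, 0, 0
        for _ in range(n):
            if size == capacity:      # expensive append: copy all elements over
                cost += size          # `size` copy operations
                capacity *= 2
            cost += 1                 # the write itself
            size += 1
        return cost

    for n in (10, 100, 1000, 10000):
        c = total_append_cost(n)
        print(n, c, c / n)            # the amortized cost stays below 3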
Accounting method
• When there is more than one type of operation, each type of operation may
have a different amortized cost.
• The accounting method overcharges some operations early in the sequence,
storing the overcharge as “prepaid credit” on specific objects in the data
structure.
• Later in the sequence, the credit pays for operations that are charged less
than they cost.
Potential method
• The potential method maintains the credit as the “potential energy” of the
data structure instead of associating the credit with individual objects within
the data structure.
• The potential method, which is like the accounting method in that we
determine the amortized cost of each operation and may overcharge
operations early on to compensate for undercharges later.
The Rule of Sums
• Suppose that T1(n) and T2(n) are the running times of two programs fragments P1
and P2, and that T1(n) is O(f(n)) and T2(n) is O(g(n)).
• Then T1(n) + T2(n), the running time of P1 followed by P2, is O(max(f(n), g(n)));
this is called the rule of sums.
• For example, suppose that we have three steps whose running times are respectively
O(n^2), O(n^3) and O(n. log n).
• Then the running time of the first two steps executed sequentially is O (max(n^2,
n^3)) which is O(n^3).
• The running time of all three together is O(max (n^3, n. log n)) which is O(n^3).
The rule of products
• If T1(n) and T2(n) are O(f(n)) and O(g(n)) respectively.
• Then T1(n)*T2(n) is O(f(n) g(n)).
• It follows from the product rule that O(c f(n)) means the same thing as
O(f(n)) if “c” is any positive constant.
• For example, O(n^2/2) is same as O(n^2).
The Running time of a program
• When solving a problem, we are faced with a choice among algorithms.
• The basis for this can be any one of the following:
• We would like an algorithm that is easy to understand, code and debug.
• We would like an algorithm that makes efficient use of the computer’s resources,
especially, one that runs as fast as possible.
Measuring the running time of a program
• The running time of a program depends on factors such as:
• The input to the program.
• The quality of code generated by the compiler used to create the object program.
• The nature and speed of the instructions on the machine used to execute the program,
and
• The time complexity of the algorithm underlying the program.
Asymptotic Analysis of Algorithms
• This approach is based on the asymptotic complexity measure.
• This means that we don't try to count the exact number of steps of a
program, but how that number grows with the size of the input to the
program.
• That gives us a measure that will work for different operating systems,
compilers and CPUs.
• The asymptotic complexity is written using big-O notation.
Rules for using big-O
• The most important property is that big-O gives an upper bound only.
• If an algorithm is O(n^2), it doesn't have to take n^2 steps (or a constant
multiple of n^2). But it can't take more than a constant multiple of n^2.
• So, any algorithm that is O(n), is also an O(n^2) algorithm. If this seems
confusing, think of big-O as being like "<".
• Any number that is < n is also < n^2.
Rules for using big-O
• Ignoring constant factors: O(c f(n)) = O(f(n)), where c is a constant; e.g., O(20 n^3) =
O(n^3)
• Ignoring smaller terms: If a<b then O(a+b) = O(b), for example O(n^2+n) = O(n^2)
• Upper bound only: If a<b then an O(a) algorithm is also an O(b) algorithm.
• n and log n are "bigger" than any constant, from an asymptotic view (that means for large
enough n). So, if k is a constant, an O(n + k) algorithm is also O(n), by ignoring smaller
terms. Similarly, an O(log n + k) algorithm is also O(log n).
• Another consequence of the last item is that an O(n log n + n) algorithm, which is O(n(log
n + 1)), can be simplified to O(n log n).
Properties of Asymptotic Notations
• 1. General Properties:
• If f(n) is O(g(n)) then a*f(n) is also O(g(n)); where a is a constant.
• Similarly, this property satisfies both Θ and Ω notation.
• 2. Transitive Properties:
• If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) = O(h(n))
• Similarly, this property satisfies both Θ and Ω notation.
• We can say
• If f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) = Θ(h(n))
• If f(n) is Ω (g(n)) and g(n) is Ω (h(n)) then f(n) = Ω (h(n))
Properties of Asymptotic Notations
• 3. Reflexive Properties:
• Reflexive properties are always easy to understand after transitive.
• If f(n) is given, then f(n) is O(f(n)), since the maximum value of f(n) is f(n) itself!
• Hence x = f(n) and y = O(f(n)) always tie themselves in a reflexive relation.
• Example: f(n) = n²; O(n²) i.e., O(f(n))
• Similarly, this property satisfies both Θ and Ω notation.
• We can say that:
• If f(n) is given, then f(n) is Θ(f(n)).
• If f(n) is given, then f(n) is Ω (f(n)).
Properties of Asymptotic Notations
• 4. Symmetric Properties:
• If f(n) is Θ(g(n)) then g(n) is Θ(f(n)).
• Example: f(n) = n² and g(n) = n²
• then f(n) = Θ(n²) and g(n) = Θ(n²)
• This property holds only for Θ notation.
• 5. Transpose Symmetric Properties:
• If f(n) is O(g(n)) then g(n) is Ω (f(n)).
• Example: f(n) = n, g(n) = n²
• then n is O(n²) and n² is Ω (n)
• This property holds only for O and Ω notations.
Properties of Asymptotic Notations
• 6. Some More Properties:
• If f(n) = O(g(n)) and f(n) = Ω(g(n))
then f(n) = Θ(g(n))
• If f(n) = O(g(n)) and d(n)=O(e(n))
• then f(n) + d(n) = O (max(g(n), e(n)))
• Example: f(n) = n i.e., O(n)
• d(n) = n² i.e., O(n²)
• then f(n) + d(n) = n + n² i.e., O(n²)
• If f(n)=O(g(n)) and d(n)=O(e(n))
• then f(n) * d(n) = O(g(n) * e(n))
• Example: f(n) = n i.e., O(n)
• d(n) = n² i.e., O(n²)
• then f(n) * d(n) = n * n² = n³ i.e., O(n³)
Calculating the running time of a program
• x = 3*y + 2; takes O(1) time (a single assignment).
• Comparing running times 5n^3 and 100n^2: 5n^3/100n^2 = n/20, so the 5n^3 program is faster only while n < 20.
• O(n), a single loop:
for (i = 1; i <= n; i++)
v[i] = v[i] + 1;
• O(n^2), a doubly nested loop:
for (i = 1; i <= n; i++)
for (j = 1; j <= n; j++)
a[i,j] = b[i,j] * x;
• O(n^3), a triply nested loop (matrix multiplication):
for (i = 1; i <= n; i++)
for (j = 1; j <= n; j++) {
C[i,j] = 0;
for (k = 1; k <= n; k++)
C[i,j] = C[i,j] + A[i,k] * B[k,j];
}
General rules for the analysis of programs
• The running time of each assignment read and write statement can usually be taken
to be O(1).
• The running time of a sequence of statements is determined by the sum rule.
• The running time of an if–statement is the cost of conditionally executed
statements, plus the time for evaluating the condition
• The time to execute a loop is the sum, over all times around the loop, of the time to
execute the body and the time to evaluate the condition for termination.
Recurrence
• Many algorithms are recursive in nature.
• When we analyze them, we get a recurrence relation for time complexity.
• We get running time on an input of size n as a function of n and the running
time on inputs of smaller sizes.
• For example, in Merge Sort, to sort a given array, we divide it in two halves
and recursively repeat the process for the two halves.
Recurrence
• Time complexity of Merge Sort can be written as T(n) = 2T(n/2) + c*n.
• There are mainly four ways for solving recurrences.
• Substitution Method
• Recurrence Tree Method
• Master Method
• Iteration Method
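• Before applying any of these methods, it can help to tabulate a recurrence numerically. The sketch below (an addition, assuming the base case T(1) = 1) evaluates the Merge Sort recurrence and compares it with n log n, the closed form the methods that follow derive:

    import math
    from functools import lru_cache

    # Evaluate T(n) = 2*T(n/2) + n with T(1) = 1 (an assumed base case).
    @lru_cache(maxsize=None)
    def T(n):
        if n <= 1:
            return 1
        return 2 * T(n // 2) + n

    for n in (2, 16, 256, 4096):
        print(n, T(n), n * math.log2(n))   # T(n) tracks n*log2(n) plus a linear term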
Substitution Method
• One way to solve a divide-and-conquer recurrence equation is to use the iterative
substitution method.
• In using this method, we assume that the problem size n is fairly large and we then
substitute the general form of the recurrence for each occurrence of the function T on the
right-hand side.
• For example, consider the recurrence T(n) = 2T(n/2) + n
• We guess the solution as T(n) = O(n log n). Now we use induction to prove our guess.
• We need to prove that T(n) <= cn log n. We can assume that it is true for values smaller
than n.
Substitution Method
• Assume T(n/2) <= c(n/2) log(n/2). Then:
• T(n) <= 2c(n/2) log(n/2) + n
• = cn log(n/2) + n
• = cn log n − cn log 2 + n
• = cn log n − cn + n
• <= cn log n (for c >= 1)
Recurrence Tree Method
• Another way of characterizing recurrence equations is to use the recursion
tree method.
• Like the substitution method, this technique uses repeated substitution to
solve a recurrence equation, but it differs from the iterative substitution
method in that, rather than being an algebraic approach, it is a visual
approach.
• Example: T(n) = 3T(n/4) + cn^2
Master Method
• The master theorem is a formula for solving recurrences of the form
T(n) = aT(n/b) + f(n), where a >= 1 and b > 1 and f(n) is
asymptotically positive.
• This recurrence describes an algorithm that divides a problem of size n into
a subproblems, each of size n/b, and solves them recursively.
Master Method
• To find which case T(n) belongs to, we can compute log_b(a); if the value is:
• Greater than c, it lies in case 1
• Equal to c, it lies in case 2
• Less than c, it lies in case 3
• where c is the power of n in f(n), i.e., f(n) = Θ(n^c)
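• A small sketch (an addition, covering only the polynomial case f(n) = Θ(n^c) described above) that mechanizes this comparison:

    import math

    def master_theorem(a, b, c):
        """Classify T(n) = a*T(n/b) + Theta(n^c) by comparing c with log_b(a)."""
        e = math.log(a, b)                   # log_b(a), the critical exponent
        if c < e:
            return f"case 1: Theta(n^{e:.2f})"
        if c == e:
            return f"case 2: Theta(n^{c} * log n)"
        return f"case 3: Theta(n^{c})"

    # The three examples worked out on the next slides:
    print(master_theorem(5, 4, 1))   # case 1: Theta(n^1.16)
    print(master_theorem(4, 4, 1))   # case 2: Theta(n^1 * log n)
    print(master_theorem(4, 5, 1))   # case 3: Theta(n^1)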
Example
• T(n) = 4T(n/4) + 5n
• T(n) = 4T(n/5) + 5n
• T(n) = 5T(n/4) + 5n

T(n) = 5T(n/4) + 5n
a = 5, b = 4, f(n) = 5n, c = 1, k = 0
Find: log_b(a) = log_4(5) = 1.16
Because c < log_b(a), it is case 1 of MT:
T(n) = Θ(n^(log_b a)) = Θ(n^1.16)

T(n) = 4T(n/4) + 5n
a = 4, b = 4, f(n) = 5n, c = 1, k = 0
Find: log_b(a) = log_4(4) = 1
Because c = log_b(a), it is case 2 of MT:
T(n) = Θ(n^(log_b a) · log^(k+1) n) = Θ(n^1 · log^(0+1) n) = Θ(n log n)

T(n) = 4T(n/5) + 5n
a = 4, b = 5, f(n) = 5n, c = 1, k = 0
Find: log_b(a) = log_5(4) = 0.86
Because c > log_b(a), it is case 3 of MT:
T(n) = Θ(f(n)) = Θ(n)
Iteration Method
• The Iteration Method is also known as the Iterative Method, Backwards
Substitution, Substitution Method, and Iterative Substitution.
• It is a technique or procedure in computational mathematics used to solve a
recurrence relation that uses an initial guess to generate a sequence of improving
approximate solutions for a class of problems, in which the nth approximation is
derived from the previous ones.
• A Closed-Form Solution is an equation that solves a given problem in terms of
functions and mathematical operations from a given generally-accepted set.
• T(n) = 2T(n/2) + 7
• Find what T(n/2) is:
• T(n/2) = 2T(n/2^2) + 7 = 2T(n/4) + 7
• Now put T(n/2) in the initial equation:
• T(n) = 2(2T(n/4) + 7) + 7 = 4T(n/4) + 21
• By generalizing the equation:
• T(n) = 2^i T(n/2^i) + 7(2^i − 1)
Example
• The recurrence stops when n = 1, so we can say that
• n/2^i = 1
• 2^i = n
• log 2^i = log n
• i log 2 = log n
• i = log n
• Now replace i in the general equation:
• T(n) = 2^(log n) T(n/2^(log n)) + 7(2^(log n) − 1)
• T(n) = nT(n/n) + 7(n − 1)
• T(n) = nT(1) + 7(n − 1)
• T(n) = 2n + 7(n − 1) (taking T(1) = 2)
• T(n) = 9n − 7
• T(n) = O(n)
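• A quick sketch (an addition, using the slide's base case T(1) = 2) confirming the closed form T(n) = 9n − 7 on powers of two:

    def T(n):
        # Recurrence from the slide: T(n) = 2*T(n/2) + 7, with T(1) = 2.
        return 2 if n == 1 else 2 * T(n // 2) + 7

    for n in (1, 2, 4, 8, 16, 32):
        assert T(n) == 9 * n - 7      # the derived closed form matches exactly
    print("closed form 9n - 7 verified")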
Solution
• T(n) = 2T(n/2) + 4n
• Find what T(n/2) is:
• T(n/2) = 2T(n/2^2) + 4(n/2) = 2T(n/4) + 2n
• Now put T(n/2) in the initial equation:
• T(n) = 2(2T(n/4) + 2n) + 4n = 4T(n/4) + 8n
• By generalizing the equation:
• T(n) = 2^i T(n/2^i) + 4in
• The recurrence stops when n = 1, so we can say that
• n/2^i = 1
• 2^i = n
• log 2^i = log n
• i log 2 = log n
• i = log n
• Now replace i in the general equation:
• T(n) = 2^(log n) T(n/2^(log n)) + 4n log n
Solution
• T(n) = nT(n/n) + 4n log n
• T(n) = nT(1) + 4n log n
• T(n) = n + 4n log n (taking T(1) = 1)
• T(n) ≈ n log n
• T(n) = O(n log n)
Incremental Technique
• An incremental algorithm is given a sequence of input and finds a sequence
of solutions that build incrementally while adapting to the changes in the
input.
• Insertion Sort
Insertion Sort
• Iterate from arr[1] to arr[n−1] over the array.
• Compare the current element (key) to its predecessor.
• If the key element is smaller than its predecessor, compare it to the elements before.
• Move the greater elements one position up to make space for the swapped element.

Index:   0  1  2  3  4  5  6  7  8  9
Element: 4  3  2 10 12  1  5  6  7  9
Execution
Index:   0  1  2  3  4  5  6  7  8  9
Element: 4  3  2 10 12  1  5  6  7  9
Checking whether index[1] is less than index[0],
which it is, in this case.
We then swap the elements with each other.
Index:   0  1  2  3  4  5  6  7  8  9
Element: 3  4  2 10 12  1  5  6  7  9
Execution
Index:   0  1  2  3  4  5  6  7  8  9
Element: 3  4  2 10 12  1  5  6  7  9
Now checking whether index[2] is less than index[1],
which it is, in this case.
So now we check whether index[2] is also less than index[0],
which it is, in this case.
We then swap the elements with each other.
Index:   0  1  2  3  4  5  6  7  8  9
Element: 2  3  4 10 12  1  5  6  7  9
Execution
Index:   0  1  2  3  4  5  6  7  8  9
Element: 2  3  4 10 12  1  5  6  7  9
Now checking whether index[3] is less than index[2],
which it is not, in this case.
So now we skip it.
Index:   0  1  2  3  4  5  6  7  8  9
Element: 2  3  4 10 12  1  5  6  7  9
Execution
Index:       0  1  2  3  4  5  6  7  8  9
Element:     4  3  2 10 12  1  5  6  7  9
Iteration 1: 3  4  2 10 12  1  5  6  7  9
Iteration 2: 2  3  4 10 12  1  5  6  7  9
Iteration 3: 2  3  4 10 12  1  5  6  7  9
Iteration 4: 2  3  4 10 12  1  5  6  7  9
Iteration 5: 1  2  3  4 10 12  5  6  7  9
Iteration 6: 1  2  3  4  5 10 12  6  7  9
Iteration 7: 1  2  3  4  5  6 10 12  7  9
Iteration 8: 1  2  3  4  5  6  7 10 12  9
Iteration 9: 1  2  3  4  5  6  7  9 10 12
Algorithm Design
• INSERTION-SORT(index)
• for i = 1 to n
• key ← index[i]
• j ← i – 1
• while j >= 0 and index[j] > key
• index[j+1] ← index[j]
• j ← j – 1
• End while
• index[j+1] ← key
• End for
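• A runnable Python rendering of the same pseudocode (a sketch added for reference; the slides themselves stay in pseudocode):

    def insertion_sort(index):
        """Sort the list in place, mirroring the INSERTION-SORT pseudocode above."""
        for i in range(1, len(index)):
            key = index[i]                # current element to place
            j = i - 1
            while j >= 0 and index[j] > key:
                index[j + 1] = index[j]   # shift greater elements one position up
                j -= 1
            index[j + 1] = key            # drop the key into its slot
        return index

    print(insertion_sort([4, 3, 2, 10, 12, 1, 5, 6, 7, 9]))
    # [1, 2, 3, 4, 5, 6, 7, 9, 10, 12]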
Algorithm Design
• INSERTION-SORT(index)
• for i = 1 to 9
• key ← 3
• j ← 0
• while 0 >= 0 and 4 > 3
• index[0+1] = index[1] ← 4
• j ← 0 – 1 = −1
• End while
• index[−1+1] = index[0] ← 3
• End for
Index 0 1 2 3 4 5 6 7 8 9
Element 4 3 2 10 12 1 5 6 7 9
For(i=1) 3 4 2 10 12 1 5 6 7 9
Algorithm Design
• INSERTION-SORT(index)
• for i = 1 to 9
• key ← 2
• j ← 1
• while 1 >= 0 and 4 > 2
• index[1+1] = index[2] ← 4
• j ← 1 – 1 = 0
• while 0 >= 0 and 3 > 2
• index[0+1] = index[1] ← 3
• j ← 0 – 1 = −1
• End while
• index[−1+1] = index[0] ← 2
• End for

Index:    0  1  2  3  4  5  6  7  8  9
Element:  4  3  2 10 12  1  5  6  7  9
For(i=2): 2  3  4 10 12  1  5  6  7  9
Algorithm Analysis
j      while loop (tj)  Statement 6 or 7  for loop
0      2                1                 1
1      3                2                 1
2      1                0                 1
3      1                0                 1
4      6                5                 1
5      3                2                 1
6      3                2                 1
7      3                2                 1
8      3                2                 1
Total  25               16                9
Algorithm Analysis
• INSERTION-SORT(A)                  Cost  Times
• for i = 1 to n                     c0    n
• key ← A[i]                         c1    n − 1
• j ← i – 1                          c2    n − 1
• while j >= 0 and A[j] > key        c3    Σ tj
• A[j+1] ← A[j]                      c4    Σ (tj − 1)
• j ← j – 1                          c5    Σ (tj − 1)
• End while
• A[j+1] ← key                       c6    n − 1
• End for
Algorithm Analysis
• General Case
• T(n) = c0·n + c1(n − 1) + c2(n − 1) + c3·Σ tj + c4·Σ(tj − 1) + c5·Σ(tj − 1) + c6(n − 1)
Algorithm Analysis
• Worst Case (Reverse Sorted)
• Σ tj = n(n+1)/2 − 1,  Σ (tj − 1) = n(n−1)/2
• T(n) = c0·n + c1(n − 1) + c2(n − 1) + c3(n(n+1)/2 − 1) + c4·n(n−1)/2 + c5·n(n−1)/2 + c6(n − 1)
• T(n) = (c3/2 + c4/2 + c5/2)n^2 + (c0 + c1 + c2 + c6 + c3/2 − c4/2 − c5/2)n − (c1 + c2 + c3 + c6)
• T(n) = an^2 + bn − c
• Rate of Growth
• Θ(n^2)
Properties
Time Complexity       Big-O: O(n^2), Big-Omega: Ω(n), Big-Theta: Θ(n^2)
Auxiliary Space       O(1)
Boundary Cases        Insertion sort takes maximum time to sort if elements are sorted in reverse
                      order, and minimum time (order of n) when elements are already sorted.
Algorithmic Paradigm  Incremental Approach
Sorting In Place      Yes
Stable                Yes
Online                Yes
Uses                  Insertion sort is used when the number of elements is small. It can also be
                      useful when the input array is almost sorted and only a few elements are
                      misplaced in a complete big array.
Divide-and-Conquer approach
• A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-
problems of the same or related type, until these become simple enough to be solved directly.
• The solutions to the sub-problems are then combined to give a solution to the original problem.
• Merge Sort
• A typical Divide and Conquer algorithm solves a problem using the following three steps.
• Divide: Break the given problem into subproblems of same type. This step involves breaking the problem into
smaller sub-problems.
• Conquer: Recursively solve these sub-problems.
• Combine: Appropriately combine the answers.
Merge Sort
• Divide the array into two parts
• Sort each part of the array
• Combine the results into a single array

Index:   0  1  2  3  4  5  6  7  8  9
Element: 4  3  2 10 12  1  5  6  7  9

Execution Example
Divide:  4 3 2 10 12 | 1 5 6 7 9
Divide:  4 3 | 2 10 12 | 1 5 | 6 7 9
Divide:  4 | 3 | 2 | 10 12 | 1 | 5 | 6 | 7 9
Merge:   3 4 | 2 10 12 | 1 5 | 6 7 9
Merge:   2 3 4 10 12 | 1 5 6 7 9
Merge:   1 2 3 4 5 6 7 9 10 12
Algorithm Design
• mergeSort(A, p, r):
• if p >= r
• return
• q = (p+r)/2
• mergeSort(A, p, q)
• mergeSort(A, q+1, r)
• merge(A, p, q, r)
• // A = array, p = starting index, r = ending index
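• The merge step itself appears only as a cost table on the next slide, so here is a runnable sketch of both routines in Python (an addition; the sentinel-free merge is one common variant):

    def merge(A, p, q, r):
        """Merge the sorted runs A[p..q] and A[q+1..r] back into A."""
        left, right = A[p:q + 1], A[q + 1:r + 1]
        i = j = 0
        for k in range(p, r + 1):
            if j >= len(right) or (i < len(left) and left[i] <= right[j]):
                A[k] = left[i]; i += 1
            else:
                A[k] = right[j]; j += 1

    def merge_sort(A, p, r):
        if p >= r:                    # zero or one element: already sorted
            return
        q = (p + r) // 2
        merge_sort(A, p, q)
        merge_sort(A, q + 1, r)
        merge(A, p, q, r)

    A = [4, 3, 2, 10, 12, 1, 5, 6, 7, 9]
    merge_sort(A, 0, len(A) - 1)
    print(A)   # [1, 2, 3, 4, 5, 6, 7, 9, 10, 12]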
Algorithm Analysis
cost  times
c1    1
c2    1
c3    1
c4    n1 + 1 = n/2 + 1
c5    n/2
c6    n2 + 1 = n/2 + 1
c7    n/2
c8    1
c9    1
c10   1
c11   1
c12   n + 1
c13   n
c14   m
c15   m
c16   n − m
c17   n − m
Algorithm Analysis
• General Case:
• T(n) = c1 + c2 + c3 + c4(n/2 + 1) + c5(n/2) + c6(n/2 + 1) + c7(n/2)
+ c8 + c9 + c10 + c11 + c12(n + 1) + c13·n + c14·m + c15·m + c16(n − m) + c17(n − m)
• = (c4/2 + c5/2 + c6/2 + c7/2 + c12 + c13 + c16 + c17)n + (c14 + c15 − c16 − c17)m
+ (c1 + c2 + c3 + c4 + c6 + c8 + c9 + c10 + c11 + c12)
• = an + bm + c
• = Θ(n)
Algorithm Analysis
• Best/Worst/Average Case:
• Array divisions: log n
• Number of levels: log n + 1
• Finding the middle point: O(1)
• Merge required at each level: O(n)
• Multiplying: n · (log n + 1) = n log n + n = O(n log n)
Properties
Time Complexity       O(n log n)
Auxiliary Space       O(n)
Boundary Cases        -
Algorithmic Paradigm  Divide and Conquer
Sorting In Place      No
Stable                Yes
Online                No
Uses                  Merge Sort is useful for sorting linked lists in O(n log n) time.
Heap Sort
• Heap
• Data Structure that manages information
• Array represented as a Near Complete Binary Tree: Each level, except possibly the
last, is filled, and all nodes in the last level are as far left as possible
• A.length and A.heap-size
Algorithm Design
• Heap
• height (tree) and height (node)
Algorithm Design
• Max-Heap
• A[PARENT(i)] >= A[i] for every node i, except for the Root
• Min-Heap
• A[PARENT(i)] <= A[i] for every node i, except for the Root
Algorithm Design
• BUILD-MAX-HEAP
Algorithm Design
• MAX-HEAPIFY
Algorithm Design
i = 2
MAX-HEAPIFY(A, 2)
l = LEFT(i) = LEFT(2) = 2i = 2·2 = 4
r = RIGHT(i) = RIGHT(2) = 2i + 1 = 2·2 + 1 = 5
if l = 4 <= A.heap-size = 10 and A[l] = A[4] = 14 > A[i] = A[2] = 4 ✓
largest = l = 4
if r = 5 <= A.heap-size = 10 and A[r] = A[5] = 7 > A[largest] = A[4] = 14 ✗
if largest = 4 ≠ i = 2 ✓
exchange A[i] = A[2] with A[largest] = A[4]
MAX-HEAPIFY(A, largest = 4)
Algorithm Design
i = 4
MAX-HEAPIFY(A, 4)
l = LEFT(i) = LEFT(4) = 2i = 2·4 = 8
r = RIGHT(i) = RIGHT(4) = 2i + 1 = 2·4 + 1 = 9
if l = 8 <= A.heap-size = 10 and A[l] = A[8] = 2 > A[i] = A[4] = 4 ✗
else largest = i = 4
if r = 9 <= A.heap-size = 10 and A[r] = A[9] = 8 > A[largest] = A[4] = 4 ✓
largest = r = 9
if largest = 9 ≠ i = 4 ✓
exchange A[i] = A[4] with A[largest] = A[9]
MAX-HEAPIFY(A, largest = 9)
Algorithm Design
i = 9
MAX-HEAPIFY(A, 9)
l = LEFT(i) = LEFT(9) = 2i = 2·9 = 18
r = RIGHT(i) = RIGHT(9) = 2i + 1 = 2·9 + 1 = 19
if l = 18 <= A.heap-size = 10 ✗
else largest = i = 9
if r = 19 <= A.heap-size = 10 ✗
if largest = 9 ≠ i = 9 ✗ (recursion stops)
Algorithm Design
• MAX-HEAPIFY
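• The MAX-HEAPIFY and BUILD-MAX-HEAP pseudocode appear only as figures in the deck, so here is a runnable sketch (an addition, using 0-based indices instead of the slides' 1-based ones), including the full HEAPSORT loop:

    def max_heapify(A, i, heap_size):
        """Sink A[i] until the subtree rooted at i satisfies the max-heap property."""
        l, r = 2 * i + 1, 2 * i + 2          # 0-based LEFT and RIGHT
        largest = i
        if l < heap_size and A[l] > A[largest]:
            largest = l
        if r < heap_size and A[r] > A[largest]:
            largest = r
        if largest != i:
            A[i], A[largest] = A[largest], A[i]
            max_heapify(A, largest, heap_size)

    def heap_sort(A):
        n = len(A)
        for i in range(n // 2 - 1, -1, -1):  # BUILD-MAX-HEAP
            max_heapify(A, i, n)
        for end in range(n - 1, 0, -1):      # repeatedly move the max to the back
            A[0], A[end] = A[end], A[0]
            max_heapify(A, 0, end)
        return A

    print(heap_sort([4, 3, 2, 10, 12, 1, 5, 6, 7, 9]))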
Heap Sort: Analysis
• MAX-HEAPIFY
cost  times (Worst Case)
c1    1
c2    1
c3    1
c4    1/2
c5    0
c6    1
c7    1/2
c8    1
c9    1
c10   ?
Heap Sort: Analysis
• MAX-HEAPIFY (Worst Case)
T(n) <= T(2n/3) + Θ(1)
= O(log2 n)
Algorithm Analysis
• The Worst Case occurs when the last level of the Heap is exactly (or at least) half-full:
• A Heap is a Near Complete Binary Tree (the left sub-tree of any node is always larger than or equal
in size to its right sub-tree)
• In the Worst Case, recursion would take place as many times as possible, which is only
possible if at least the left sub-tree is completely filled
• To find an upper bound on the size of the sub-trees (the maximum number of times
that recursion would take place), we need to observe the maximum size of the left sub-tree
only
• Proof
Heap Sort: Design
Algorithm Analysis
• MAX-HEAPIFY
cost  times (Best Case)  times (Average Case)
c1    1                  1
c2    1                  1
c3    1                  1
c4    0                  1/2
c5    1                  1/2
c6    1                  1
c7    0                  1/2
c8    1                  1
c9    0                  1/2
c10   0                  ?
Algorithm Analysis
• Probabilities of Mutually Exclusive Events get summed up:
P(largest ≠ i) + P(largest = i) = 1/2 + 1/2 = 1
• Probabilities of Independent Events get multiplied:
P(largest ≠ i) = 1/2 = P(largest = l) AND/OR P(largest = r) = 1/2 × 1/2
Algorithm Analysis
• MAX-HEAPIFY (Best Case)
T(n) = Θ(1)
• MAX-HEAPIFY (Average Case)
T(n) <= T((2n/3)/2) + Θ(1)
= O((log2 n)/2)
= O(log2 n)
= O(h)
Quick Sort
• Efficient algorithm for sorting many elements via comparisons
• Divide-and-Conquer approach
Quick Sort
• It picks an element as the pivot and partitions the given array around
the picked pivot. There are many different versions of quicksort that
pick the pivot in different ways:
1. Always pick the first element as the pivot.
2. Always pick the last element as the pivot (used in the trace below).
3. Pick a random element as the pivot.
4. Pick the median as the pivot.
Algorithm Design
• To sort an entire Array, the initial call is
• QUICKSORT(A, 1, A.length)
Algorithm Design
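• The QUICKSORT and PARTITION pseudocode slides survive only as figures, so here is a runnable sketch (an addition, 0-based, using the last-element Lomuto partition that the trace below follows):

    def partition(A, p, r):
        """Lomuto partition: place pivot A[r] into its final slot, return its index."""
        x = A[r]                     # pivot
        i = p - 1
        for j in range(p, r):
            if A[j] <= x:            # grow the "<= pivot" region
                i += 1
                A[i], A[j] = A[j], A[i]
        A[i + 1], A[r] = A[r], A[i + 1]
        return i + 1

    def quicksort(A, p, r):
        if p < r:
            q = partition(A, p, r)
            quicksort(A, p, q - 1)
            quicksort(A, q + 1, r)

    A = [2, 8, 7, 1, 3, 5, 6, 4]
    quicksort(A, 0, len(A) - 1)
    print(A)   # [1, 2, 3, 4, 5, 6, 7, 8]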
A = {2, 8, 7, 1, 3, 5, 6, 4}   (indices 1 … 8)
p = 1, r = 8
PARTITION(A, p, r) = PARTITION(A, 1, 8)
x = A[r] = A[8] = 4
i = p − 1 = 1 − 1 = 0
Iteration 1:
for loop: j = p = 1
if A[j] = A[1] = 2 <= x = 4 ✓
2 8 7 1 3 5 6 4
Algorithm Design
i = i + 1 = 0 + 1 = 1
exchange A[i] = A[1] = 2 with A[j] = A[1] = 2
Iteration 2:
for loop: j = 2
if A[j] = A[2] = 8 <= x = 4 ✗
Iteration 3:
for loop: j = 3
if A[j] = A[3] = 7 <= x = 4 ✗
2 8 7 1 3 5 6 4
Algorithm Design
Iteration 4:
for loop: j = 4
if A[j] = A[4] = 1 <= x = 4 ✓
i = i + 1 = 1 + 1 = 2
exchange A[i] = A[2] = 8 with A[j] = A[4] = 1   → 2 1 7 8 3 5 6 4
Iteration 5:
for loop: j = 5
if A[j] = A[5] = 3 <= x = 4 ✓
i = i + 1 = 2 + 1 = 3
exchange A[i] = A[3] = 7 with A[j] = A[5] = 3   → 2 1 3 8 7 5 6 4
Algorithm Design
Iteration 6:
for loop: j = 6
if A[j] = A[6] = 5 <= x = 4 ✗
Iteration 7:
for loop: j = 7
if A[j] = A[7] = 6 <= x = 4 ✗
After the loop (j reaches r = 8):
exchange A[i + 1] = A[3 + 1] = A[4] = 8 with A[r] = A[8] = 4
return i + 1 = 3 + 1 = 4
2 1 3 4 7 5 6 8
Algorithm Design
Algorithm Analysis
cost  times
c1    1
c2    1
c3    n
c4    n − 1
c5    (n − 1)/2
c6    (n − 1)/2
c7    1
c8    1
Algorithm Analysis
T(n) = c1 + c2 + c3·n + c4(n − 1) + c5(n − 1)/2 + c6(n − 1)/2 + c7 + c8
= (c3 + c4 + c5/2 + c6/2)n + (c1 + c2 − c4 − c5/2 − c6/2 + c7 + c8)
= an + b
= Θ(n)
• Best Case, Worst Case, or Average Case?
Algorithm Analysis
cost  times (Worst Case: pivot is largest)  times (Best Case: pivot is smallest)
c1    1       1
c2    1       1
c3    n       n
c4    n − 1   n − 1
c5    n − 1   0
c6    n − 1   0
c7    1       1
c8    1       1
Algorithm Analysis
• Worst Case:
• The worst case occurs when the partition process always picks greatest or smallest element as
pivot.
• If we consider above partition strategy where last element is always picked as pivot, the worst
case will occur when the array is already sorted in increasing or decreasing order.
• Following is the recurrence for the worst case:
• T(n) = T(0) + T(n−1) + Θ(n)
• which is equivalent to
• T(n) = T(n−1) + Θ(n)
• The solution of the above recurrence is Θ(n^2).
Algorithm Analysis
• Best Case:
• The best case occurs when the partition process always picks the middle element as
pivot. Following is the recurrence for the best case:
• T(n) = 2T(n/2) + Θ(n)
• The solution of the above recurrence is Θ(n log n).
• It can be solved using case 2 of the Master Theorem.
Algorithm Analysis
• Average Case:
• To do average case analysis, we need to consider all possible permutations of the array and
calculate the time taken by every permutation, which doesn't look easy.
• We can get an idea of the average case by considering the case when partition puts O(n/10)
elements in one set and O(9n/10) elements in the other set. Following is the recurrence for this
case:
• T(n) = T(n/10) + T(9n/10) + Θ(n)
• The solution of the above recurrence is also O(n log n).
Randomized Quick Sort: Analysis
• Average-case partitioning (Unbalanced partitioning)
• Random Sampling
𝑇(𝑛) = 𝛩(𝑛)
Randomized Quick Sort: Analysis
• Average-case partitioning (Unbalanced partitioning)
T(n) = O(n log2 n)
Counting Sort
• Assumptions:
• Each of the 𝑛 input elements is an integer in the range: 0 𝑡𝑜 𝑘, where 𝑘 is an integer
• When 𝑘 = 𝑂(𝑛), 𝑻(𝒏) = 𝜣(𝒏)
• Determines for each input element 𝑥, the number of elements less than 𝑥
• Places element 𝑥 into correct position in array
• External Arrays required:
• 𝐵[1 … 𝑛]: Sorted Output
• 𝐶[0 … 𝑘]: Temporary Storage
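• A runnable sketch of COUNTING-SORT matching this description (an addition, 0-based):

    def counting_sort(A, k):
        """Stable counting sort for integers in the range 0..k."""
        C = [0] * (k + 1)
        for a in A:                  # count occurrences of each key
            C[a] += 1
        for i in range(1, k + 1):    # prefix sums: C[i] = number of elements <= i
            C[i] += C[i - 1]
        B = [0] * len(A)
        for a in reversed(A):        # walk backwards to keep the sort stable
            C[a] -= 1
            B[C[a]] = a
        return B

    print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5))
    # [0, 0, 2, 2, 3, 3, 3, 5]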
Algorithm Design
A = {2, 5, 3, 0, 2, 3, 0, 3}   (indices 1 … 8)
k = 5
COUNTING-SORT(A, B, k) = COUNTING-SORT(A, B, 5)
let C[0 … 5] be a new array
for i = 0 to 5
C[i] = 0

C (indices 0 … 5):  0 0 0 0 0 0
Algorithm Design
for j = 1 to A.length = 8
C[A[1]] = C[2] = C[2] + 1 = 0 + 1 = 1
C[A[2]] = C[5] = C[5] + 1 = 0 + 1 = 1
C[A[3]] = C[3] = C[3] + 1 = 0 + 1 = 1
C[A[4]] = C[0] = C[0] + 1 = 0 + 1 = 1
C[A[5]] = C[2] = C[2] + 1 = 1 + 1 = 2
C[A[6]] = C[3] = C[3] + 1 = 1 + 1 = 2
C[A[7]] = C[0] = C[0] + 1 = 1 + 1 = 2
C[A[8]] = C[3] = C[3] + 1 = 2 + 1 = 3

C (indices 0 … 5):  2 0 2 3 0 1
Algorithm Design
for i = 1 to k = 5
C[1] = C[1] + C[0] = 0 + 2 = 2
C[2] = C[2] + C[1] = 2 + 2 = 4
C[3] = C[3] + C[2] = 3 + 4 = 7
C[4] = C[4] + C[3] = 0 + 7 = 7
C[5] = C[5] + C[4] = 1 + 7 = 8

C (indices 0 … 5):  2 2 4 7 7 8
Algorithm Design
for j = A.length = 8 downto 1
j = 8: B[C[A[8]]] = B[C[3]] = B[7] = A[8] = 3;  C[3] = C[3] − 1 = 7 − 1 = 6
j = 7: B[C[A[7]]] = B[C[0]] = B[2] = A[7] = 0;  C[0] = C[0] − 1 = 2 − 1 = 1
j = 6: B[C[A[6]]] = B[C[3]] = B[6] = A[6] = 3;  C[3] = C[3] − 1 = 6 − 1 = 5
j = 5: B[C[A[5]]] = B[C[2]] = B[4] = A[5] = 2;  C[2] = C[2] − 1 = 4 − 1 = 3
j = 4: B[C[A[4]]] = B[C[0]] = B[1] = A[4] = 0;  C[0] = C[0] − 1 = 1 − 1 = 0
Algorithm Design
j = 3: B[C[A[3]]] = B[C[3]] = B[5] = A[3] = 3;  C[3] = C[3] − 1 = 5 − 1 = 4
j = 2: B[C[A[2]]] = B[C[5]] = B[8] = A[2] = 5;  C[5] = C[5] − 1 = 8 − 1 = 7
j = 1: B[C[A[1]]] = B[C[2]] = B[3] = A[1] = 2;  C[2] = C[2] − 1 = 3 − 1 = 2

B (indices 1 … 8):  0 0 2 2 3 3 3 5
C (indices 0 … 5):  0 2 2 4 7 7
Algorithm Design
Algorithm Analysis
cost  times
c1    1
c2    k + 2
c3    k + 1
c4    n + 1
c5    n
0     1
c7    k + 1
c8    k
0     1
c10   n + 1
c11   n
c12   n
Algorithm Analysis
• General Case
T(n) = c1 + c2(k + 2) + c3(k + 1) + c4(n + 1) + c5·n + c7(k + 1) + c8·k + c10(n + 1) + c11·n + c12·n
= (c4 + c5 + c10 + c11 + c12)n + (c2 + c3 + c7 + c8)k + (c1 + 2c2 + c3 + c4 + c7 + c10)
= an + bk + c = Θ(n + k), which is Θ(n) when k = O(n)
Radix Sort
• Assumptions:
• Each of the 𝑛 input elements is a (maximum) 𝑑 − 𝑑𝑖𝑔𝑖𝑡 integer in the range: 0 𝑡𝑜 𝑘, where 𝑘 is an
integer
• When 𝑑 ≪ 𝑛, 𝑻(𝒏) = 𝜣(𝒏)
• Sorts repeatedly on each digit column, starting from the Least Significant digit
• Requires 𝑑 passes to sort all elements
• Application:
• Sort records using multiple fields
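• A runnable sketch of LSD Radix Sort (an addition), reusing a stable per-digit counting sort as the slides assume:

    def radix_sort(A, d):
        """Sort non-negative integers with at most d decimal digits."""
        for exp in (10 ** i for i in range(d)):   # one stable pass per digit, LSD first
            C = [0] * 10
            for a in A:
                C[(a // exp) % 10] += 1
            for i in range(1, 10):
                C[i] += C[i - 1]
            B = [0] * len(A)
            for a in reversed(A):                 # backwards keeps each pass stable
                digit = (a // exp) % 10
                C[digit] -= 1
                B[C[digit]] = a
            A = B
        return A

    print(radix_sort([329, 457, 657, 839, 436, 720, 355], 3))
    # [329, 355, 436, 457, 657, 720, 839]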
Algorithm Design
Algorithm Design
d   Range of Values (0 → 10^d − 1)
1   0 → 9
2   0 → 99
3   0 → 999
…   …

Maximum Value = k = 10^d − 1
d = log10(k + 1), i.e., d ≈ log10 k
⇒ d ≪ k
Algorithm Analysis
• General Case
T(n) = c1(d + 1) + Θ(n)·d
= d·c1 + c1 + d·Θ(n)
= Θ(n), if d ≪ n (which is true since d ≪ k for Radix Sort, and k ≤ n for Counting Sort)
T(n) = Θ(n) (based on Counting Sort; see the Table for the other fields too)
• Best Case, Worst Case, or Average Case?

cost   times
c1     d + 1
Θ(n)   d
Bucket Sort
• Assumptions:
• Input is drawn from a Uniform distribution
• Input is distributed uniformly and independently over the interval [0, 1]
• Divides the [0, 1) half-open interval into n equal-sized sub-intervals or Buckets and distributes the n keys into
the Buckets
• Bucket i holds values in the interval [i/n, (i+1)/n)
• Sorts the keys in each Bucket in order
• External Array required:
• B[0 … n − 1] of Linked Lists (Buckets): Temporary Storage
• Without the assumption of a Uniform distribution, Bucket Sort may still run in Linear time provided
Σ_{i=0}^{n−1} Θ(E[n_i^2]) = Θ(n), i.e., E[n_i^2] = Θ(1) per bucket
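• A runnable sketch of BUCKET-SORT under these assumptions (an addition; the built-in sort stands in for the per-bucket insertion sort the analysis below assumes):

    def bucket_sort(A):
        """Sort keys assumed to be uniform over [0, 1)."""
        n = len(A)
        B = [[] for _ in range(n)]       # n empty buckets
        for x in A:
            B[int(n * x)].append(x)      # bucket i holds [i/n, (i+1)/n)
        out = []
        for bucket in B:
            bucket.sort()                # stand-in for the per-bucket insertion sort
            out.extend(bucket)           # concatenate the buckets in order
        return out

    A = [0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68]
    print(bucket_sort(A))
    # [0.12, 0.17, 0.21, 0.23, 0.26, 0.39, 0.68, 0.72, 0.78, 0.94]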
Algorithm Design
A = {0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68}   (indices 1 … 10)
n = A.length = 10
let B[0 … 9] be a new array
for i = 0 to n − 1 = 9
make B[i] an empty list
Iteration 1:  for loop: i = 1
insert A[1] into list B[⌊n·A[1]⌋] = B[⌊10 · 0.78⌋] = B[⌊7.8⌋] = B[7]
Iteration 2:  for loop: i = 2
insert A[2] into list B[⌊10 · 0.17⌋] = B[⌊1.7⌋] = B[1]
Iteration 3:  for loop: i = 3
insert A[3] into list B[⌊10 · 0.39⌋] = B[⌊3.9⌋] = B[3]
Iteration 4:  for loop: i = 4
insert A[4] into list B[⌊10 · 0.26⌋] = B[⌊2.6⌋] = B[2]
Iteration 5:  for loop: i = 5
insert A[5] into list B[⌊10 · 0.72⌋] = B[⌊7.2⌋] = B[7]
Iteration 6:  for loop: i = 6
insert A[6] into list B[⌊10 · 0.94⌋] = B[⌊9.4⌋] = B[9]
Iteration 7:  for loop: i = 7
insert A[7] into list B[⌊10 · 0.21⌋] = B[⌊2.1⌋] = B[2]
Iteration 8:  for loop: i = 8
insert A[8] into list B[⌊10 · 0.12⌋] = B[⌊1.2⌋] = B[1]
Iteration 9:  for loop: i = 9
insert A[9] into list B[⌊10 · 0.23⌋] = B[⌊2.3⌋] = B[2]
Iteration 10: for loop: i = 10
insert A[10] into list B[⌊10 · 0.68⌋] = B[⌊6.8⌋] = B[6]
[Figure: the list in each bucket is then sorted with insertion sort]
concatenate the lists B[0], B[1], …, B[n − 1] together in order

Sorted output (indices 1 … 10):  0.12 0.17 0.21 0.23 0.26 0.39 0.68 0.72 0.78 0.94
Algorithm Analysis
cost (Best Case)  cost (Worst Case)  cost (Average Case)  times
c1                c1                 c1                   1
c2                c2                 c2                   1
c3                c3                 c3                   n + 1
c4                c4                 c4                   n
c5                c5                 c5                   n + 1
c6                c6                 c6                   n
c7                c7                 c7                   n + 1
Θ(n)              Θ(n^2)             Θ(n^2)               n
c9                c9                 c9                   1
Algorithm Analysis
• Best Case
T(n) = c1 + c2 + c3(n + 1) + c4·n + c5(n + 1) + c6·n + c7(n + 1) + Θ(n)·n + c9
= Θ(n^2) + (c3 + c4 + c5 + c6 + c7)n + (c1 + c2 + c3 + c5 + c7 + c9)
= Θ(n^2) + an + b
= Θ(n^2)
Algorithm Analysis
• Worst Case / Average Case
T(n) = c1 + c2 + c3(n + 1) + c4·n + c5(n + 1) + c6·n + c7(n + 1) + Θ(n^2)·n + c9
= Θ(n^3) + (c3 + c4 + c5 + c6 + c7)n + (c1 + c2 + c3 + c5 + c7 + c9)
= Θ(n^3) + an + b
= Θ(n^3)
Algorithm Analysis
• Best Case: T(n) = Θ(n)
i. Each Bucket contains exactly one element
ii. One Bucket contains all elements in ascending order
• Average Case: T(n) = Θ(n^2)
i. One Bucket contains all elements in non-ascending order
ii. Each Bucket contains from 2 to n − 2 elements in ascending order
• Worst Case: T(n) = Θ(n^3)
i. Each Bucket contains from 2 to n − 2 elements in non-ascending order
Comparison
Name       Best Case   Average Case  Worst Case  Space Complexity
Insertion  O(n)        O(n^2)        O(n^2)      O(1)
Merge      O(n log n)  O(n log n)    O(n log n)  O(n)
Heap       O(n log n)  O(n log n)    O(n log n)  O(1)
Quick      O(n log n)  O(n log n)    O(n^2)      O(log n)
Counting   O(n)        O(n)          O(n)        O(n + k)
Radix      O(n)        O(n)          O(n)        O(n + k)
Bucket     O(n)        O(n^2)        O(n^3)      O(n)
Dynamic Programming
• “Programming” refers to a tabular method, and not computer code
• Application:
• Optimization Problems: Multiple solutions might exist, out of which “an” instead of “the”
optimal solution is acquired
• Sorting: Optimization Problem?
• Core Components of Optimization Problem for Dynamic Programming to be
applied upon:
1. Optimal Substructure
2. Overlapping Sub-problems
Dynamic Programming
1. Optimal Substructure: Optimal solution(s) to a problem incorporate
optimal solutions to related sub-problems, which we may solve
independently
• How to discover Optimal Substructure in a problem:
i. Show that a solution to the problem consists of making a choice
ii. Suppose that an optimal solution to the problem consists of making a choice
iii. Determine which sub-problems result due to Step 2
iv. Show that solutions to the sub-problems used within an optimal solution to the problem
must themselves be optimal
Dynamic Programming
• Optimal Substructure varies in two ways:
i. How many sub-problems an optimal solution to the problem uses?
ii. How many choices we have in determining which sub-problems to use in an optimal
solution?
Dynamic Programming
2. Overlapping Sub-problems: The space of sub-problems must be “small”
in the sense that a recursive algorithm for the problem solves the same sub-
problems over and over, rather than always generating new sub-problems
• Total number of distinct sub-problems is polynomial in 𝑛
• Divide-and-Conquer approach generates brand-new (non-overlapping) problems at
each step of the recursion
• Dynamic Programming approach takes advantage of overlapping sub-problems by
solving each sub-problem once and then storing its solution in a table where it can be
looked up when needed, using constant time per lookup
Dynamic Programming
S#  Characteristic                                Divide-and-Conquer          Dynamic Programming
1   Problems                                      Non-Optimization            Optimization
2   Sub-problems (Divide)                         Disjoint                    Overlapping
3   Solves sub-problems (Conquer)                 Recursively and Repeatedly  Recursively but Only once
4   Saves solutions to sub-problems               No                          Table
5   Combines solutions to sub-problems (Combine)  Yes                         Yes
6   Time Efficient                                Less                        More
7   Space Efficient                               More                        Less
Dynamic Programming
• When developing a Dynamic Programming algorithm, we follow a sequence of four steps:
1. Characterize the structure of an optimal solution
• Find the Optimal Substructure
2. Recursively define the value of an optimal solution
• Define the cost of an optimal solution recursively in terms of the optimal solutions to sub-problems
3. Compute the value of an optimal solution, typically in a bottom-up fashion
• Write an algorithm to compute the value of an optimal solution
4. Construct an optimal solution from computed information
• An optional step
Dynamic Programming
• Total Running Time:
• Depends on the product of two factors:
1. Total number of sub-problems
2. Number of choices for each sub-problem
Rod Cutting
• Cutting a Steel Rod into rods of smaller length in a way that maximizes
their total value
• Serling Enterprises buys long steel rods and cuts them into shorter rods,
which it then sells. Each cut is free. The management of Serling Enterprises
wants to know the best way to cut up the rods.
• We assume that we know, for 𝑖 = 1, 2, … the price 𝑝𝑖 in dollars that Serling
Enterprises charges for a rod of length 𝑖 inches. Rod lengths are always an
integral number of inches.
Rod Cutting
Rod Cutting: Design
• Method 𝟏: Possible Combinations
• Consider the case when 𝒏 = 𝟒
• Figure shows all the unique ways (8) to cut up a rod of 4 inches in length, including
the way with no cuts at all
• Cutting a 4-inch rod into two 2-inch pieces produces revenue 𝑝2 + 𝑝2 = 5 + 5 = 10,
which is optimal
• Total Possible Combinations of cutting up a rod of length n = 2^(n−1)
• T(n) = Θ(2^(n−1)) = Θ(2^n)
Rod Cutting: Design
Comparative Analysis of Methods
S#  Method/Case            General  Best  Average  Worst
1   Possible Combinations  Θ(2^n)   -     -        -
Rod Cutting: Design
• Method 𝟐: Equation 1
• Top-down
• Recursive
Rod Cutting: Design
• 𝑝𝑛 corresponds to making no cuts at all and selling the rod of length 𝑛 as is
• Other 𝑛 − 1 arguments correspond to the revenue obtained by making an initial
cut of the rod into two pieces of size 𝑖 and 𝑛 − 𝑖, for each 𝑖 = 1,2, … , 𝑛 − 1,
and then optimally cutting up those pieces further, obtaining revenues 𝑟𝑖 and 𝑟𝑛−𝑖
from those two pieces
• Since we don’t know ahead of time which value of 𝑖 optimizes revenue, we must
consider all possible values of 𝑖 and pick the one that maximizes revenue
• We also have the option of picking no 𝑖 at all if we can obtain more revenue by
selling the rod uncut
Rod Cutting: Design
• Consider the case when 𝒏 = 𝟓
r5 = max(p5, r1 + r4, r2 + r3, r3 + r2, r4 + r1)
r1 = max(p1) = max(1) = 1
r2 = max(p2, r1 + r1) = max(5, 1 + 1) = max(5, 2) = 5
r3 = max(p3, r1 + r2, r2 + r1) = max(8, 1 + 5, 5 + 1) = max(8, 6, 6) = 8
r4 = max(p4, r1 + r3, r2 + r2, r3 + r1) = max(9, 1 + 8, 5 + 5, 8 + 1) = max(9, 9, 10, 9) = 10
r5 = max(10, 1 + 10, 5 + 8, 8 + 5, 10 + 1) = max(10, 11, 13, 13, 11) = 13
• Tracing back the optimal solution 5 = 2 + 3
Rod Cutting: Design
• To solve the original problem of size 𝒏, we solve problems of the same
type, but of smaller sizes
• Once we make the first cut, we may consider the two pieces as independent
instances of the rod-cutting problem
• Which Method is better?
Rod Cutting: Analysis
[Figure: recursion tree for n = 5 under Equation 1; each node shows a piece size, and each branch splits a piece into two smaller pieces, both of which are cut further]
Rod Cutting: Analysis
• For n = 5, total problems = 1 + 78 = 79 = Θ(2^(n+1)) = Θ(2^n)
Node size  Number of sub-problems
1          0
2          2
3          8
4          25
5          8 + 2·2 + 8·2 + 25·2 = 78
Rod Cutting: Analysis
• Optimal solution for cutting up a rod of length 𝑛 (if we make any cuts at all)
uses just one sub-problem (of size 𝑛 − 𝑖), but we must consider 𝒏 − 𝟏
choices for 𝑖 in order to determine which one yields an optimal solution
• Optimal way of cutting up a rod of length 𝑛 (if we make any cuts at all)
involves optimally cutting up the two pieces resulting from the first cut
• Overall optimal solution incorporates optimal solutions to the two related
sub-problems, maximizing revenue from each of those two pieces
• Rod-cutting problem exhibits Optimal Substructure
Comparative Analysis of Methods
S#  Method/Case            General  Best  Average  Worst
1   Possible Combinations  Θ(2^n)   -     -        -
2   Equation 1             Θ(2^n)   -     -        -
Rod Cutting: Design
• Method 𝟑: Equation 2
• Top-down
• Recursive
• A decomposition is viewed as consisting of a first piece of length 𝑖 cut off the left end,
and then a remainder of length 𝑛 − 𝑖
• Only the remainder, and not the first piece, may be further divided
• An optimal solution represents the solution based on only one related sub-problem;
the remainder, instead of two sub-problems
• Simpler than Methods 1 and 2
Rod Cutting: Design
• Consider the case when n = 5
r5 = max_{1<=i<=5}(pi + r5−i) = max(p1 + r4, p2 + r3, p3 + r2, p4 + r1, p5 + r0)
r0 = 0
r1 = max(p1 + r0) = max(1 + 0) = 1
r2 = max(p1 + r1, p2 + r0) = max(1 + 1, 5 + 0) = max(2, 5) = 5
r3 = max(p1 + r2, p2 + r1, p3 + r0) = max(1 + 5, 5 + 1, 8 + 0) = max(6, 6, 8) = 8
r4 = max(p1 + r3, p2 + r2, p3 + r1, p4 + r0) = max(1 + 8, 5 + 5, 8 + 1, 9 + 0) = max(9, 10, 9, 9) = 10
Rod Cutting: Design
r5 = max(p1 + r4, p2 + r3, p3 + r2, p4 + r1, p5 + r0)
= max(1 + 10, 5 + 8, 8 + 5, 9 + 1, 10 + 0) = max(11, 13, 13, 10, 10) = 13
• Tracing back the optimal solution 5 = 2 + 3
• Which Method is the best?
Rod Cutting: Analysis
[Figure: recursion tree for n = 5 under Equation 2; node labels are remainder sizes, and a node of size n has children of sizes n − 1, …, 1, 0]
Rod Cutting: Analysis
• For n = 5, total problems = 1 + 31 = 32 = Θ(2^n)
Node size  Number of sub-problems
0          0
1          1
2          3
3          7
4          15
5          5 + 15 + 7 + 3 + 1 = 31
Comparative Analysis of Methods
S#  Method/Case            General  Best  Average  Worst
1   Possible Combinations  Θ(2^n)   -     -        -
2   Equation 1             Θ(2^n)   -     -        -
3   Equation 2             Θ(2^n)   -     -        -
Rod Cutting: Design
• Method 𝟒: Automation of Method 𝟑
• Top-down
• Recursive
Rod Cutting: Design
• Consider the case when n = 5
CUT-ROD(p, 5)
if n == 0 ✗
q = −∞
for i = 1 to n = 5
q = max(−∞, p[1] + CUT-ROD(p, 5 − 1)) = max(−∞, 1 + CUT-ROD(p, 4)) = max(−∞, 1 + 10) = max(−∞, 11) = 11
q = max(11, p[2] + CUT-ROD(p, 5 − 2)) = max(11, 5 + CUT-ROD(p, 3)) = max(11, 5 + 8) = max(11, 13) = 13
q = max(13, p[3] + CUT-ROD(p, 5 − 3)) = max(13, 8 + CUT-ROD(p, 2)) = max(13, 8 + 5) = max(13, 13) = 13
q = max(13, p[4] + CUT-ROD(p, 5 − 4)) = max(13, 9 + CUT-ROD(p, 1)) = max(13, 9 + 1) = max(13, 10) = 13
q = max(13, p[5] + CUT-ROD(p, 5 − 5)) = max(13, 10 + CUT-ROD(p, 0)) = max(13, 10 + 0) = max(13, 10) = 13
return q = 13
Rod Cutting: Design
CUT-ROD(p, 4)
if n == 0 ✗
q = −∞
for i = 1 to n = 4
q = max(−∞, p[1] + CUT-ROD(p, 4 − 1)) = max(−∞, 1 + CUT-ROD(p, 3)) = max(−∞, 1 + 8) = 9
q = max(9, p[2] + CUT-ROD(p, 4 − 2)) = max(9, 5 + CUT-ROD(p, 2)) = max(9, 5 + 5) = 10
q = max(10, p[3] + CUT-ROD(p, 4 − 3)) = max(10, 8 + CUT-ROD(p, 1)) = max(10, 8 + 1) = 10
q = max(10, p[4] + CUT-ROD(p, 4 − 4)) = max(10, 9 + CUT-ROD(p, 0)) = max(10, 9 + 0) = 10
return q = 10
Rod Cutting: Design
CUT-ROD(p, 3)
if n == 0 ✗
q = −∞
for i = 1 to n = 3
q = max(−∞, p[1] + CUT-ROD(p, 3 − 1)) = max(−∞, 1 + CUT-ROD(p, 2)) = max(−∞, 1 + 5) = 6
q = max(6, p[2] + CUT-ROD(p, 3 − 2)) = max(6, 5 + CUT-ROD(p, 1)) = max(6, 5 + 1) = 6
q = max(6, p[3] + CUT-ROD(p, 3 − 3)) = max(6, 8 + CUT-ROD(p, 0)) = max(6, 8 + 0) = 8
return q = 8
Rod Cutting: Design
CUT-ROD(p, 2)
if n == 0 ✗
q = −∞
for i = 1 to n = 2
q = max(−∞, p[1] + CUT-ROD(p, 2 − 1)) = max(−∞, 1 + CUT-ROD(p, 1)) = max(−∞, 1 + 1) = 2
q = max(2, p[2] + CUT-ROD(p, 2 − 2)) = max(2, 5 + CUT-ROD(p, 0)) = max(2, 5 + 0) = 5
return q = 5
Rod Cutting: Design
CUT-ROD(p, 1)
if n == 0 ✗
q = −∞
for i = 1 to n = 1
q = max(−∞, p[1] + CUT-ROD(p, 1 − 1)) = max(−∞, 1 + CUT-ROD(p, 0)) = max(−∞, 1 + 0) = 1
return q = 1
Rod Cutting: Design
CUT-ROD(p, 0)
if n == 0 ✓
return 0
Rod Cutting: Analysis
Rod Cutting: Analysis
• General Case
T(n) = c1 + c3 + c4(n + 1) + c5(2^n − 1) + c6
= c5·2^n + c4·n + (c1 + c3 + c4 − c5 + c6)
= a·2^n + bn + c
= Θ(2^n)

cost  times
c1    1
c2    0
c3    1
c4    n + 1
c5    2^n − 1
c6    1
Rod Cutting: Analysis
• Each node label gives the size 𝒏 of the corresponding immediate sub-problems
• An edge from parent 𝒔 to child 𝒕 corresponds to cutting off an initial piece of size
𝒔 − 𝒕, and leaving a remaining sub-problem of size 𝒕
• Total nodes = 2^n
• Total leaves = 2^(n−1)
• Total number of paths from root to a leaf = 2^(n−1)
• Total ways of cutting up a rod of length n = 2^(n−1) = Possible Combinations
(Method 1)
Rod Cutting: Analysis
• Each node label represents the number of immediate calls made to CUT-ROD by that node
• CUT-ROD(p, n) calls CUT-ROD(p, n − i) for i = 1, 2, …, n (top to down, left to right in the graph)
• CUT-ROD(p, n) calls CUT-ROD(p, j) for j = 0, 1, …, n − 1 (top to down, right to left in the graph)
• Let T(n) denote the total number of calls made to CUT-ROD, when called with its second parameter equal to n
• T(n) equals the sum of the number of nodes in sub-trees whose root is labeled n in the recursion tree
• One call to CUT-ROD is made at the root:
T(0) = 1
Rod Cutting: Analysis
• T(j) denotes the total number of calls (including recursive calls) made due to CUT-ROD(p, n − i), where j = n − i
T(n) = 1 + Σ_{j=0}^{n−1} T(j)
T(n) = 1 + Σ_{j=0}^{n−1} 2^j = 1 + (2^0 + 2^1 + ⋯ + 2^(n−1)) = 1 + 2^n − 1 = 2^n
= Θ(2^n)
• Running Time of CUT-ROD is exponential
• For each unit increment in 𝑛, program’s running time doubles
Comparative Analysis of Methods
S#  Method/Case             General  Best  Average  Worst
1   Possible Combinations   Θ(2^n)   -     -        -
2   Equation 1              Θ(2^n)   -     -        -
3   Equation 2              Θ(2^n)   -     -        -
4   Automation of Method 3  Θ(2^n)   -     -        -
Rod Cutting: Design
• Dynamic Programming
• Each sub-problem is solved only once, and the solution is saved
• Look up the solution in constant time, rather than re-compute it
• Time-Space trade-off
• Additional memory is used to save computation time
• Exponential-time solution may be transformed into Polynomial-time solution
i. Total number of distinct sub-problems is polynomial in 𝑛
ii. Each sub-problem can be solved in polynomial time
1. Top-down with Memoization
2. Bottom-up Method
Rod Cutting: Design
• Method 𝟓: Top-down with Memoization
• Top-down
• Recursive
• Saves solutions of all sub-problems
• Solves each sub-problem only once
• Memoized:
• Solutions initially contain special values to indicate that the solutions need to be computed
• Remembers solutions computed earlier
• Checks whether solutions of sub-problems have been saved earlier
• Memoized version of Method 4
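• A runnable sketch of the memoized version (an addition; prices for lengths 1 to 5 are taken from the traces below, and the longer prices are illustrative assumptions):

    NEG_INF = float("-inf")
    # p[i] = price of a rod of length i; p[1..5] match the slides' traces,
    # p[6..10] are assumed filler values for longer rods.
    p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]

    def memoized_cut_rod(p, n):
        r = [NEG_INF] * (n + 1)          # "not yet computed" sentinels
        return memoized_cut_rod_aux(p, n, r)

    def memoized_cut_rod_aux(p, n, r):
        if r[n] >= 0:                    # already solved: constant-time lookup
            return r[n]
        if n == 0:
            q = 0
        else:
            q = NEG_INF
            for i in range(1, n + 1):    # first piece of length i, remainder n - i
                q = max(q, p[i] + memoized_cut_rod_aux(p, n - i, r))
        r[n] = q                         # remember the solution
        return q

    print(memoized_cut_rod(p, 5))        # 13, matching the trace (5 = 2 + 3)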
Rod Cutting: Design
• Consider the case when n = 5
MEMOIZED-CUT-ROD(p, 5)
let r[0 … 5] be a new array
for i = 0 to n = 5
r[0] = −∞, r[1] = −∞, r[2] = −∞, r[3] = −∞, r[4] = −∞, r[5] = −∞

r (indices 0 … 5):  −∞ −∞ −∞ −∞ −∞ −∞

return MEMOIZED-CUT-ROD-AUX(p, 5, r) = 13
Rod Cutting: Design
• Tracing back the optimal solution 5 = 2 + 3
• Which Method is the best?
Rod Cutting: Design
MEMOIZED-CUT-ROD-AUX(p, 5, r)
if r[5] = −∞ >= 0 ✗
if n == 0 ✗
else q = −∞
for i = 1 to n = 5
q = max(−∞, p[1] + MEMOIZED-CUT-ROD-AUX(p, 4, r)) = max(−∞, 1 + 10) = max(−∞, 11) = 11
q = max(11, p[2] + MEMOIZED-CUT-ROD-AUX(p, 3, r)) = max(11, 5 + 8) = max(11, 13) = 13
q = max(13, p[3] + MEMOIZED-CUT-ROD-AUX(p, 2, r)) = max(13, 8 + 5) = max(13, 13) = 13
q = max(13, p[4] + MEMOIZED-CUT-ROD-AUX(p, 1, r)) = max(13, 9 + 1) = max(13, 10) = 13
q = max(13, p[5] + MEMOIZED-CUT-ROD-AUX(p, 0, r)) = max(13, 10 + 0) = max(13, 10) = 13
r[5] = q = 13
return q = 13

r (indices 0 … 5):  0 1 5 8 10 13
Rod Cutting: Design
MEMOIZED-CUT-ROD-AUX(p, 4, r)
if r[4] = −∞ >= 0 ✗
if n == 0 ✗
else q = −∞
for i = 1 to n = 4
q = max(−∞, p[1] + MEMOIZED-CUT-ROD-AUX(p, 3, r)) = max(−∞, 1 + 8) = max(−∞, 9) = 9
q = max(9, p[2] + MEMOIZED-CUT-ROD-AUX(p, 2, r)) = max(9, 5 + 5) = max(9, 10) = 10
q = max(10, p[3] + MEMOIZED-CUT-ROD-AUX(p, 1, r)) = max(10, 8 + 1) = max(10, 9) = 10
q = max(10, p[4] + MEMOIZED-CUT-ROD-AUX(p, 0, r)) = max(10, 9 + 0) = max(10, 9) = 10
r[4] = q = 10
return q = 10

r (indices 0 … 5):  0 1 5 8 10 −∞
Rod Cutting: Design
MEMOIZED-CUT-ROD-AUX(p, 3, r)
if r[3] = −∞ >= 0 ✗
if n == 0 ✗
else q = −∞
for i = 1 to n = 3
q = max(−∞, p[1] + MEMOIZED-CUT-ROD-AUX(p, 2, r)) = max(−∞, 1 + 5) = max(−∞, 6) = 6
q = max(6, p[2] + MEMOIZED-CUT-ROD-AUX(p, 1, r)) = max(6, 5 + 1) = max(6, 6) = 6
q = max(6, p[3] + MEMOIZED-CUT-ROD-AUX(p, 0, r)) = max(6, 8 + 0) = max(6, 8) = 8
r[3] = q = 8
return q = 8

r (indices 0 … 5):  0 1 5 8 −∞ −∞
Rod Cutting: Design
MEMOIZED-CUT-ROD-AUX(p, 2, r)
if r[2] = −∞ >= 0 ✗
if n == 0 ✗
else q = −∞
for i = 1 to n = 2
q = max(−∞, p[1] + MEMOIZED-CUT-ROD-AUX(p, 1, r)) = max(−∞, 1 + 1) = max(−∞, 2) = 2
q = max(2, p[2] + MEMOIZED-CUT-ROD-AUX(p, 0, r)) = max(2, 5 + 0) = max(2, 5) = 5
r[2] = q = 5
return q = 5

r (indices 0 … 5):  0 1 5 −∞ −∞ −∞
Rod Cutting: Design
MEMOIZED-CUT-ROD-AUX(p, 1, r)
if r[1] = −∞ >= 0 ✗
if n == 0 ✗
else q = −∞
for i = 1 to n = 1
q = max(−∞, p[1] + MEMOIZED-CUT-ROD-AUX(p, 0, r)) = max(−∞, 1 + 0) = max(−∞, 1) = 1
r[1] = q = 1
return q = 1

r (indices 0 … 5):  0 1 −∞ −∞ −∞ −∞
Rod Cutting: Design
MEMOIZED-CUT-ROD-AUX(p, 0, r)
if r[0] = −∞ >= 0 ✗
if n == 0 ✓
q = 0
r[0] = q = 0
return q = 0

r (indices 0 … 5):  0 −∞ −∞ −∞ −∞ −∞
Rod Cutting: Analysis
• Top-down
• Left-to-Right
[Figure: recursion tree for n = 5 with memoization; each sub-problem size (4, 3, 2, 1, 0) is expanded only once, and later calls hit the table]
Rod Cutting: Analysis
MEMOIZED-CUT-ROD:
cost         times
c1           1
c2           n + 2
c3           n + 1
T(n) of AUX  1

MEMOIZED-CUT-ROD-AUX:
cost  times
c1    1
c2    0
c3    1
c4    0
c5    1
c6    n + 1
c7    n(n + 1)/2
c8    1
c9    1
Rod Cutting: Analysis
• General Case
MEMOIZED-CUT-ROD-AUX(p, n, r):
T(n) = c1 + c3 + c5 + c6(n + 1) + c7·n(n + 1)/2 + c8 + c9
= (c7/2)n^2 + (c6 + c7/2)n + (c1 + c3 + c5 + c6 + c8 + c9)
= an^2 + bn + c
= Θ(n^2)
MEMOIZED-CUT-ROD(p, n):
T(n) = c1 + c2(n + 2) + c3(n + 1) + Θ(n^2)
= Θ(n^2) + (c2 + c3)n + (c1 + 2c2 + c3)
= Θ(n^2) + an + b
= Θ(n^2)
Rod Cutting: Analysis
T(n) = Total number of sub-problems × Number of choices for each sub-problem
= n × n
= Θ(n^2)
Comparative Analysis of Methods
S#  Method/Case                General  Best  Average  Worst
1   Possible Combinations      Θ(2^n)   -     -        -
2   Equation 1                 Θ(2^n)   -     -        -
3   Equation 2                 Θ(2^n)   -     -        -
4   Automation of Method 3     Θ(2^n)   -     -        -
5   Top-down with Memoization  Θ(n^2)   -     -        -
Rod Cutting: Design
• Method 𝟔: Bottom-up Method
• Bottom-up
• Non-Recursive or Iterative
• Sorts all sub-problems by size and solves them in that order (smallest first)
• When solving a particular sub-problem, all of the smaller sub-problems its solution
depends upon, have already been solved
• Saves solutions of all sub-problems
• Solves each sub-problem only once
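• A runnable sketch of the bottom-up version (an addition; the same assumed price table as the memoized sketch above):

    def bottom_up_cut_rod(p, n):
        """Solve sub-problems in increasing size order; r[j] = best revenue for length j."""
        r = [0] * (n + 1)                 # r[0] = 0: an empty rod earns nothing
        for j in range(1, n + 1):         # smallest sub-problem first
            q = float("-inf")
            for i in range(1, j + 1):     # every smaller sub-problem is already in r
                q = max(q, p[i] + r[j - i])
            r[j] = q
        return r[n]

    p = [0, 1, 5, 8, 9, 10]               # prices for lengths 0..5, from the traces
    print(bottom_up_cut_rod(p, 5))        # 13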
Rod Cutting: Design
• Consider the case when n = 5
BOTTOM-UP-CUT-ROD(p, 5)
let r[0 … 5] be a new array
r[0] = 0
for j = 1 to n = 5
q = −∞
for i = 1 to j = 1
q = max(−∞, p[1] + r[0]) = max(−∞, 1 + 0) = max(−∞, 1) = 1
r[1] = q = 1

r (indices 0 … 5):  0 1
Rod Cutting: Design
q = −∞
for i = 1 to j = 2
q = max(−∞, p[1] + r[1]) = max(−∞, 1 + 1) = max(−∞, 2) = 2
q = max(2, p[2] + r[0]) = max(2, 5 + 0) = max(2, 5) = 5
r[2] = q = 5
q = −∞
for i = 1 to j = 3
q = max(−∞, p[1] + r[2]) = max(−∞, 1 + 5) = max(−∞, 6) = 6
q = max(6, p[2] + r[1]) = max(6, 5 + 1) = max(6, 6) = 6
q = max(6, p[3] + r[0]) = max(6, 8 + 0) = max(6, 8) = 8
r[3] = q = 8

r (indices 0 … 5):  0 1 5 8
Rod Cutting: Design
q = −∞
for i = 1 to j = 4
q = max(−∞, p[1] + r[3]) = max(−∞, 1 + 8) = max(−∞, 9) = 9
q = max(9, p[2] + r[2]) = max(9, 5 + 5) = max(9, 10) = 10
q = max(10, p[3] + r[1]) = max(10, 8 + 1) = max(10, 9) = 10
q = max(10, p[4] + r[0]) = max(10, 9 + 0) = max(10, 9) = 10
r[4] = q = 10

r (indices 0 … 5):  0 1 5 8 10
Rod Cutting: Design
q = −∞
for i = 1 to j = 5
q = max(−∞, p[1] + r[4]) = max(−∞, 1 + 10) = max(−∞, 11) = 11
q = max(11, p[2] + r[3]) = max(11, 5 + 8) = max(11, 13) = 13
q = max(13, p[3] + r[2]) = max(13, 8 + 5) = max(13, 13) = 13
q = max(13, p[4] + r[1]) = max(13, 9 + 1) = max(13, 10) = 13
q = max(13, p[5] + r[0]) = max(13, 10 + 0) = max(13, 10) = 13
r[5] = q = 13
return r[5] = 13

r (indices 0 … 5):  0 1 5 8 10 13
• Tracing back the optimal solution 5 = 2 + 3
• Which Method is the best?
Rod Cutting: Analysis
• Top-down
• Right-to-Left
[Figure: sub-problem evaluation order for n = 5; each size 0 … 5 is computed once, from the smallest up]
Rod Cutting: Analysis
cost   times
c1     1
c2     1
c3     n + 1
c4     n
c5     n(n + 1)/2 + n
c6     n(n + 1)/2
c7     n
c8     1
Rod Cutting: Analysis
• General Case
T(n) = c1·1 + c2·1 + c3·(n + 1) + c4·n + c5·(n(n + 1)/2 + n) + c6·n(n + 1)/2 + c7·n + c8·1
     = (c5/2 + c6/2)·n² + (c3 + c4 + 3c5/2 + c6/2 + c7)·n + (c1 + c2 + c3 + c8)
     = an² + bn + c
     = Θ(n²)
Rod Cutting: Analysis
T(n) = total number of sub-problems × number of choices for each sub-problem
     = n × n
     = Θ(n²)
Comparative Analysis of Methods
S#  Method / Case                 General   Best   Average   Worst
1   Possible Combinations         Θ(2ⁿ)     -      -         -
2   Equation 1                    Θ(2ⁿ)     -      -         -
3   Equation 2                    Θ(2ⁿ)     -      -         -
4   Automation of Method 3        Θ(2ⁿ)     -      -         -
5   Top-down with Memoization     Θ(n²)     -      -         -
6   Bottom-Up Method              Θ(n²)     -      -         -
Rod Cutting: Analysis
S#  Characteristic           Top-down with Memoization   Bottom-up Method
1   Strategy                 Top-down                    Bottom-up
2   Type                     Recursive                   Iterative
3   Memoized                 Yes                         No
4   Sorts all sub-problems   No                          Yes
5   Solves sub-problems      Top-down, Left-to-Right     Top-down, Right-to-Left
6   Running Time             Θ(n²)                       Θ(n²)
Rod Cutting: Analysis
• Sub-problem Graphs
• 𝐺 = (𝑉, 𝐸)
• Reduced or Collapsed version of Recursion Tree
• All nodes with the same label are collapsed into a single vertex
• All edges go from parent to child
• Each vertex label represents the size of the corresponding sub-problem, and each directed edge (𝒙, 𝒚) indicates the
need for an optimal solution to sub-problem 𝒚, when determining an optimal solution to sub-problem 𝒙
• Each vertex corresponds to a distinct sub-problem, and the choices for a sub-problem are the edges incident to
that sub-problem
• T(n) = V ∗ E = n ∗ n = Θ(n²)
Rod Cutting: Analysis
[Figure: sub-problem graph for the rod-cutting problem]
Rod Cutting: Design
• Method 7: Bottom-Up Method with Optimal Solution
• Determines Optimal Solution (along with Optimal Value)
• Determines Optimal Size of the first piece to cut off
• Extension of Method 6
Rod Cutting: Design
• Consider the case when 𝒏 = 𝟓
EXTENDED−BOTTOM−UP−CUT−ROD (p, 5)
  let r[0 … 5] and s[0 … 5] be new arrays
  r[0] = 0, s[0] = 0
  for j = 1 to n = 5
    q = −∞
    for i = 1 to j = 1
      if q = −∞ < p[1] + r[0] = 1 + 0 = 1  ✓
        q = 1
        s[1] = 1
    r[1] = q = 1

  index           0   1   2   3   4   5
  r (after j=1):  0   1
  s (after j=1):  0   1
Rod Cutting: Design
q = −∞
for i = 1 to j = 2
  if q = −∞ < p[1] + r[1] = 1 + 1 = 2  ✓  →  q = 2, s[2] = 1
  if q = 2 < p[2] + r[0] = 5 + 0 = 5   ✓  →  q = 5, s[2] = 2
r[2] = q = 5

index           0   1   2   3   4   5
r (after j=2):  0   1   5
s (after j=2):  0   1   2
Rod Cutting: Design
q = −∞
for i = 1 to j = 3
  if q = −∞ < p[1] + r[2] = 1 + 5 = 6  ✓  →  q = 6, s[3] = 1
  if q = 6 < p[2] + r[1] = 5 + 1 = 6   ✗
  if q = 6 < p[3] + r[0] = 8 + 0 = 8   ✓  →  q = 8, s[3] = 3
r[3] = q = 8

index           0   1   2   3   4   5
r (after j=3):  0   1   5   8
s (after j=3):  0   1   2   3
Rod Cutting: Design
q = −∞
for i = 1 to j = 4
  if q = −∞ < p[1] + r[3] = 1 + 8 = 9   ✓  →  q = 9,  s[4] = 1
  if q = 9 < p[2] + r[2] = 5 + 5 = 10   ✓  →  q = 10, s[4] = 2
  if q = 10 < p[3] + r[1] = 8 + 1 = 9   ✗
  if q = 10 < p[4] + r[0] = 9 + 0 = 9   ✗
r[4] = q = 10

index           0   1   2   3   4    5
r (after j=4):  0   1   5   8   10
s (after j=4):  0   1   2   3   2
Rod Cutting: Design
q = −∞
for i = 1 to j = 5
  if q = −∞ < p[1] + r[4] = 1 + 10 = 11  ✓  →  q = 11, s[5] = 1
  if q = 11 < p[2] + r[3] = 5 + 8 = 13   ✓  →  q = 13, s[5] = 2
  if q = 13 < p[3] + r[2] = 8 + 5 = 13   ✗
  if q = 13 < p[4] + r[1] = 9 + 1 = 10   ✗
  if q = 13 < p[5] + r[0] = 10 + 0 = 10  ✗
r[5] = q = 13
return r and s

index       0   1   2   3   4    5
r (final):  0   1   5   8   10   13
s (final):  0   1   2   3   2    2
Rod Cutting: Analysis
• Top-down
• Right-to-Left
[Figure: recursion tree for n = 5, annotated with the order in which the bottom-up method covers the sub-problems]
Rod Cutting: Analysis
cost   times
c1     1
c2     1
c3     n + 1
c4     n
c5     n(n + 1)/2 + n
c6     n(n + 1)/2
c7     n(n + 1)/4
c8     n(n + 1)/4
c9     n
c10    1
Rod Cutting: Analysis
• General Case
T(n) = c1·1 + c2·1 + c3·(n + 1) + c4·n + c5·(n(n + 1)/2 + n) + c6·n(n + 1)/2
       + c7·n(n + 1)/4 + c8·n(n + 1)/4 + c9·n + c10·1
     = (c5/2 + c6/2 + c7/4 + c8/4)·n² + (c3 + c4 + 3c5/2 + c6/2 + c7/4 + c8/4 + c9)·n
       + (c1 + c2 + c3 + c10)
     = an² + bn + c
     = Θ(n²)
• Best Case, Worst Case, or Average Case?
Rod Cutting: Analysis
cost   times (Best Case)   times (Worst Case)
c1     1                   1
c2     1                   1
c3     n + 1               n + 1
c4     n                   n
c5     n(n + 1)/2 + n      n(n + 1)/2 + n
c6     n(n + 1)/2          n(n + 1)/2
c7     1                   n(n + 1)/2
c8     1                   n(n + 1)/2
c9     n                   n
c10    1                   1
Rod Cutting: Analysis
• Best Case
T(n) = c1·1 + c2·1 + c3·(n + 1) + c4·n + c5·(n(n + 1)/2 + n) + c6·n(n + 1)/2
       + c7·1 + c8·1 + c9·n + c10·1
     = (c5/2 + c6/2)·n² + (c3 + c4 + 3c5/2 + c6/2 + c9)·n
       + (c1 + c2 + c3 + c7 + c8 + c10)
     = an² + bn + c
     = Θ(n²)
Rod Cutting: Analysis
• Worst Case
T(n) = c1·1 + c2·1 + c3·(n + 1) + c4·n + c5·(n(n + 1)/2 + n) + c6·n(n + 1)/2
       + c7·n(n + 1)/2 + c8·n(n + 1)/2 + c9·n + c10·1
     = (c5/2 + c6/2 + c7/2 + c8/2)·n² + (c3 + c4 + 3c5/2 + c6/2 + c7/2 + c8/2 + c9)·n
       + (c1 + c2 + c3 + c10)
     = an² + bn + c
     = Θ(n²)
Rod Cutting: Analysis
T(n) = total number of sub-problems × number of choices for each sub-problem
     = n × n
     = Θ(n²)
Comparative Analysis of Methods
S#  Method / Case                            General   Best    Average   Worst
1   Possible Combinations                    Θ(2ⁿ)     -       -         -
2   Equation 1                               Θ(2ⁿ)     -       -         -
3   Equation 2                               Θ(2ⁿ)     -       -         -
4   Automation of Method 3                   Θ(2ⁿ)     -       -         -
5   Top-down with Memoization                Θ(n²)     -       -         -
6   Bottom-Up Method                         Θ(n²)     -       -         -
7   Bottom-Up Method with Optimal Solution   Θ(n²)     Θ(n²)   Θ(n²)     Θ(n²)
Rod Cutting: Design
• Method 8: Bottom-Up Method with Optimal Decomposition
• Determines First Optimal Decomposition (along with Optimal Value)
• Determines Optimal Sizes of all the pieces to cut off
• Extension of Method 7
Rod Cutting: Design
• Consider the case when 𝒏 = 𝟓
PRINT−CUT−ROD−SOLUTION (p, 5)
  (r, s) = EXTENDED−BOTTOM−UP−CUT−ROD(p, 5) = ([0, 1, 5, 8, 10, 13], [0, 1, 2, 3, 2, 2])
  while n = 5 > 0
    print s[5] = 2
    n = n − s[n] = 5 − 2 = 3
    print s[3] = 3
    n = n − s[n] = 3 − 3 = 0

Output: s[5] = 2, then s[3] = 3 (cut pieces of lengths 2 and 3)
• Which Method is the best?
Rod Cutting: Analysis
• Top-down
• Right-to-Left
[Figure: recursion tree for n = 5, annotated with the order in which the sub-problems are covered]
Rod Cutting: Analysis
cost    times
Θ(n²)   1
c2      n/2 + 1
c3      n/2
c4      n/2
Rod Cutting: Analysis
• General Case
T(n) = Θ(n²)·1 + c2·(n/2 + 1) + c3·(n/2) + c4·(n/2)
     = Θ(n²) + (c2/2 + c3/2 + c4/2)·n + c2
     = Θ(n²) + an + b
     = Θ(n²)
• Best Case, Worst Case, or Average Case?
Rod Cutting: Analysis
cost    times (Best Case)   times (Worst Case)
Θ(n²)   1                   1
c2      2                   n + 1
c3      1                   n
c4      1                   n
Rod Cutting: Analysis
• Best Case
T(n) = Θ(n²)·1 + c2·2 + c3·1 + c4·1
     = Θ(n²) + 2c2 + c3 + c4
     = Θ(n²) + a
     = Θ(n²)
• Worst Case
T(n) = Θ(n²)·1 + c2·(n + 1) + c3·n + c4·n
     = Θ(n²) + (c2 + c3 + c4)·n + c2
     = Θ(n²) + an + b
     = Θ(n²)
Rod Cutting: Analysis
T(n) = total number of sub-problems × number of choices for each sub-problem
     = n × n
     = Θ(n²)
Comparative Analysis of Methods
S#  Method / Case                                 General   Best    Average   Worst
1   Possible Combinations                         Θ(2ⁿ)     -       -         -
2   Equation 1                                    Θ(2ⁿ)     -       -         -
3   Equation 2                                    Θ(2ⁿ)     -       -         -
4   Automation of Method 3                        Θ(2ⁿ)     -       -         -
5   Top-down with Memoization                     Θ(n²)     -       -         -
6   Bottom-Up Method                              Θ(n²)     -       -         -
7   Bottom-Up Method with Optimal Solution        Θ(n²)     Θ(n²)   Θ(n²)     Θ(n²)
8   Bottom-Up Method with Optimal Decomposition   Θ(n²)     Θ(n²)   Θ(n²)     Θ(n²)
Greedy Algorithm
• A greedy algorithm is a simple, intuitive algorithm that is used in
optimization problems.
• The algorithm makes the locally optimal choice at each step as it attempts
to find the overall optimal way to solve the entire problem.
• Greedy algorithms are quite successful in some problems, such as Huffman
encoding which is used to compress data, or Dijkstra's algorithm, which is
used to find the shortest path through a graph.
Activity-Selection Problem
• The Activity Selection Problem is an optimization problem that deals with the
selection of non-conflicting activities to be executed by a single person
or machine in a given time frame.
• Each activity is marked by a start and finish time. A greedy technique is used
to find the solution, since this is an optimization problem.
• Given n activities with their start and finish times, the objective is to find
a solution set having the maximum number of non-conflicting activities that can
be executed in a single time frame, assuming that only one person or machine is
available for execution.
Activity-Selection Problem
• Some points to note here:
• It might not be possible to complete all the activities, since their timings can overlap.
• Two activities, say i and j, are said to be non-conflicting if si >= fj or sj >= fi, where si
and sj denote the starting times of activities i and j respectively, and fi and fj denote the
finishing times of activities i and j respectively.
• A greedy approach can be used to find the solution, since we want to maximize
the count of activities that can be executed. The approach greedily chooses the
activity with the earliest finish time at every step, which yields an optimal solution.
Steps for Activity Selection Problem
• Following are the steps to solve the activity selection problem:
• Step 1: Sort the given activities in ascending order according to their finishing time.
• Step 2: Select the first activity from sorted array act[] and add it to sol[] array.
• Step 3: Repeat steps 4 and 5 for the remaining activities in act[].
• Step 4: If the start time of the currently selected activity is greater than or equal to the finish
time of previously selected activity, then add it to the sol[] array.
• Step 5: Select the next activity in act[] array.
• Step 6: Print the sol[] array.
Algorithm
• GREEDY-ACTIVITY-SELECTOR (s, f)
• n ← length[s]
• A ← {1}
• j ← 1
• for i ← 2 to n
• do if si ≥ fj
• then A ← A ∪ {i}
• j ← i
• return A
• (assumes the activities are already sorted by finish time, as in Step 1; j tracks the most recently selected activity)
Example
• S = (A1 A2 A3 A4 A5 A6 A7 A8 A9 A10)
• Si = (1,2,3,4,7,8,9,9,11,12)
• fi = (3,5,4,7,10,9,11,13,12,14)
Example
• Now, schedule A1
• Next schedule A3 as A1 and A3 are non-interfering.
• Next skip A2 as it is interfering.
• Next, schedule A4 as A1 A3 and A4 are non-interfering, then next, schedule A6 as A1 A3 A4 and A6
are non-interfering.
• Skip A5 as it is interfering.
• Next, schedule A7 as A1 A3 A4 A6 and A7 are non-interfering.
• Next, schedule A9 as A1 A3 A4 A6 A7 and A9 are non-interfering.
Example
• Skip A8 as it is interfering.
• Next, schedule A10 as A1 A3 A4 A6 A7 A9 and A10 are non-interfering.
• Thus, the final activity schedule is: A1, A3, A4, A6, A7, A9, A10
Time Complexity Analysis
• Following are the scenarios for computing the time complexity of Activity
Selection Algorithm:
• Case 1: When the given set of activities is already sorted according to finishing
time, no sorting step is involved; in this case the complexity of the algorithm is O(n)
• Case 2: When the given set of activities is unsorted, we first have to sort it,
e.g., with std::sort from C++'s <algorithm> header. Sorting takes O(n log n),
which then dominates the complexity of the algorithm.
Real-life Applications of Activity Selection
Problem
• Following are some of the real-life applications of this problem:
• Scheduling multiple competing events in a room, such that each event has its own start
and end time.
• Scheduling manufacturing of multiple products on the same machine, such that each
product has its own production timelines.
• Activity Selection is one of the most well-known generic problems used in Operations
Research for dealing with real-life business problems.
Huffman Codes
• All information in computer science is encoded as strings of 1s and 0s.
• The objective of information theory is to transmit information using the fewest
possible bits, in such a way that every encoding is unambiguous.
• This section discusses fixed-length and variable-length encoding, along with
Huffman Encoding, which is the basis for many data encoding schemes
• Encoding, in computers, can be defined as the process of transmitting or storing
a sequence of characters efficiently.
• Fixed-length and variable-length are the two types of encoding schemes
Encoding Schemes
• Fixed-Length encoding - Every character is assigned a binary code using the
same number of bits. Thus, a string like “aabacdad” requires 64 bits (8 bytes)
for storage or transmission if each character uses 8 bits.
• Variable-Length encoding - As opposed to fixed-length encoding, this scheme
uses a variable number of bits for encoding the characters, depending on their
frequency in the given text. Thus, for the string “aabacdad”, the frequencies of
characters ‘a’, ‘b’, ‘c’ and ‘d’ are 4, 1, 1 and 2 respectively. Since ‘a’ occurs
more frequently than ‘b’, ‘c’ and ‘d’, it is assigned the fewest bits, followed
by ‘d’, ‘b’ and ‘c’.
Example
• Suppose we randomly assign binary codes to each character as follows: a → 0, b → 011, c → 111, d → 11
• Thus, the string “aabacdad” gets encoded to 00011011111011 (0 | 0 | 011 | 0 | 111 | 11 | 0 | 11),
using fewer bits than the fixed-length encoding scheme.
• But the real problem lies with the decoding phase. If we try to decode the string 00011011111011, it
is quite ambiguous, since it can be decoded to multiple strings, a few of which are:
• aaadacdad (0 | 0 | 0 | 11 | 0 | 111 | 11 | 0 | 11), aaadbcad (0 | 0 | 0 | 11 | 011 | 111 | 0 | 11),
aabbcb (0 | 0 | 011 | 011 | 111 | 011)
Example
• To prevent such ambiguities during decoding, the encoding phase should satisfy the “prefix rule”,
which states that no binary code should be a prefix of another code. This produces uniquely
decodable codes. The above codes for ‘a’, ‘b’, ‘c’ and ‘d’ do not follow the prefix rule, since the
binary code for a, i.e., 0, is a prefix of the binary code for b, i.e., 011, resulting in ambiguity
during decoding.
• Let's reconsider assigning the binary codes to characters ‘a’, ‘b’, ‘c’ and ‘d’:
• a → 0, b → 11, c → 101, d → 100
• Using the above codes, the string “aabacdad” gets encoded to 001101011000100 (0 | 0 | 11 | 0 | 101 |
100 | 0 | 100). Now, we can decode it back to the string “aabacdad”.
Huffman Encoding
• Huffman Encoding can be used to find a solution to the given problem statement.
• Developed by David Huffman in 1951, this technique is the basis for many data compression
and encoding schemes
• It is a famous algorithm used for lossless data compression
• It follows a Greedy approach, since it deals with generating minimum-length prefix-free
binary codes
• It uses a variable-length encoding scheme for assigning binary codes to characters, depending
on how frequently they occur in the given text. The character that occurs most frequently is
assigned the smallest code, and the one that occurs least frequently gets the largest code
Algorithm Steps
• Step 1- Create a leaf node for each character and build a min heap using all the
nodes (The frequency value is used to compare two nodes in min heap)
• Step 2- Repeat Steps 3 to 5 while heap has more than one node
• Step 3- Extract two nodes, say x and y, with minimum frequency from the heap
• Step 4- Create a new internal node z with x as its left child and y as its right child.
Also, frequency(z)= frequency(x)+frequency(y)
• Step 5- Add z to min heap
• Step 6- Last node in the heap is the root of Huffman tree
Algorithm
• Huffman (C)
• n = |C|
• Q ← C
• for i = 1 to n − 1
• do
• z = Allocate-Node()
• x = left[z] = Extract-Min(Q)
• y = right[z] = Extract-Min(Q)
• f[z] = f[x] + f[y]
• Insert(Q, z)
• return Extract-Min(Q)
Example
• Characters Frequencies
• a 10
• e 15
• i 12
• o 3
• u 4
• s 13
• t 1
Example
[Figures: step-by-step construction of the Huffman tree for the frequencies above, repeatedly merging the two lowest-frequency nodes]
• Characters Binary Codes
• i 00
• s 01
• e 10
• u 1100
• t 11010
• o 11011
• a 111
Time Complexity
• Since Huffman coding uses the min heap data structure for implementing the
priority queue, the complexity is O(n log n). This can be explained as follows:
• Building the min heap takes O(n log n) time (moving an element from root to leaf
requires O(log n) comparisons, and this is done for n/2 elements in the worst case).
• Since building the min heap and the subsequent extract-min/insert operations are
executed in sequence, the algorithmic complexity of the entire process computes
to O(n log n)
Graph
• A Graph is a non-linear data structure
consisting of nodes and edges.
• The nodes are sometimes also referred to as
vertices and the edges are lines or arcs that
connect any two nodes in the graph.
• More formally, a Graph can be defined as a finite set of vertices (or
nodes) together with a set of edges that connect pairs of nodes.
Breadth First Search
• Breadth-First Traversal (or Search) for a graph is like Breadth-First Traversal
of a tree.
• The only catch here is, unlike trees, graphs may contain cycles, so we may
come to the same node again.
• To avoid processing a node more than once, we use a Boolean visited array.
• For simplicity, it is assumed that all vertices are reachable from the starting
vertex.
Example
[Figure: the directed graph used for the traversal below]
Time Complexity
• Following is Breadth First Traversal (starting from vertex 2):
• 2 0 3 1
• Time Complexity: O(V+E), where V is the number of vertices in the graph and E is
the number of edges in the graph.
Depth First Search
• Depth First Traversal (or Search) for a graph is like Depth First Traversal of
a tree.
• The only catch here is, unlike trees, graphs may contain cycles (a node may
be visited twice).
• To avoid processing a node more than once, use a Boolean visited array.
Example
[Figure: DFS traversal of a sample graph]
Complexity Analysis
• Time complexity: O(V + E), where V is the number of vertices and E is the
number of edges in the graph.
• Space Complexity: O(V), since an extra visited array of size V is required.
Topological Sorting
• Topological sorting for a Directed Acyclic Graph (DAG) is a linear ordering of
vertices such that for every directed edge (u, v), vertex u comes before v in the
ordering.
• Topological Sorting for a graph is not possible if the graph is not a DAG.
• In DFS, we print a vertex and then recursively call DFS for its adjacent vertices.
• In topological sorting, we need to print a vertex before its adjacent vertices.
• So Topological sorting is different from DFS.
DFS vs TS
• In DFS, we start from a vertex, print it first, and then recursively call DFS
for its adjacent vertices.
• In topological sorting, we use a temporary stack.
• We don’t print the vertex immediately; we first recursively call topological
sorting for all its adjacent vertices, then push it to a stack. Finally, we print
the contents of the stack (see the sketch after this list).
• Note that a vertex is pushed to the stack only when all its adjacent vertices (and
their adjacent vertices and so on) are already in the stack.
Example
[Figure: topological ordering of a sample DAG]
Complexity Analysis
• Time Complexity: O(V+E).
• The above algorithm is simply DFS with an extra stack, so the time complexity is
the same as DFS, which is O(V+E).
• Auxiliary space: O(V).
• The extra space is needed for the stack.
Strongly Connected Components
• A directed graph is strongly connected if there is a path between all pairs of vertices.
• A strongly connected component (SCC) of a directed graph is a maximal strongly connected subgraph.
Kosaraju’s Algorithm
For each vertex u of the graph, mark u as unvisited. Let L be empty.
For each vertex u of the graph do Visit(u), where Visit(u) is the recursive subroutine:
If u is unvisited then:
Mark u as visited.
For each out-neighbour v of u, do Visit(v).
Prepend u to L.
Otherwise do nothing.
Kosaraju’s Algorithm
For each element u of L in order, do Assign(u,u) where Assign(u,root) is the recursive
subroutine:
If u has not been assigned to a component, then:
Assign u as belonging to the component whose root is root.
For each in-neighbour v of u, do Assign(v,root).
Otherwise do nothing.
Steps
• Create an empty stack ‘S’ and do DFS traversal of a graph. In DFS traversal,
after calling recursive DFS for adjacent vertices of a vertex, push the vertex
to stack.
• Reverse directions of all arcs to obtain the transpose graph.
• One by one, pop a vertex from S while S is not empty. Let the popped vertex
be ‘v’. Take v as the source and do DFS. The DFS starting from v prints the
strongly connected component of v.

More Related Content

What's hot

daa-unit-3-greedy method
daa-unit-3-greedy methoddaa-unit-3-greedy method
daa-unit-3-greedy method
hodcsencet
 

What's hot (20)

Analysis of algorithm
Analysis of algorithmAnalysis of algorithm
Analysis of algorithm
 
Algorithms Lecture 3: Analysis of Algorithms II
Algorithms Lecture 3: Analysis of Algorithms IIAlgorithms Lecture 3: Analysis of Algorithms II
Algorithms Lecture 3: Analysis of Algorithms II
 
Complexity analysis in Algorithms
Complexity analysis in AlgorithmsComplexity analysis in Algorithms
Complexity analysis in Algorithms
 
Time and Space Complexity
Time and Space ComplexityTime and Space Complexity
Time and Space Complexity
 
Algorithm And analysis Lecture 03& 04-time complexity.
 Algorithm And analysis Lecture 03& 04-time complexity. Algorithm And analysis Lecture 03& 04-time complexity.
Algorithm And analysis Lecture 03& 04-time complexity.
 
Branch and bound
Branch and boundBranch and bound
Branch and bound
 
Algorithms Lecture 2: Analysis of Algorithms I
Algorithms Lecture 2: Analysis of Algorithms IAlgorithms Lecture 2: Analysis of Algorithms I
Algorithms Lecture 2: Analysis of Algorithms I
 
Analysis of algorithms
Analysis of algorithmsAnalysis of algorithms
Analysis of algorithms
 
Introduction to algorithms
Introduction to algorithmsIntroduction to algorithms
Introduction to algorithms
 
daa-unit-3-greedy method
daa-unit-3-greedy methoddaa-unit-3-greedy method
daa-unit-3-greedy method
 
Breadth First Search & Depth First Search
Breadth First Search & Depth First SearchBreadth First Search & Depth First Search
Breadth First Search & Depth First Search
 
Design and analysis of algorithms
Design and analysis of algorithmsDesign and analysis of algorithms
Design and analysis of algorithms
 
Performance analysis(Time & Space Complexity)
Performance analysis(Time & Space Complexity)Performance analysis(Time & Space Complexity)
Performance analysis(Time & Space Complexity)
 
Greedy Algorihm
Greedy AlgorihmGreedy Algorihm
Greedy Algorihm
 
Quick sort
Quick sortQuick sort
Quick sort
 
Algorithms Lecture 4: Sorting Algorithms I
Algorithms Lecture 4: Sorting Algorithms IAlgorithms Lecture 4: Sorting Algorithms I
Algorithms Lecture 4: Sorting Algorithms I
 
LR(1) and SLR(1) parsing
LR(1) and SLR(1) parsingLR(1) and SLR(1) parsing
LR(1) and SLR(1) parsing
 
Big o notation
Big o notationBig o notation
Big o notation
 
Compiler Chapter 1
Compiler Chapter 1Compiler Chapter 1
Compiler Chapter 1
 
Floyd Warshall Algorithm
Floyd Warshall AlgorithmFloyd Warshall Algorithm
Floyd Warshall Algorithm
 

Similar to Design and Analysis of Algorithms.pptx

FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
AntareepMajumder
 
Unit II_Searching and Sorting Algorithms.ppt
Unit II_Searching and Sorting Algorithms.pptUnit II_Searching and Sorting Algorithms.ppt
Unit II_Searching and Sorting Algorithms.ppt
HODElex
 

Similar to Design and Analysis of Algorithms.pptx (20)

2. Introduction to Algorithm.pptx
2. Introduction to Algorithm.pptx2. Introduction to Algorithm.pptx
2. Introduction to Algorithm.pptx
 
Unit 1, ADA.pptx
Unit 1, ADA.pptxUnit 1, ADA.pptx
Unit 1, ADA.pptx
 
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
 
Unit II_Searching and Sorting Algorithms.ppt
Unit II_Searching and Sorting Algorithms.pptUnit II_Searching and Sorting Algorithms.ppt
Unit II_Searching and Sorting Algorithms.ppt
 
Chapter1.1 Introduction.ppt
Chapter1.1 Introduction.pptChapter1.1 Introduction.ppt
Chapter1.1 Introduction.ppt
 
Chapter1.1 Introduction to design and analysis of algorithm.ppt
Chapter1.1 Introduction to design and analysis of algorithm.pptChapter1.1 Introduction to design and analysis of algorithm.ppt
Chapter1.1 Introduction to design and analysis of algorithm.ppt
 
Analysis and Design of Algorithms
Analysis and Design of AlgorithmsAnalysis and Design of Algorithms
Analysis and Design of Algorithms
 
Algorithm and C code related to data structure
Algorithm and C code related to data structureAlgorithm and C code related to data structure
Algorithm and C code related to data structure
 
Algorithm Analysis.pdf
Algorithm Analysis.pdfAlgorithm Analysis.pdf
Algorithm Analysis.pdf
 
Algorithms & Complexity Calculation
Algorithms & Complexity CalculationAlgorithms & Complexity Calculation
Algorithms & Complexity Calculation
 
Slides [DAA] Unit 2 Ch 2.pdf
Slides [DAA] Unit 2 Ch 2.pdfSlides [DAA] Unit 2 Ch 2.pdf
Slides [DAA] Unit 2 Ch 2.pdf
 
Chap5 slides
Chap5 slidesChap5 slides
Chap5 slides
 
Python algorithm
Python algorithmPython algorithm
Python algorithm
 
Data Structures - Lecture 1 [introduction]
Data Structures - Lecture 1 [introduction]Data Structures - Lecture 1 [introduction]
Data Structures - Lecture 1 [introduction]
 
Lec1.ppt
Lec1.pptLec1.ppt
Lec1.ppt
 
Searching Algorithms
Searching AlgorithmsSearching Algorithms
Searching Algorithms
 
Intro to Data Structure & Algorithms
Intro to Data Structure & AlgorithmsIntro to Data Structure & Algorithms
Intro to Data Structure & Algorithms
 
Cs 331 Data Structures
Cs 331 Data StructuresCs 331 Data Structures
Cs 331 Data Structures
 
Introduction to Algorithms Complexity Analysis
Introduction to Algorithms Complexity Analysis Introduction to Algorithms Complexity Analysis
Introduction to Algorithms Complexity Analysis
 
Algorithm.pptx
Algorithm.pptxAlgorithm.pptx
Algorithm.pptx
 

More from Syed Zaid Irshad

Basic Concept of Information Technology
Basic Concept of Information TechnologyBasic Concept of Information Technology
Basic Concept of Information Technology
Syed Zaid Irshad
 
Introduction to ICS 1st Year Book
Introduction to ICS 1st Year BookIntroduction to ICS 1st Year Book
Introduction to ICS 1st Year Book
Syed Zaid Irshad
 

More from Syed Zaid Irshad (20)

Operating System.pdf
Operating System.pdfOperating System.pdf
Operating System.pdf
 
DBMS_Lab_Manual_&_Solution
DBMS_Lab_Manual_&_SolutionDBMS_Lab_Manual_&_Solution
DBMS_Lab_Manual_&_Solution
 
Data Structure and Algorithms.pptx
Data Structure and Algorithms.pptxData Structure and Algorithms.pptx
Data Structure and Algorithms.pptx
 
Professional Issues in Computing
Professional Issues in ComputingProfessional Issues in Computing
Professional Issues in Computing
 
Reduce course notes class xi
Reduce course notes class xiReduce course notes class xi
Reduce course notes class xi
 
Reduce course notes class xii
Reduce course notes class xiiReduce course notes class xii
Reduce course notes class xii
 
Introduction to Database
Introduction to DatabaseIntroduction to Database
Introduction to Database
 
C Language
C LanguageC Language
C Language
 
Flowchart
FlowchartFlowchart
Flowchart
 
Algorithm Pseudo
Algorithm PseudoAlgorithm Pseudo
Algorithm Pseudo
 
Computer Programming
Computer ProgrammingComputer Programming
Computer Programming
 
ICS 2nd Year Book Introduction
ICS 2nd Year Book IntroductionICS 2nd Year Book Introduction
ICS 2nd Year Book Introduction
 
Security, Copyright and the Law
Security, Copyright and the LawSecurity, Copyright and the Law
Security, Copyright and the Law
 
Computer Architecture
Computer ArchitectureComputer Architecture
Computer Architecture
 
Data Communication
Data CommunicationData Communication
Data Communication
 
Information Networks
Information NetworksInformation Networks
Information Networks
 
Basic Concept of Information Technology
Basic Concept of Information TechnologyBasic Concept of Information Technology
Basic Concept of Information Technology
 
Introduction to ICS 1st Year Book
Introduction to ICS 1st Year BookIntroduction to ICS 1st Year Book
Introduction to ICS 1st Year Book
 
Using the set operators
Using the set operatorsUsing the set operators
Using the set operators
 
Using subqueries to solve queries
Using subqueries to solve queriesUsing subqueries to solve queries
Using subqueries to solve queries
 

Recently uploaded

Spellings Wk 4 and Wk 5 for Grade 4 at CAPS
Spellings Wk 4 and Wk 5 for Grade 4 at CAPSSpellings Wk 4 and Wk 5 for Grade 4 at CAPS
Spellings Wk 4 and Wk 5 for Grade 4 at CAPS
AnaAcapella
 
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
EADTU
 
QUATER-1-PE-HEALTH-LC2- this is just a sample of unpacked lesson
QUATER-1-PE-HEALTH-LC2- this is just a sample of unpacked lessonQUATER-1-PE-HEALTH-LC2- this is just a sample of unpacked lesson
QUATER-1-PE-HEALTH-LC2- this is just a sample of unpacked lesson
httgc7rh9c
 

Recently uploaded (20)

Spellings Wk 4 and Wk 5 for Grade 4 at CAPS
Spellings Wk 4 and Wk 5 for Grade 4 at CAPSSpellings Wk 4 and Wk 5 for Grade 4 at CAPS
Spellings Wk 4 and Wk 5 for Grade 4 at CAPS
 
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
NO1 Top Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Ex...
 
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptxHMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
 
COMMUNICATING NEGATIVE NEWS - APPROACHES .pptx
COMMUNICATING NEGATIVE NEWS - APPROACHES .pptxCOMMUNICATING NEGATIVE NEWS - APPROACHES .pptx
COMMUNICATING NEGATIVE NEWS - APPROACHES .pptx
 
What is 3 Way Matching Process in Odoo 17.pptx
What is 3 Way Matching Process in Odoo 17.pptxWhat is 3 Way Matching Process in Odoo 17.pptx
What is 3 Way Matching Process in Odoo 17.pptx
 
Exploring_the_Narrative_Style_of_Amitav_Ghoshs_Gun_Island.pptx
Exploring_the_Narrative_Style_of_Amitav_Ghoshs_Gun_Island.pptxExploring_the_Narrative_Style_of_Amitav_Ghoshs_Gun_Island.pptx
Exploring_the_Narrative_Style_of_Amitav_Ghoshs_Gun_Island.pptx
 
Jamworks pilot and AI at Jisc (20/03/2024)
Jamworks pilot and AI at Jisc (20/03/2024)Jamworks pilot and AI at Jisc (20/03/2024)
Jamworks pilot and AI at Jisc (20/03/2024)
 
Our Environment Class 10 Science Notes pdf
Our Environment Class 10 Science Notes pdfOur Environment Class 10 Science Notes pdf
Our Environment Class 10 Science Notes pdf
 
How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17
 
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
 
FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan Fellows
 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
 
QUATER-1-PE-HEALTH-LC2- this is just a sample of unpacked lesson
QUATER-1-PE-HEALTH-LC2- this is just a sample of unpacked lessonQUATER-1-PE-HEALTH-LC2- this is just a sample of unpacked lesson
QUATER-1-PE-HEALTH-LC2- this is just a sample of unpacked lesson
 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and Modifications
 
dusjagr & nano talk on open tools for agriculture research and learning
dusjagr & nano talk on open tools for agriculture research and learningdusjagr & nano talk on open tools for agriculture research and learning
dusjagr & nano talk on open tools for agriculture research and learning
 
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
 
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
 
Graduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - EnglishGraduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - English
 
REMIFENTANIL: An Ultra short acting opioid.pptx
REMIFENTANIL: An Ultra short acting opioid.pptxREMIFENTANIL: An Ultra short acting opioid.pptx
REMIFENTANIL: An Ultra short acting opioid.pptx
 

Design and Analysis of Algorithms.pptx

  • 1. Design & Analysis of Algorithms Syed Zaid Irshad Lecturer, Department of Computer Science, MAJU MS Software Engineering BSc Computer System Engineering
  • 2. Algorithm • An Algorithm is a finite sequence of instructions, each of which has a clear meaning and can be performed with a finite amount of effort in a finite length of time. • No matter what the input values may be, an algorithm terminates after executing a finite number of instructions. • In addition, every algorithm must satisfy the following criteria: • Input: there are zero or more quantities, which are externally supplied; Output: at least one quantity is produced • Definiteness: each instruction must be clear and unambiguous • Finiteness: if we trace out the instructions of an algorithm, then for all cases the algorithm will terminate after a finite number of steps
  • 3. Algorithm • Effectiveness: every instruction must be sufficiently basic that it can in principle be carried out by a person using only pencil and paper.
  • 4. Areas of Study of Algorithms • How to devise algorithms? • Techniques – Incremental, Divide & Conquer, Branch and Bound , Dynamic Programming, Greedy Algorithms, Randomized Algorithm, Backtracking • How to analyze algorithms? • Analysis of Algorithms or performance analysis refer to the task of determining how much computing time & storage an algorithm requires • How to test a program? • Debugging - Debugging is the process of executing programs on sample data sets to determine whether faulty results occur and, if so, to correct them. • Profiling or performance measurement is the process of executing a correct program on data sets and measuring the time and space it takes to compute the results
  • 5. Areas of Study of Algorithms • How to validate algorithms? • Check for Algorithm that it computes the correct answer for all possible legal inputs. algorithm validation, First Phase. • Second phase: Algorithm to Program, Program Proving or Program Verification Solution be stated in two forms: • First Form: Program which is annotated by a set of assertions about the input and output variables of the program, predicate calculus • Second form: is called a specification
  • 6. Performance of Programs • The performance of a program is the amount of computer memory and time needed to run a program. • Time Complexity • Space Complexity
  • 7. Time Complexity • The time needed by an algorithm expressed as a function of the size of a problem is called the time complexity of the algorithm. • The time complexity of a program is the amount of computer time it needs to run to completion. • The limiting behavior of the complexity as size increases is called the asymptotic time complexity. • It is the asymptotic complexity of an algorithm, which ultimately determines the size of problems that can be solved by the algorithm.
  • 8. Space Complexity • The space complexity of a program is the amount of memory it needs to run to completion. • The space need by a program has the following components: • Instruction space: Instruction space is the space needed to store the compiled version of the program instructions. • The compiler used to complete the program into machine code. • The compiler options in effect at the time of compilation • The target computer.
  • 9. Space Complexity • The space need by a program has the following components: • Data space: Data space is the space needed to store all constant and variable values. Data space has two components: • Space needed by constants and simple variables in program. • Space needed by dynamically allocated objects such as arrays and class instances. • Environment stack space: The environment stack is used to save information needed to resume execution of partially completed functions.
  • 10. Algorithm Design Goals • The three basic design goals that one should strive for in a program are: • Try to save Time • A program that runs faster is a better program, so saving time is an obvious goal. • Try to save Space • A program that saves space over a competing program is considered desirable. • Try to save Face • By preventing the program from locking up or generating reams of garbled data.
  • 11. Classification of Algorithms • If “n” is the number of data items to be processed or degree of polynomial or the size of the file to be sorted or searched or the number of nodes in a graph etc. • 1 • Log n • n • n log n • n^2 • n^3 • 2^n
  • 12. Classification of Algorithms • 1 (Constant/Best case) • Next instructions of most programs are executed once or at most only a few times. • If all the instructions of a program have this property, • We say that its running time is a constant. • Log n (Logarithmic/Divide ignore part) • When the running time of a program is logarithmic, the program gets slightly slower as n grows. • This running time commonly occurs in programs that solve a big problem by transforming it into a smaller problem, cutting the size by some constant fraction. • When n is a million, log n is a doubled. Whenever n doubles, log n increases by a constant, but log n does not double until n increases to n^2.
  • 13. Classification of Algorithms • n (Linear/Examine each) • When the running time of a program is linear, it is generally the case that a small amount of processing is done on each input element. • This is the optimal situation for an algorithm that must process n inputs. • n log n (Linear logarithmic/Divide use all parts) • This running time arises for algorithms that solve a problem by breaking it up into smaller sub-problems, solving then independently, and then combining the solutions. • When n doubles, the running time more than doubles.
  • 14. Classification of Algorithms • n^2 (Quadratic/Nested loops) • When the running time of an algorithm is quadratic, it is practical for use only on relatively small problems. • Quadratic running times typically arise in algorithms that process all pairs of data items (perhaps in a double nested loop) whenever n doubles, the running time increases four-fold. • n^3 (Cubic/Nested loops) • Similarly, an algorithm that process triples of data items (perhaps in a triple–nested loop) has a cubic running time and is practical for use only on small problems. • Whenever n doubles, the running time increases eight-fold.
  • 15. Classification of Algorithms • 2^n (Exponential/All subsets) • Few algorithms with exponential running time are likely to be appropriate for practical use, such algorithms arise naturally as “brute–force” solutions to problems. • Whenever n doubles, the running time squares.
  • 16. Complexity of Algorithms • The complexity of an algorithm M is the function f(n) which gives the running time and/or storage space requirement of the algorithm in terms of the size “n” of the input data. • Mostly, the storage space required by an algorithm is simply a multiple of the data size “n”. • Complexity shall refer to the running time of the algorithm.
  • 17. Complexity of Algorithms • The function f(n), gives the running time of an algorithm, depends not only on the size “n” of the input data but also on the data. • The complexity function f(n) for certain cases are: • Best Case : The minimum possible value of f(n) is called the best case. • Average Case : The expected value of f(n). • Worst Case : The maximum value of f(n) for any key possible input.
  • 18. Rate of Growth • The following notations are commonly use notations in performance analysis and used to characterize the complexity of an algorithm: • Big–OH (O) (Upper Bound) • The growth rate of f(n) is less than or equal (<) that of g(n). • Big–OMEGA (Ω) (Lower Bound) • The growth rate of f(n) is greater than or equal to (>) that of g(n). • Big–THETA (ϴ) (Same Order) • The growth rate of f(n) equals (=) the growth rate of g(n).
  • 19. Rate of Growth • Little–OH (o) • 𝑛→∞ 𝑓(𝑛) 𝑔(𝑛) = 0 • The growth rate of f(n) is less than that of g(n). • Little-OMEGA (ω) • The growth rate of f(n) is greater than that of g(n).
  • 20.
  • 21.
  • 22.
  • 23. Analyzing Algorithms n log n n*logn n^2 n^3 2^n 1 0 0 1 1 2 2 1 2 4 8 4 4 2 8 16 64 16 8 3 24 64 512 256 16 4 64 256 4096 65,536 32 5 160 1024 32,768 4,294,967,296 64 6 384 4096 2,62,144 ???????? 128 7 896 16,384 2,097,152 ???????? 256 8 2048 65,536 1,677,216 ????????
  • 24.
  • 25. Amortized Analysis • In an amortized analysis, we average the time required to perform a sequence of data structure operations over all the operations performed. • With amortized analysis, we can show that the average cost of an operation is small, if we average over a sequence of operations, even though a single operation within the sequence might be expensive. • Amortized analysis differs from average-case analysis in that probability is not involved; an amortized analysis guarantees the average performance of each operation in the worst case.
  • 26. Amortized Analysis • Three most common techniques used in amortized analysis: • Aggregate Analysis • Accounting method • Potential method
  • 27. Aggregate Analysis • In which we determine an upper bound T(n) on the total cost of a sequence of n operations. • The average cost per operation is then T(n)/n. • We take the average cost as the amortized cost of each operation .
  • 28. Accounting method • When there is more than one type of operation, each type of operation may have a different amortized cost. • The accounting method overcharges some operations early in the sequence, storing the overcharge as “prepaid credit” on specific objects in the data structure. • Later in the sequence, the credit pays for operations that are charged less than they cost.
  • 29. Potential method • The potential method maintains the credit as the “potential energy” of the data structure instead of associating the credit with individual objects within the data structure. • The potential method, which is like the accounting method in that we determine the amortized cost of each operation and may overcharge operations early on to compensate for undercharges later.
  • 30. The Rule of Sums • Suppose that T1(n) and T2(n) are the running times of two programs fragments P1 and P2, and that T1(n) is O(f(n)) and T2(n) is O(g(n)). • Then T1(n) + T2(n), the running time of P1 followed by P2 is O(max f(n), g(n)), this is called as rule of sums. • For example, suppose that we have three steps whose running times are respectively O(n^2), O(n^3) and O(n. log n). • Then the running time of the first two steps executed sequentially is O (max(n^2, n^3)) which is O(n^3). • The running time of all three together is O(max (n^3, n. log n)) which is O(n^3).
  • 31. The rule of products • If T1(n) and T2(n) are O(f(n)) and O(g(n)) respectively. • Then T1(n)*T2(n) is O(f(n) g(n)). • It follows term the product rule that O(c f(n)) means the same thing as O(f(n)) if “c‟ is any positive constant. • For example, O(n^2/2) is same as O(n^2).
  • 32. The Running time of a program • When solving a problem, we are faced with a choice among algorithms. • The basis for this can be any one of the following: • We would like an algorithm that is easy to understand, code and debug. • We would like an algorithm that makes efficient use of the computer’s resources, especially, one that runs as fast as possible.
  • 33. Measuring the running time of a program • The running time of a program depends on factors such as: • The input to the program. • The quality of code generated by the compiler used to create the object program. • The nature and speed of the instructions on the machine used to execute the program, and • The time complexity of the algorithm underlying the program.
  • 34. Asymptotic Analysis of Algorithms • This approach is based on the asymptotic complexity measure. • This means that we don't try to count the exact number of steps of a program, but how that number grows with the size of the input to the program. • That gives us a measure that will work for different operating systems, compilers and CPUs. • The asymptotic complexity is written using big-O notation.
  • 35. Rules for using big-O • The most important property is that big-O gives an upper bound only. • If an algorithm is O(n^2), it doesn't have to take n^2 steps (or a constant multiple of n^2). But it can't take more than n2. • So, any algorithm that is O(n), is also an O(n^2) algorithm. If this seems confusing, think of big-O as being like "<". • Any number that is < n is also < n^2.
  • 36. Rules for using big-O • Ignoring constant factors: O(c f(n)) = O(f(n)), where c is a constant; e.g., O(20 n^3) = O(n^3) • Ignoring smaller terms: If a<b then O(a+b) = O(b), for example O(n^2+n) = O(n^2) • Upper bound only: If a<b then an O(a) algorithm is also an O(b) algorithm. • n and log n are "bigger" than any constant, from an asymptotic view (that means for large enough n). So, if k is a constant, an O(n + k) algorithm is also O(n), by ignoring smaller terms. Similarly, an O(log n + k) algorithm is also O(log n). • Another consequence of the last item is that an O(n log n + n) algorithm, which is O(n(log n + 1)), can be simplified to O(n log n).
  • 37. Properties of Asymptotic Notations • 1. General Properties: • If f(n) is O(g(n)) then a*f(n) is also O(g(n)); where a is a constant. • Similarly, this property satisfies both Θ and Ω notation. • 2. Transitive Properties: • If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) = O(h(n)) • Similarly, this property satisfies both Θ and Ω notation. • We can say • If f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) = Θ(h(n)) • If f(n) is Ω (g(n)) and g(n) is Ω (h(n)) then f(n) = Ω (h(n))
  • 38. Properties of Asymptotic Notations • 3. Reflexive Properties: • Reflexive properties are always easy to understand after transitive. • If f(n) is given, then f(n) is O(f(n)). Since MAXIMUM VALUE OF f(n) will be f(n) ITSELF! • Hence x = f(n) and y = O(f(n) tie themselves in reflexive relation always. • Example: f(n) = n²; O(n²) i.e., O(f(n)) • Similarly, this property satisfies both Θ and Ω notation. • We can say that: • If f(n) is given, then f(n) is Θ(f(n)). • If f(n) is given, then f(n) is Ω (f(n)).
  • 39. Properties of Asymptotic Notations • 4. Symmetric Properties: • If f(n) is Θ(g(n)) then g(n) is Θ(f(n)). • Example: f(n) = n² and g(n) = n² • then f(n) = Θ(n²) and g(n) = Θ(n²) • This property only satisfies for Θ notation. • 5. Transpose Symmetric Properties: • If f(n) is O(g(n)) then g(n) is Ω (f(n)). • Example: f(n) = n, g(n) = n² • then n is O(n²) and n² is Ω (n) • This property only satisfies O and Ω notations.
  • 40. Properties of Asymptotic Notations • 6. Some More Properties: • If f(n) = O(g(n)) and f(n) = Ω(g(n)) then f(n) = Θ(g(n)) • If f(n) = O(g(n)) and d(n)=O(e(n)) • then f(n) + d(n) = O (max(g(n), e(n))) • Example: f(n) = n i.e., O(n) • d(n) = n² i.e., O(n²) • then f(n) + d(n) = n + n² i.e., O(n²) • If f(n)=O(g(n)) and d(n)=O(e(n)) • then f(n) * d(n) = O(g(n) * e(n)) • Example: f(n) = n i.e., O(n) • d(n) = n² i.e., O(n²) • then f(n) * d(n) = n * n² = n³ i.e., O(n³)
  • 41. Calculating the running time of a program • x = 3*y + 2; • 5 n^3/100 n^2 = n/20 • for (i = 1; i<=n; i++) v[i] = v[i] + 1; • for (i = 1; i<=n; i++) for (j = 1; j<=n; j++) a[i,j] = b[i,j] * x; • for (i = 1; i<=n; i++) for (j = 1; j<=n; j++) C[i, j] = 0; for (k = 1; k<=n; k++) C[i, j] = C[i, j] + A[i, k] * B[k, j];
  • 42. General rules for the analysis of programs • The running time of each assignment read and write statement can usually be taken to be O(1). • The running time of a sequence of statements is determined by the sum rule. • The running time of an if–statement is the cost of conditionally executed statements, plus the time for evaluating the condition • The time to execute a loop is the sum, over all times around the loop, the time to execute the body and the time to evaluate the condition for termination.
  • 43. Recurrence • Many algorithms are recursive in nature. • When we analyze them, we get a recurrence relation for time complexity. • We get running time on an input of size n as a function of n and the running time on inputs of smaller sizes. • For example, in Merge Sort, to sort a given array, we divide it in two halves and recursively repeat the process for the two halves.
  • 44. Recurrence • Time complexity of Merge Sort can be written as T(n) = 2T(n/2) + c*n. • There are mainly three ways for solving recurrences. • Substitution Method • Recurrence Tree Method • Master Method • Iteration Method
  • 45. Substitution Method • One way to solve a divide-and-conquer recurrence equation is to use the iterative substitution method. • In using this method, we assume that the problem size n is fairly large and we than substitute the general form of the recurrence for each occurrence of the function T on the right-hand side. • For example, consider the recurrence 𝑇(𝑛) = 2𝑇 𝒏 𝟐 + 𝑛 • We guess the solution as 𝑇(𝑛) = 𝑂(𝑛 log 𝒏). Now we use induction to prove our guess. • We need to prove that 𝑇(𝑛) <= 𝑐𝑛 log 𝒏. We can assume that it is true for values smaller than n.
  • 46. Substitution Method • 𝑇 𝒏 𝟐 <= 𝑐 𝒏 𝟐 log 𝒏 𝟐 • <= 𝟐 𝑐 𝒏 𝟐 log 𝒏 𝟐 + 𝑛 • = 𝑐𝑛 log 𝒏 − 𝑐𝑛 log 𝟐 + 𝑛 • = 𝑐𝑛 log 𝒏 − 𝑐𝑛 + 𝑛 • <= 𝑐𝑛 log 𝒏
  • 47. Recurrence Tree Method • Another way of characterizing recurrence equations is to use the recursion tree method. • Like the substitution method, this technique uses repeated substitution to solve a recurrence equation, but it differs in that, rather than being an algebraic approach, it is a visual approach. • Example: T(n) = 3T(n/4) + cn²
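As a check on the visual approach, the level-by-level sum for this example can be written out (a standard recursion-tree argument; level i contributes 3^i nodes, each of size n/4^i and local cost c(n/4^i)²):

    T(n) = Σ_{i=0}^{log₄n − 1} 3^i · c(n/4^i)² + Θ(n^{log₄3})
         = cn² · Σ_{i=0}^{log₄n − 1} (3/16)^i + Θ(n^{log₄3})
         < cn² · 1/(1 − 3/16) + o(n²) = (16/13)cn² + o(n²) = O(n²)

Since the root alone already costs cn², T(n) = Ω(n²) as well, hence T(n) = Θ(n²).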
  • 48. (figure: the recursion tree for T(n) = 3T(n/4) + cn²)
  • 49. Master Method • The master theorem is a formula for solving recurrences of the form T(n) = aT(n/b) + f(n), where a ≥ 1, b > 1, and f(n) is asymptotically positive. • This recurrence describes an algorithm that divides a problem of size n into a subproblems, each of size n/b, and solves them recursively.
  • 50. (figure: the three cases of the Master Theorem)
  • 51. Master Method • To find which case T(n) belongs to, compute log_b a; if the value is: • Greater than c, it lies in case 1 • Equal to c, it lies in case 2 • Less than c, it lies in case 3 • where c is the power of n in f(n), i.e., f(n) = Θ(nᶜ logᵏ n)
  • 52. Example • T(n) = 4T(n/4) + 5n • T(n) = 4T(n/5) + 5n • T(n) = 5T(n/4) + 5n
  • 53. T(n) = 5T(n/4) + 5n • a = 5, b = 4, f(n) = 5n, c = 1, k = 0 • Find: log_b a = log₄ 5 ≈ 1.16 • Because c < log_b a, it is case 1 of the Master Theorem • T(n) = Θ(n^{log_b a}) = Θ(n^{1.16})
  • 54. T(n) = 4T(n/4) + 5n • a = 4, b = 4, f(n) = 5n, c = 1, k = 0 • Find: log_b a = log₄ 4 = 1 • Because c = log_b a, it is case 2 of the Master Theorem • T(n) = Θ(n^{log_b a} · log^{k+1} n) = Θ(n¹ · log^{0+1} n) = Θ(n log n)
  • 55. T(n) = 4T(n/5) + 5n • a = 4, b = 5, f(n) = 5n, c = 1, k = 0 • Find: log_b a = log₅ 4 ≈ 0.86 • Because c > log_b a, it is case 3 of the Master Theorem • T(n) = Θ(f(n)) = Θ(n)
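The case-selection rule above is mechanical, so it can be captured in a short Python sketch (the function name is illustrative; it assumes f(n) = Θ(nᶜ logᵏ n), the simplified form the slides use):

    import math

    def master_theorem(a, b, c, k=0):
        # Classify T(n) = a*T(n/b) + Theta(n^c * log^k n), with a >= 1, b > 1.
        e = math.log(a, b)                       # e = log_b(a)
        if math.isclose(c, e):
            return f"Case 2: T(n) = Theta(n^{c} log^{k + 1} n)"
        if c < e:
            return f"Case 1: T(n) = Theta(n^{e:.2f})"
        # Case 3 also needs the regularity condition a*f(n/b) <= c0*f(n) for
        # some c0 < 1; the slides elide it, and it holds for polynomial f(n).
        return f"Case 3: T(n) = Theta(n^{c} log^{k} n)"

    print(master_theorem(5, 4, 1))   # T(n) = 5T(n/4) + 5n -> Case 1, Theta(n^1.16)
    print(master_theorem(4, 4, 1))   # T(n) = 4T(n/4) + 5n -> Case 2, Theta(n log n)
    print(master_theorem(4, 5, 1))   # T(n) = 4T(n/5) + 5n -> Case 3, Theta(n)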
  • 56. Iteration Method • The Iteration Method is also known as the Iterative Method, Backward Substitution, or Iterative Substitution. • It solves a recurrence relation by repeatedly expanding (iterating) the recurrence until a pattern emerges, from which a closed form in terms of n can be read off. • A Closed-Form Solution is an equation that solves a given problem in terms of functions and mathematical operations from a given generally accepted set.
  • 57. • T(n) = 2T(n/2) + 7 • Find what T(n/2) is: • T(n/2) = 2T(n/4) + 7 • Now put T(n/2) in the initial equation: • T(n) = 2(2T(n/4) + 7) + 7 = 4T(n/4) + 21 • By generalizing the equation: • T(n) = 2^i T(n/2^i) + 7(2^i − 1)
  • 58. Example • The recurrence stops when n/2^i = 1, so we can say that: • 2^i = n • log 2^i = log n • i log 2 = log n • i = log n • Now replace i in the general equation: • T(n) = 2^{log n} T(n/2^{log n}) + 7(2^{log n} − 1) • T(n) = nT(n/n) + 7(n − 1) • T(n) = nT(1) + 7(n − 1) • Taking T(1) = 2: T(n) = 2n + 7(n − 1) • T(n) = 9n − 7 • T(n) = O(n)
  • 59–60. Example • Solve T(n) = 2T(n/2) + 4n using the Iteration Method (solution on the next slides)
  • 61. Solution • T(n) = 2T(n/2) + 4n • Find what T(n/2) is: • T(n/2) = 2T(n/4) + 4(n/2) = 2T(n/4) + 2n • Now put T(n/2) in the initial equation: • T(n) = 2(2T(n/4) + 2n) + 4n = 4T(n/4) + 8n • By generalizing the equation: • T(n) = 2^i T(n/2^i) + 4in • The recurrence stops when n/2^i = 1, so we can say that: • 2^i = n • log 2^i = log n • i log 2 = log n • i = log n • Now replace i in the general equation: • T(n) = 2^{log n} T(n/2^{log n}) + 4n log n
  • 62. Solution • T(n) = nT(n/n) + 4n log n • T(n) = nT(1) + 4n log n • Taking T(1) = 1: T(n) = n + 4n log n • The dominant term is 4n log n • T(n) = O(n log n)
  • 63. Incremental Technique • An incremental algorithm is given a sequence of inputs and finds a sequence of solutions that build incrementally while adapting to changes in the input. • Example: Insertion Sort
  • 64. Insertion Sort Iterate from arr[1] to arr[n−1] over the array. Compare the current element (key) to its predecessor. If the key element is smaller than its predecessor, compare it to the elements before. Move the greater elements one position up to make space for the inserted element. Index 0 1 2 3 4 5 6 7 8 9 Element 4 3 2 10 12 1 5 6 7 9
  • 65. Execution Index 0 1 2 3 4 5 6 7 8 9 Element 4 3 2 10 12 1 5 6 7 9 Checking whether index[1] is less than index[0]. It is, in this case, so we swap the two elements. Index 0 1 2 3 4 5 6 7 8 9 Element 3 4 2 10 12 1 5 6 7 9
  • 66. Execution Index 0 1 2 3 4 5 6 7 8 9 Element 3 4 2 10 12 1 5 6 7 9 Now checking whether index[2] is less than index[1]. It is, in this case, so we swap them; we then check whether the element (now at index[1]) is also less than index[0]. It is, so we swap again. Index 0 1 2 3 4 5 6 7 8 9 Element 2 3 4 10 12 1 5 6 7 9
  • 67. Execution Index 0 1 2 3 4 5 6 7 8 9 Element 2 3 4 10 12 1 5 6 7 9 Now checking whether index[3] is less than index[2]. It is not, in this case, so we skip it. Index 0 1 2 3 4 5 6 7 8 9 Element 2 3 4 10 12 1 5 6 7 9
  • 68. Execution Index 0 1 2 3 4 5 6 7 8 9 Element 4 3 2 10 12 1 5 6 7 9 Iteration 1 3 4 2 10 12 1 5 6 7 9 Iteration 2 2 3 4 10 12 1 5 6 7 9 Iteration 3 2 3 4 10 12 1 5 6 7 9 Iteration 4 2 3 4 10 12 1 5 6 7 9 Iteration 5 1 2 3 4 10 12 5 6 7 9 Iteration 6 1 2 3 4 5 10 12 6 7 9 Iteration 7 1 2 3 4 5 6 10 12 7 9 Iteration 8 1 2 3 4 5 6 7 10 12 9 Iteration 9 1 2 3 4 5 6 7 9 10 12
  • 69. Algorithm Design • INSERTION-SORT(index) • for i = 1 to n • key ← index[i] • j ← i – 1 • while j >= 0 and index[j] > key • index[j+1] ← index[j] • j ← j – 1 • End while • index[j+1] ← key • End for
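A direct Python translation of this pseudocode (0-indexed), as a minimal runnable sketch:

    def insertion_sort(arr):
        # In-place insertion sort, mirroring INSERTION-SORT above.
        for i in range(1, len(arr)):
            key = arr[i]               # element to insert into the sorted prefix
            j = i - 1
            while j >= 0 and arr[j] > key:
                arr[j + 1] = arr[j]    # shift greater elements one position up
                j -= 1
            arr[j + 1] = key
        return arr

    print(insertion_sort([4, 3, 2, 10, 12, 1, 5, 6, 7, 9]))
    # [1, 2, 3, 4, 5, 6, 7, 9, 10, 12], matching Iteration 9 of the trace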
  • 70. Algorithm Design • INSERTION-SORT(index) • for i = 1 to 9 • key ← 3 • j ← 0 • while 0 > = 0 and 4 > 3 • index[0+1] = index[1] ← 4 • j ← 0 – 1 = -1 • End while • A[-1+1] = A[0] ← 3 • End for Index 0 1 2 3 4 5 6 7 8 9 Element 4 3 2 10 12 1 5 6 7 9 For(i=1) 3 4 2 10 12 1 5 6 7 9
  • 71. Algorithm Design • INSERTION-SORT(index) • for i = 1 to 9 • key ← 2 • j ← 1 • while 1 >= 0 and 4 > 2 • index[1+1] = index[2] ← 4 • j ← 1 – 1 = 0 • the while condition still holds, so the loop runs again: • while 0 >= 0 and 3 > 2 • index[0+1] = index[1] ← 3 • j ← 0 – 1 = −1 • End while • A[−1+1] = A[0] ← 2 • End for Index 0 1 2 3 4 5 6 7 8 9 Element 4 3 2 10 12 1 5 6 7 9 For(i=2) 2 3 4 10 12 1 5 6 7 9
  • 72. Algorithm Analysis • j | while loop (tj) | Statement# 6 or 7 | for loop • 0 | 2 | 1 | 1 • 1 | 3 | 2 | 1 • 2 | 1 | 0 | 1 • 3 | 1 | 0 | 1 • 4 | 6 | 5 | 1 • 5 | 3 | 2 | 1 • 6 | 3 | 2 | 1 • 7 | 3 | 2 | 1 • 8 | 3 | 2 | 1 • Total | 25 | 16 | 9
  • 73. Algorithm Analysis • INSERTION-SORT(A) Cost Times • for i = 1 to n c0 n • key ← A[i] c1 n − 1 • j ← i – 1 c2 n − 1 • while j >= 0 and A[j] > key c3 Σ tj • A[j+1] ← A[j] c4 Σ (tj − 1) • j ← j – 1 c5 Σ (tj − 1) • End while • A[j+1] ← key c6 n − 1 • (tj is the number of times the while-loop test runs for iteration j; the sums range over all iterations of the for loop)
  • 74. Algorithm Analysis • General Case • T(n) = c0·n + c1(n − 1) + c2(n − 1) + c3 Σ tj + c4 Σ(tj − 1) + c5 Σ(tj − 1) + c6(n − 1)
  • 75. Algorithm Analysis • Worst Case (Reverse Sorted) • Σ tj = n(n+1)/2 − 1, Σ(tj − 1) = n(n−1)/2 • T(n) = c0·n + c1(n − 1) + c2(n − 1) + c3(n(n+1)/2 − 1) + c4·n(n−1)/2 + c5·n(n−1)/2 + c6(n − 1) • T(n) = (c3/2 + c4/2 + c5/2)n² + (c0 + c1 + c2 + c6 + c3/2 − c4/2 − c5/2)n − (c1 + c2 + c3 + c6) • T(n) = an² + bn − c • Rate of Growth • Θ(n²)
  • 76. Properties • Time Complexity: Big-O: O(n²), Big-Omega: Ω(n), Big-Theta: Θ(n²) • Auxiliary Space: O(1) • Boundary Cases: Insertion sort takes maximum time if the elements are sorted in reverse order, and minimum time (order of n) when the elements are already sorted. • Algorithmic Paradigm: Incremental Approach • Sorting In Place: Yes • Stable: Yes • Online: Yes • Uses: Insertion sort is used when the number of elements is small. It is also useful when the input array is almost sorted and only a few elements are misplaced in a big array.
  • 77. Divide-and-Conquer approach • A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. • The solutions to the sub-problems are then combined to give a solution to the original problem. • Example: Merge Sort • A typical Divide and Conquer algorithm solves a problem using the following three steps: • Divide: Break the given problem into sub-problems of the same type. • Conquer: Recursively solve these sub-problems. • Combine: Appropriately combine the answers.
  • 78. Merge Sort Divide array into two parts Sort each part of array Combine results into single array Index 0 1 2 3 4 5 6 7 8 9 Element 4 3 2 10 12 1 5 6 7 9
  • 79. Execution Example 4 3 2 10 12 1 5 6 7 9 4 3 2 10 12 1 5 6 7 9 4 3 2 10 12 1 5 6 7 9 2 10 12 6 7 9 4 3 1 5 3 4 2 10 12 1 5 6 7 9 2 3 4 10 12 1 5 6 7 9 2 3 4 10 12 1 5 6 7 9 1 2 3 4 5 6 7 9 10 12
  • 80. Algorithm Design • mergeSort(A, p, r): • if p >= r • return • q = (p+r)/2 • mergeSort(A, p, q) • mergeSort(A, q+1, r) • merge(A, p, q, r) • //A = array, p = starting index, r = ending index
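A minimal runnable Python version of this pseudocode (0-indexed); the merge step is written inline for brevity rather than as a separate merge(A, p, q, r):

    def merge_sort(a, p=0, r=None):
        # Sorts a[p..r] in place, mirroring mergeSort(A, p, r) above.
        if r is None:
            r = len(a) - 1
        if p >= r:                      # zero or one element: already sorted
            return a
        q = (p + r) // 2
        merge_sort(a, p, q)             # Conquer: sort the left half
        merge_sort(a, q + 1, r)         # Conquer: sort the right half
        left, right = a[p:q + 1], a[q + 1:r + 1]
        i = j = 0
        for k in range(p, r + 1):       # Combine: merge the two sorted runs
            if j >= len(right) or (i < len(left) and left[i] <= right[j]):
                a[k] = left[i]; i += 1
            else:
                a[k] = right[j]; j += 1
        return a

    print(merge_sort([4, 3, 2, 10, 12, 1, 5, 6, 7, 9]))
    # [1, 2, 3, 4, 5, 6, 7, 9, 10, 12]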
  • 81. Algorithm Analysis cost times • c1: 1 • c2: 1 • c3: 1 • c4: n₁ + 1 = n/2 + 1 • c5: n/2 • c6: n₂ + 1 = n/2 + 1 • c7: n/2 • c8: 1 • c9: 1 • c10: 1 • c11: 1 • c12: n + 1 • c13: n • c14: m • c15: m • c16: n − m • c17: n − m
  • 82. Algorithm Analysis • General Case: • T(n) = c1·1 + c2·1 + c3·1 + c4(n/2 + 1) + c5·n/2 + c6(n/2 + 1) + c7·n/2 + c8·1 + c9·1 + c10·1 + c11·1 + c12(n + 1) + c13·n + c14·m + c15·m + c16(n − m) + c17(n − m) • = (c4/2 + c5/2 + c6/2 + c7/2 + c12 + c13 + c16 + c17)n + (c14 + c15 − c16 − c17)m + (c1 + c2 + c3 + c4 + c6 + c8 + c9 + c10 + c11 + c12) • = an + bm + c • = Θ(n)
  • 83. Algorithm Analysis • Best/Worst/Average Case: • Levels of array division: log n • Number of levels including the base: log n + 1 • Finding the middle point: O(1) • Merging at each level: O(n) • Multiplying: n·(log n + 1) ≈ n log n, so T(n) = O(n log n)
  • 84. Properties • Time Complexity: O(n log n) • Auxiliary Space: O(n) • Boundary Cases: — • Algorithmic Paradigm: Divide and Conquer • Sorting In Place: No • Stable: Yes • Online: No • Uses: Merge Sort is useful for sorting linked lists in O(n log n) time.
  • 85. Heap Sort • Heap • Data Structure that manages information • Array represented as a Near Complete Binary Tree: Each level, except possibly the last, is filled, and all nodes in the last level are as far left as possible • A.length and A.heap-size
  • 86. Algorithm Design • Heap • height (tree) and height (node)
  • 87. Algorithm Design • Max-Heap: A[PARENT(i)] ≥ A[i] for every node i, except for the Root • Min-Heap: A[PARENT(i)] ≤ A[i] for every node i, except for the Root
  • 91. Algorithm Design 𝑖 = 2 𝑀𝐴𝑋 − 𝐻𝐸𝐴𝑃𝐼𝐹𝑌 𝐴, 2 𝑙 = 𝐿𝐸𝐹𝑇 𝑖 = 𝐿𝐸𝐹𝑇 2 = 2𝑖 = 2 ∗ 2 = 4 𝑟 = 𝑅𝐼𝐺𝐻𝑇 𝑖 = 𝑅𝐼𝐺𝐻𝑇 2 = 2𝑖 + 1 = 2 ∗ 2 + 1 = 5 𝑖𝑓 𝑙 = 4 ≤ 𝐴. ℎ𝑒𝑎𝑝 − 𝑠𝑖𝑧𝑒 = 10 𝑎𝑛𝑑 𝐴 𝑙 = 𝐴 4 = 14 > 𝐴 𝑖 = 𝐴 2 = 4 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 𝑙 = 4 𝑖𝑓 𝑟 = 5 ≤ 𝐴. ℎ𝑒𝑎𝑝 − 𝑠𝑖𝑧𝑒 = 10 𝑎𝑛𝑑 𝐴 𝑟 = 𝐴 5 = 7 > 𝐴 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 𝐴 4 = 14  𝑖𝑓 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 4 ≠ 𝑖 = 2 𝑒𝑥𝑐ℎ𝑎𝑛𝑔𝑒 𝐴 𝑖 = 𝐴 2 𝑤𝑖𝑡ℎ 𝐴 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 𝐴 4 𝑀𝐴𝑋 − 𝐻𝐸𝐴𝑃𝐼𝐹𝑌 𝐴, 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 4
  • 92. Algorithm Design 𝑖 = 4 𝑀𝐴𝑋 − 𝐻𝐸𝐴𝑃𝐼𝐹𝑌 𝐴, 4 𝑙 = 𝐿𝐸𝐹𝑇 𝑖 = 𝐿𝐸𝐹𝑇 4 = 2𝑖 = 2 ∗ 4 = 8 𝑟 = 𝑅𝐼𝐺𝐻𝑇 𝑖 = 𝑅𝐼𝐺𝐻𝑇 4 = 2𝑖 + 1 = 2 ∗ 4 + 1 = 9 𝑖𝑓 𝑙 = 8 ≤ 𝐴. ℎ𝑒𝑎𝑝 − 𝑠𝑖𝑧𝑒 = 10 𝑎𝑛𝑑 𝐴 𝑙 = 𝐴 8 = 2 > 𝐴 𝑖 = 𝐴 4 = 4  𝑒𝑙𝑠𝑒 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 𝑖 = 4 𝑖𝑓 𝑟 = 9 ≤ 𝐴. ℎ𝑒𝑎𝑝 − 𝑠𝑖𝑧𝑒 = 10 𝑎𝑛𝑑 𝐴 𝑟 = 𝐴 9 = 8 > 𝐴 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 𝐴 4 = 4 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 𝑟 = 9 𝑖𝑓 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 9 ≠ 𝑖 = 4 𝑒𝑥𝑐ℎ𝑎𝑛𝑔𝑒 𝐴 𝑖 = 𝐴 4 𝑤𝑖𝑡ℎ 𝐴 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 𝐴 9 𝑀𝐴𝑋 − 𝐻𝐸𝐴𝑃𝐼𝐹𝑌 𝐴, 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 9
  • 93. Algorithm Design 𝑖 = 9 𝑀𝐴𝑋 − 𝐻𝐸𝐴𝑃𝐼𝐹𝑌 𝐴, 9 𝑙 = 𝐿𝐸𝐹𝑇 𝑖 = 𝐿𝐸𝐹𝑇 9 = 2𝑖 = 2 ∗ 9 = 18 𝑟 = 𝑅𝐼𝐺𝐻𝑇 𝑖 = 𝑅𝐼𝐺𝐻𝑇 9 = 2𝑖 + 1 = 2 ∗ 9 + 1 = 19 𝑖𝑓 𝑙 = 18 ≤ 𝐴. ℎ𝑒𝑎𝑝 − 𝑠𝑖𝑧𝑒 = 10  𝑒𝑙𝑠𝑒 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 𝑖 = 9 𝑖𝑓 𝑟 = 19 ≤ 𝐴. ℎ𝑒𝑎𝑝 − 𝑠𝑖𝑧𝑒 = 10  𝑖𝑓 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 9 ≠ 𝑖 = 9
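The traced procedure, as a 0-indexed Python sketch; the BUILD-MAX-HEAP/HEAPSORT driver is a standard completion added here so the example runs end to end (the slides index children as 2i and 2i+1 because they use 1-based arrays):

    def max_heapify(A, i, heap_size):
        # Float A[i] down until the subtree rooted at i is a max-heap.
        l, r = 2 * i + 1, 2 * i + 2
        largest = i
        if l < heap_size and A[l] > A[largest]:
            largest = l
        if r < heap_size and A[r] > A[largest]:
            largest = r
        if largest != i:
            A[i], A[largest] = A[largest], A[i]
            max_heapify(A, largest, heap_size)   # recurse on the affected subtree

    def heap_sort(A):
        n = len(A)
        for i in range(n // 2 - 1, -1, -1):      # BUILD-MAX-HEAP, bottom up
            max_heapify(A, i, n)
        for end in range(n - 1, 0, -1):          # repeatedly extract the maximum
            A[0], A[end] = A[end], A[0]
            max_heapify(A, 0, end)
        return A

    print(heap_sort([4, 1, 3, 2, 16, 9, 10, 14, 8, 7]))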
  • 95. Heap Sort: Analysis • MAX-HEAPIFY cost times (Worst Case) 𝑐1 1 𝑐2 1 𝑐3 1 𝑐4 1/2 𝑐5 0 𝑐6 1 𝑐7 1 2 𝑐8 1 𝑐9 1 𝑐10 ?
  • 96. Heap Sort: Analysis • MAX-HEAPIFY (Worst Case) • T(n) ≤ T(2n/3) + Θ(1) = O(log n)
  • 97. Algorithm Analysis • The Worst Case occurs when the last level of the Heap is exactly (or at least) half-full: • A Heap is a Near Complete Binary Tree (the left sub-tree of any node is always larger than or equal in size to its right sub-tree) • In the Worst Case, recursion takes place as many times as possible, which happens when at least the left sub-tree is completely filled • To find an upper bound on the size of the sub-trees (the maximum number of recursive calls), we only need the maximum size of the left sub-tree • Proof
  • 99. Algorithm Analysis • MAX-HEAPIFY cost times (Best Case) times (Average Case) 𝑐1 1 1 𝑐2 1 1 𝑐3 1 1 𝑐4 0 1 2 𝑐5 1 1 2 𝑐6 1 1 𝑐7 0 1 2 𝑐8 1 1 𝑐9 0 1 2 𝑐10 0 ?
  • 100. Algorithm Analysis • Probabilities of Mutually Exclusive Events get summed up 𝑃 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 ≠ 𝑖 OR 𝑃 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 𝑖 = 1 2 + 1 2 = 1 • Probabilities of Independent Events get multiplied 𝑃 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 ≠ 𝑖 = 1 2 = 𝑃 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 𝑙 AND/OR 𝑃 𝑙𝑎𝑟𝑔𝑒𝑠𝑡 = 𝑟 = 1 2 × 1 2
  • 101. Algorithm Analysis • MAX-HEAPIFY (Best Case): T(n) = Θ(1) • MAX-HEAPIFY (Average Case): T(n) ≤ T((2n/3)/2) + Θ(1) = O((log n)/2) = O(log n) = O(h)
  • 102. Quick Sort • Efficient algorithm for sorting many elements via comparisons • Divide-and-Conquer approach
  • 103. Quick Sort • It picks an element as pivot and partitions the given array around the picked pivot. There are many different versions of quicksort that pick the pivot in different ways: 1. Always pick the first element as pivot. 2. Always pick the last element as pivot (used in the trace below). 3. Pick a random element as pivot. 4. Pick the median as pivot.
  • 104. Algorithm Design • To sort an entire Array, the initial call is • QUICKSORT (𝑨, 𝟏, 𝑨. 𝒍𝒆𝒏𝒈𝒕𝒉)
  • 106. Algorithm Design 𝑨 = {𝟐, 𝟖, 𝟕, 𝟏, 𝟑, 𝟓, 𝟔, 𝟒} 1 2 3 4 5 6 7 8 𝑝 = 1, 𝑟 = 8 𝑃𝐴𝑅𝑇𝐼𝑇𝐼𝑂𝑁 𝐴, 𝑝, 𝑟 = 𝑃𝐴𝑅𝑇𝐼𝑇𝐼𝑂𝑁 𝐴, 1, 8 𝑥 = 𝐴 𝑟 = 𝐴 8 = 4 𝑖 = 𝑝 − 1 = 1 − 1 = 0 Iteration 1: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑗 = 𝑝 = 1 𝑖𝑓 𝐴 𝑗 = 𝐴 1 = 2 ≤ 𝑥 = 4 2 8 7 1 3 5 6 4
  • 107. Algorithm Design 𝑖 = 𝑖 + 1 = 0 + 1 = 1 𝑒𝑥𝑐ℎ𝑎𝑛𝑔𝑒 A i = A 1 = 2 with A j = A 1 = 2 Iteration 2: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑗 = 2 𝑖𝑓 𝐴 𝑗 = 𝐴 2 = 8 ≤ 𝑥 = 4 Iteration 3: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑗 = 3 𝑖𝑓 𝐴 𝑗 = 𝐴 3 = 7 ≤ 𝑥 = 4 2 8 7 1 3 5 6 4
  • 108. Algorithm Design Iteration 4: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑗 = 4 𝑖𝑓 𝐴 𝑗 = 𝐴 4 = 1 ≤ 𝑥 = 4 𝑖 = 𝑖 + 1 = 1 + 1 = 2 𝑒𝑥𝑐ℎ𝑎𝑛𝑔𝑒 A i = A 2 = 8 with A j = A 4 = 1 Iteration 5: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑗 = 5 𝑖𝑓 𝐴 𝑗 = 𝐴 5 = 3 ≤ 𝑥 = 4 𝑖 = 𝑖 + 1 = 2 + 1 = 3 𝑒𝑥𝑐ℎ𝑎𝑛𝑔𝑒 A i = A 3 = 7 with A j = A 5 = 3 2 1 3 8 7 5 6 4 2 1 7 8 3 5 6 4
  • 109. Algorithm Design Iteration 6: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑗 = 6 𝑖𝑓 𝐴 𝑗 = 𝐴 6 = 5 ≤ 𝑥 = 4 Iteration 7: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑗 = 7 𝑖𝑓 𝐴 𝑗 = 𝐴 7 = 6 ≤ 𝑥 = 4 Iteration 8: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑗 = 8 𝑒𝑥𝑐ℎ𝑎𝑛𝑔𝑒 A i + 1 = A 3 + 1 = A 4 = 8 with A r = A 8 = 4 𝑟𝑒𝑡𝑢𝑟𝑛 𝑖 + 1 = 3 + 1 = 4 2 1 3 4 7 5 6 8
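The traced PARTITION and its QUICKSORT driver as a 0-indexed Python sketch (pivot = last element, as in version 2 above):

    def partition(A, p, r):
        # Lomuto partition as traced above: pivot x = A[r].
        x = A[r]
        i = p - 1
        for j in range(p, r):
            if A[j] <= x:
                i += 1
                A[i], A[j] = A[j], A[i]   # grow the region of elements <= pivot
        A[i + 1], A[r] = A[r], A[i + 1]   # place the pivot between the regions
        return i + 1

    def quicksort(A, p=0, r=None):
        if r is None:
            r = len(A) - 1
        if p < r:
            q = partition(A, p, r)
            quicksort(A, p, q - 1)
            quicksort(A, q + 1, r)
        return A

    print(quicksort([2, 8, 7, 1, 3, 5, 6, 4]))   # [1, 2, 3, 4, 5, 6, 7, 8]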
  • 111. Algorithm Analysis cost times • c1: 1 • c2: 1 • c3: n • c4: n − 1 • c5: (n − 1)/2 • c6: (n − 1)/2 • c7: 1 • c8: 1
  • 112. Algorithm Analysis • T(n) = c1·1 + c2·1 + c3·n + c4(n − 1) + c5(n − 1)/2 + c6(n − 1)/2 + c7·1 + c8·1 • = (c3 + c4 + c5/2 + c6/2)n + (c1 + c2 − c4 − c5/2 − c6/2 + c7 + c8) • = an + b = Θ(n) • Best Case, Worst Case, or Average Case?
  • 113. Algorithm Analysis cost times (Worst Case: Pivot is largest) times (Best Case: Pivot is smallest) 𝑐1 1 1 𝑐2 1 1 𝑐3 𝑛 𝑛 𝑐4 𝑛 − 1 𝑛 − 1 𝑐5 𝑛 − 1 0 𝑐6 𝑛 − 1 0 𝑐7 1 1 𝑐8 1 1
  • 114. Algorithm Analysis • Worst Case: • The worst case occurs when the partition process always picks the greatest or smallest element as pivot. • If we consider the above partition strategy, where the last element is always picked as pivot, the worst case occurs when the array is already sorted in increasing or decreasing order. • The recurrence for the worst case: • T(n) = T(0) + T(n−1) + Θ(n) • which is equivalent to • T(n) = T(n−1) + Θ(n) • The solution of the above recurrence is Θ(n²).
  • 115. Algorithm Analysis • Best Case: • The best case occurs when the partition process always picks the middle element as pivot. The recurrence for the best case: • T(n) = 2T(n/2) + Θ(n) • The solution of the above recurrence is Θ(n log n). • It can be solved using case 2 of the Master Theorem.
  • 116. Algorithm Analysis • Average Case: • To do average case analysis, we would need to consider all possible permutations of the array and compute the time taken by each, which is not easy. • We can get an idea of the average case by considering the case when the partition puts O(n/10) elements in one set and O(9n/10) elements in the other. The recurrence for this case: • T(n) = T(n/10) + T(9n/10) + Θ(n) • The solution of the above recurrence is also O(n log n).
  • 117. Randomized Quick Sort: Analysis • Average-case partitioning (Unbalanced partitioning) • Random Sampling 𝑇(𝑛) = 𝛩(𝑛)
  • 118. Randomized Quick Sort: Analysis • Average-case partitioning (Unbalanced partitioning) • T(n) = O(n log n)
  • 119. Counting Sort • Assumptions: • Each of the 𝑛 input elements is an integer in the range: 0 𝑡𝑜 𝑘, where 𝑘 is an integer • When 𝑘 = 𝑂(𝑛), 𝑻(𝒏) = 𝜣(𝒏) • Determines for each input element 𝑥, the number of elements less than 𝑥 • Places element 𝑥 into correct position in array • External Arrays required: • 𝐵[1 … 𝑛]: Sorted Output • 𝐶[0 … 𝑘]: Temporary Storage
  • 121. Algorithm Design 𝑨 = {𝟐, 𝟓, 𝟑, 𝟎, 𝟐, 𝟑, 𝟎, 𝟑} 𝑘 = 5 𝐶𝑂𝑈𝑁𝑇𝐼𝑁𝐺 − 𝑆𝑂𝑅𝑇 𝐴, 𝐵, 𝑘 = 𝐶𝑂𝑈𝑁𝑇𝐼𝑁𝐺 − 𝑆𝑂𝑅𝑇 𝐴, 𝐵, 5 𝑙𝑒𝑡 𝐶 0 … 5 𝑏𝑒 𝑎 𝑛𝑒𝑤 𝑎𝑟𝑟𝑎𝑦 C 𝑓𝑜𝑟 𝑖 = 0 𝑡𝑜 5 0 1 2 3 4 5 𝐶 𝑖 = 0 C 0 1 2 3 4 5 2 5 3 0 2 3 0 3 0 0 0 0 0 0
  • 122. Algorithm Design 𝑓𝑜𝑟 𝑗 = 1 𝑡𝑜 𝐴. 𝑙𝑒𝑛𝑔𝑡ℎ = 8 𝐶 A j = 𝐶 𝐴 1 = 𝐶 2 = 𝐶 A j + 1 = 𝐶 2 + 1 = 0 + 1 = 1 𝐶 A j = 𝐶 𝐴 2 = 𝐶 5 = 𝐶 A j + 1 = 𝐶 5 + 1 = 0 + 1 = 1 𝐶 A j = 𝐶 𝐴 3 = 𝐶 3 = 𝐶 A j + 1 = 𝐶 3 + 1 = 0 + 1 = 1 𝐶 A j = 𝐶 𝐴 4 = 𝐶 0 = 𝐶 A j + 1 = 𝐶 0 + 1 = 0 + 1 = 1 𝐶 A j = 𝐶 𝐴 5 = 𝐶 2 = 𝐶 A j + 1 = 𝐶 2 + 1 = 1 + 1 = 2 𝐶 A j = 𝐶 𝐴 6 = 𝐶 3 = 𝐶 A j + 1 = 𝐶 3 + 1 = 1 + 1 = 2 𝐶 A j = 𝐶 𝐴 7 = 𝐶 0 = 𝐶 A j + 1 = 𝐶 0 + 1 = 1 + 1 = 2 𝐶 A j = 𝐶 𝐴 8 = 𝐶 3 = 𝐶 A j + 1 = 𝐶 3 + 1 = 2 + 1 = 3 C 0 1 2 3 4 5 2 0 2 3 0 1
  • 123. Algorithm Design 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑘 = 5 𝐶 𝑖 = 𝐶 1 = 𝐶 𝑖 + 𝐶 𝑖 − 1 = 𝐶 1 + C 0 = 0 + 2 = 2 𝐶 𝑖 = 𝐶 2 = 𝐶 𝑖 + 𝐶 𝑖 − 1 = 𝐶 2 + C 1 = 2 + 2 = 4 𝐶 𝑖 = 𝐶 3 = 𝐶 𝑖 + 𝐶 𝑖 − 1 = 𝐶 3 + C 2 = 3 + 4 = 7 𝐶 𝑖 = 𝐶 4 = 𝐶 𝑖 + 𝐶 𝑖 − 1 = 𝐶 4 + C 3 = 0 + 7 = 7 𝐶 𝑖 = 𝐶 5 = 𝐶 𝑖 + 𝐶 𝑖 − 1 = 𝐶 5 + C 4 = 1 + 7 = 8 C 0 1 2 3 4 5 2 2 4 7 7 8
  • 124. Algorithm Design • (for j = A.length = 8 downto 1) • B[C[A[8]]] = B[C[3]] = B[7] = A[8] = 3; C[A[8]] = C[3] = C[3] − 1 = 7 − 1 = 6 • B[C[A[7]]] = B[C[0]] = B[2] = A[7] = 0; C[A[7]] = C[0] = C[0] − 1 = 2 − 1 = 1 • B[C[A[6]]] = B[C[3]] = B[6] = A[6] = 3; C[A[6]] = C[3] = C[3] − 1 = 6 − 1 = 5 • B[C[A[5]]] = B[C[2]] = B[4] = A[5] = 2; C[A[5]] = C[2] = C[2] − 1 = 4 − 1 = 3 • B[C[A[4]]] = B[C[0]] = B[1] = A[4] = 0; C[A[4]] = C[0] = C[0] − 1 = 1 − 1 = 0
  • 125. Algorithm Design • B[C[A[3]]] = B[C[3]] = B[5] = A[3] = 3; C[A[3]] = C[3] = C[3] − 1 = 5 − 1 = 4 • B[C[A[2]]] = B[C[5]] = B[8] = A[2] = 5; C[A[2]] = C[5] = C[5] − 1 = 8 − 1 = 7 • B[C[A[1]]] = B[C[2]] = B[3] = A[1] = 2; C[A[1]] = C[2] = C[2] − 1 = 3 − 1 = 2 • B (1 2 3 4 5 6 7 8): 0 0 2 2 3 3 3 5 • C (0 1 2 3 4 5): 0 2 2 4 7 7
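The complete procedure as a Python sketch (0-indexed; C first counts occurrences, then holds the number of elements ≤ each value, exactly as traced above):

    def counting_sort(A, k):
        # COUNTING-SORT for integers in 0..k; stable, returns the output B.
        C = [0] * (k + 1)
        for a in A:                 # C[v] = number of occurrences of v
            C[a] += 1
        for i in range(1, k + 1):   # C[v] = number of elements <= v
            C[i] += C[i - 1]
        B = [0] * len(A)
        for a in reversed(A):       # scan right-to-left to preserve stability
            C[a] -= 1
            B[C[a]] = a
        return B

    print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], k=5))
    # [0, 0, 2, 2, 3, 3, 3, 5]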
  • 127. Algorithm Analysis cost times 𝑐1 1 𝑐2 𝑘 + 2 𝑐3 𝑘 + 1 𝑐4 𝑛 + 1 𝑐5 𝑛 0 1 𝑐7 𝑘 + 1 𝑐8 𝑘 0 1 𝑐10 𝑛 + 1 𝑐11 𝑛 𝑐12 𝑛
  • 128. Algorithm Analysis • General Case • T(n) = c1·1 + c2(k + 2) + c3(k + 1) + c4(n + 1) + c5·n + c7(k + 1) + c8·k + c10(n + 1) + c11·n + c12·n • = (c4 + c5 + c10 + c11 + c12)n + (c2 + c3 + c7 + c8)k + (c1 + 2c2 + c3 + c4 + c7 + c10) • = an + bk + c = Θ(n + k), which is Θ(n) when k = O(n)
  • 129. Radix Sort • Assumptions: • Each of the n input elements is a (maximum) d-digit integer in the range 0 to k, where k is an integer • When d ≪ n, T(n) = Θ(n) • Sorts repeatedly on each digit column, starting from the Least Significant digit • Requires d passes to sort all elements • Application: • Sort records using multiple fields
  • 131. Algorithm Design • d → Range of Values (0 → 10^d − 1) • 1 → 0 … 9 • 2 → 0 … 99 • 3 → 0 … 999 • … • Maximum Value = k = 10^d − 1 • log₁₀ k ≈ d ⇒ d ≪ k
  • 132. Algorithm Analysis • cost times: c1: d + 1, Θ(n): d • General Case • T(n) = c1(d + 1) + Θ(n)·d = d·c1 + c1 + d·Θ(n) = Θ(n), if d ≪ n (which is true, since d ≪ k for Radix Sort and k ≤ n for Counting Sort) • T(n) = Θ(n) (based on Counting Sort; see the Table for other fields too) • Best Case, Worst Case, or Average Case?
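A Python sketch of LSD radix sort: one stable counting sort per decimal digit, d passes in total (digit extraction via (a // 10^p) % 10 is an implementation choice, not from the slides):

    def radix_sort(A, d):
        # Sort d-digit non-negative decimal integers, least significant digit first.
        for exp in (10 ** p for p in range(d)):
            C = [0] * 10
            for a in A:
                C[(a // exp) % 10] += 1
            for i in range(1, 10):
                C[i] += C[i - 1]
            B = [0] * len(A)
            for a in reversed(A):          # stability keeps earlier passes valid
                digit = (a // exp) % 10
                C[digit] -= 1
                B[C[digit]] = a
            A = B
        return A

    print(radix_sort([329, 457, 657, 839, 436, 720, 355], d=3))
    # [329, 355, 436, 457, 657, 720, 839]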
  • 133. Bucket Sort • Assumptions: • Input is drawn from a Uniform distribution • Input is distributed uniformly and independently over the interval [0, 1) • Divides the [0, 1) half-open interval into n equal-sized sub-intervals or Buckets and distributes the n keys into the Buckets • Bucket i holds values in the interval [i/n, (i + 1)/n) • Sorts the keys in each Bucket in order • External Array required: • B[0 … n − 1] of Linked Lists (Buckets): Temporary Storage • Without the assumption of a Uniform distribution, Bucket Sort may still run in linear time, as long as Σ_{i=0}^{n−1} E[nᵢ²] = Θ(n)
  • 135. Algorithm Design 𝑨 = {𝟎. 𝟕𝟖, 𝟎. 𝟏𝟕, 𝟎. 𝟑𝟗, 𝟎. 𝟐𝟔, 𝟎. 𝟕𝟐, 𝟎. 𝟗𝟒, 𝟎. 𝟐𝟏, 𝟎. 𝟏𝟐, 𝟎. 𝟐𝟑, 𝟎. 𝟔𝟖} 1 2 3 4 5 6 7 8 9 10 𝑛 = 𝐴. 𝑙𝑒𝑛𝑔𝑡ℎ = 10 𝑙𝑒𝑡 𝐵 0 … 9 𝑏𝑒 𝑎 𝑛𝑒𝑤 𝑎𝑟𝑟𝑎𝑦 B 𝑓𝑜𝑟 𝑖 = 0 𝑡𝑜 𝑛 − 1 = 10 − 1 = 9 0 1 2 3 4 5 6 7 8 9 𝑚𝑎𝑘𝑒 𝐵 𝑖 𝑎𝑛 𝑒𝑚𝑝𝑡𝑦 𝑙𝑖𝑠𝑡 B Iteration 1: 0 1 2 3 4 5 6 7 8 9 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑖 = 1 𝑖𝑛𝑠𝑒𝑟𝑡 𝐴 𝑖 = 𝐴 1 𝑖𝑛𝑡𝑜 𝑙𝑖𝑠𝑡 𝐵 𝑛𝐴 𝑖 = 𝐵 10 ∗ 𝐴 1 = 𝐵 10 ∗ 0.78 = 𝐵 7.8 = 𝐵 7 0.78 0.17 0.39 0.26 0.72 0.94 0.21 0.12 0.23 0.68 / / / / / / / / / /
  • 136. Algorithm Design Iteration 2: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑖 = 2 𝑖𝑛𝑠𝑒𝑟𝑡 𝐴 𝑖 = 𝐴 2 𝑖𝑛𝑡𝑜 𝑙𝑖𝑠𝑡 𝐵 𝑛𝐴 𝑖 = 𝐵 10 ∗ 𝐴 2 = 𝐵 10 ∗ 0.17 = 𝐵 1.7 = 𝐵 1 Iteration 3: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑖 = 3 𝑖𝑛𝑠𝑒𝑟𝑡 𝐴 𝑖 = 𝐴 3 𝑖𝑛𝑡𝑜 𝑙𝑖𝑠𝑡 𝐵 𝑛𝐴 𝑖 = 𝐵 10 ∗ 𝐴 3 = 𝐵 10 ∗ 0.39 = 𝐵 3.9 = 𝐵 3
  • 137. Algorithm Design Iteration 4: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑖 = 4 𝑖𝑛𝑠𝑒𝑟𝑡 𝐴 𝑖 = 𝐴 4 𝑖𝑛𝑡𝑜 𝑙𝑖𝑠𝑡 𝐵 𝑛𝐴 𝑖 = 𝐵 10 ∗ 𝐴 4 = 𝐵 10 ∗ 0.26 = 𝐵 2.6 = 𝐵 2 Iteration 5: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑖 = 5 𝑖𝑛𝑠𝑒𝑟𝑡 𝐴 𝑖 = 𝐴 5 𝑖𝑛𝑡𝑜 𝑙𝑖𝑠𝑡 𝐵 𝑛𝐴 𝑖 = 𝐵 10 ∗ 𝐴 5 = 𝐵 10 ∗ 0.72 = 𝐵 7.2 = 𝐵 7
  • 138. Algorithm Design Iteration 6: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑖 = 6 𝑖𝑛𝑠𝑒𝑟𝑡 𝐴 𝑖 = 𝐴 6 𝑖𝑛𝑡𝑜 𝑙𝑖𝑠𝑡 𝐵 𝑛𝐴 𝑖 = 𝐵 10 ∗ 𝐴 6 = 𝐵 10 ∗ 0.94 = 𝐵 9.4 = 𝐵 9 Iteration 7: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑖 = 7 𝑖𝑛𝑠𝑒𝑟𝑡 𝐴 𝑖 = 𝐴 7 𝑖𝑛𝑡𝑜 𝑙𝑖𝑠𝑡 𝐵 𝑛𝐴 𝑖 = 𝐵 10 ∗ 𝐴 7 = 𝐵 10 ∗ 0.21 = 𝐵 2.1 = 𝐵 2
  • 139. Algorithm Design Iteration 8: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑖 = 8 𝑖𝑛𝑠𝑒𝑟𝑡 𝐴 𝑖 = 𝐴 8 𝑖𝑛𝑡𝑜 𝑙𝑖𝑠𝑡 𝐵 𝑛𝐴 𝑖 = 𝐵 10 ∗ 𝐴 8 = 𝐵 10 ∗ 0.12 = 𝐵 1.2 = 𝐵 1 Iteration 9: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑖 = 9 𝑖𝑛𝑠𝑒𝑟𝑡 𝐴 𝑖 = 𝐴 9 𝑖𝑛𝑡𝑜 𝑙𝑖𝑠𝑡 𝐵 𝑛𝐴 𝑖 = 𝐵 10 ∗ 𝐴 9 = 𝐵 10 ∗ 0.23 = 𝐵 2.3 = 𝐵 2
  • 140. Algorithm Design Iteration 10: 𝑓𝑜𝑟 𝑙𝑜𝑜𝑝: 𝑖 = 10 𝑖𝑛𝑠𝑒𝑟𝑡 𝐴 𝑖 = 𝐴 10 𝑖𝑛𝑡𝑜 𝑙𝑖𝑠𝑡 𝐵 𝑛𝐴 𝑖 = 𝐵 10 ∗ 𝐴 10 = 𝐵 10 ∗ 0.68 = 𝐵 6.8 = 𝐵 6
  • 143. Algorithm Design 𝑐𝑜𝑛𝑐𝑎𝑡𝑒𝑛𝑎𝑡𝑒 𝑡ℎ𝑒 𝑙𝑖𝑠𝑡𝑠 𝐵 0 , 𝐵 1 , … , 𝐵 𝑛 − 1 𝑡𝑜𝑔𝑒𝑡ℎ𝑒𝑟 𝑖𝑛 𝑜𝑟𝑑𝑒𝑟 B 1 2 3 4 5 6 7 8 9 10 0.12 0.17 0.21 0.23 0.26 0.39 0.68 0.72 0.78 0.94
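A Python sketch of the procedure just traced; sorted() stands in here for the per-bucket insertion sort, which does not change the asymptotic argument:

    def bucket_sort(A):
        # BUCKET-SORT for keys uniform on [0, 1): n buckets, then concatenate.
        n = len(A)
        B = [[] for _ in range(n)]          # B[i] holds keys in [i/n, (i+1)/n)
        for x in A:
            B[int(n * x)].append(x)
        out = []
        for bucket in B:
            out.extend(sorted(bucket))      # stand-in for the per-bucket sort
        return out

    print(bucket_sort([0.78, 0.17, 0.39, 0.26, 0.72, 0.94,
                       0.21, 0.12, 0.23, 0.68]))
    # [0.12, 0.17, 0.21, 0.23, 0.26, 0.39, 0.68, 0.72, 0.78, 0.94]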
  • 144. Algorithm Analysis cost (Best Case) cost (Worst Case) cost (Average Case) times 𝑐1 𝑐1 𝑐1 1 𝑐2 𝑐2 𝑐2 1 𝑐3 𝑐3 𝑐3 𝑛 + 1 𝑐4 𝑐4 𝑐4 𝑛 𝑐5 𝑐5 𝑐5 𝑛 + 1 𝑐6 𝑐6 𝑐6 𝑛 𝑐7 𝑐7 𝑐7 𝑛 + 1 𝛩(𝑛) Θ(𝑛2) Θ(𝑛2) 𝑛 𝑐9 𝑐9 𝑐9 1
  • 145. Algorithm Analysis • Average Case (the per-bucket sorting line costs Θ(n) per execution in this accounting) • T(n) = c1·1 + c2·1 + c3(n + 1) + c4·n + c5(n + 1) + c6·n + c7(n + 1) + Θ(n)·n + c9·1 • = Θ(n²) + (c3 + c4 + c5 + c6 + c7)n + (c1 + c2 + c3 + c5 + c7 + c9) • = Θ(n²) + an + b • = Θ(n²) • (In the Best Case each bucket holds one element, the per-bucket sorts cost Θ(1) each, and T(n) = Θ(n).)
  • 146. Algorithm Analysis • Worst Case (a bucket may hold its elements in non-ascending order, so a per-bucket insertion sort costs Θ(n²) in this accounting) • T(n) = c1·1 + c2·1 + c3(n + 1) + c4·n + c5(n + 1) + c6·n + c7(n + 1) + Θ(n²)·n + c9·1 • = Θ(n³) + (c3 + c4 + c5 + c6 + c7)n + (c1 + c2 + c3 + c5 + c7 + c9) • = Θ(n³) + an + b • = Θ(n³)
  • 147. Algorithm Analysis • Best Case: 𝑇 𝑛 = 𝛩(𝑛) i. Each Bucket contains exactly one element ii. One Bucket contains all elements in ascending order • Average Case: 𝑇 𝑛 = 𝛩(𝑛2) i. One Bucket contains all elements in non-ascending order ii. Each Bucket contains from 2 to 𝑛 − 2 elements in ascending order • Worst Case: 𝑇 𝑛 = 𝛩(𝑛3 ) i. Each Bucket contains from 2 to 𝑛 − 2 elements in non-ascending order
  • 148. Comparison • Name | Best Case | Average Case | Worst Case | Space Complexity • Insertion | O(n) | O(n²) | O(n²) | O(1) • Merge | O(n log n) | O(n log n) | O(n log n) | O(n) • Heap | O(n log n) | O(n log n) | O(n log n) | O(1) • Quick | O(n log n) | O(n log n) | O(n²) | O(log n) • Counting | O(n) | O(n) | O(n) | O(n + k) • Radix | O(n) | O(n) | O(n) | O(n + k) • Bucket | O(n) | O(n²) | O(n³) | O(n)
  • 149. Dynamic Programming • “Programming” refers to a tabular method, and not computer code • Application: • Optimization Problems: Multiple solutions might exist, out of which “an” instead of “the” optimal solution is acquired • Sorting: Optimization Problem? • Core Components of Optimization Problem for Dynamic Programming to be applied upon: 1. Optimal Substructure 2. Overlapping Sub-problems
  • 150. Dynamic Programming 1. Optimal Substructure: Optimal solution(s) to a problem incorporate optimal solutions to related sub-problems, which we may solve independently • How to discover Optimal Substructure in a problem: i. Show that a solution to the problem consists of making a choice ii. Suppose that an optimal solution to the problem consists of making a choice iii. Determine which sub-problems result due to Step 2 iv. Show that solutions to the sub-problems used within an optimal solution to the problem must themselves be optimal
  • 151. Dynamic Programming • Optimal Substructure varies in two ways: i. How many sub-problems an optimal solution to the problem uses? ii. How many choices we have in determining which sub-problems to use in an optimal solution?
  • 152. Dynamic Programming 2. Overlapping Sub-problems: The space of sub-problems must be “small” in the sense that a recursive algorithm for the problem solves the same sub- problems over and over, rather than always generating new sub-problems • Total number of distinct sub-problems is polynomial in 𝑛 • Divide-and-Conquer approach generates brand-new (non-overlapping) problems at each step of the recursion • Dynamic Programming approach takes advantage of overlapping sub-problems by solving each sub-problem once and then storing its solution in a table where it can be looked up when needed, using constant time per lookup
  • 153. Dynamic Programming S# Characteristic Divide-and-Conquer Dynamic Programming 1 Problems Non-Optimization Optimization 2 Sub-problems (Divide) Disjoint Overlapping 3 Solves sub-problems (Conquer) Recursively and Repeatedly Recursively but Only once 4 Saves solutions to sub-problems No Table 5 Combines solutions to sub-problems (Combine) Yes Yes 6 Time Efficient Less More 7 Space Efficient More Less
  • 154. Dynamic Programming • When developing a Dynamic Programming algorithm, we follow a sequence of four steps: 1. Characterize the structure of an optimal solution • Find the Optimal Substructure 2. Recursively define the value of an optimal solution • Define the cost of an optimal solution recursively in terms of the optimal solutions to sub-problems 3. Compute the value of an optimal solution, typically in a bottom-up fashion • Write an algorithm to compute the value of an optimal solution 4. Construct an optimal solution from computed information • An optional step
  • 155. Dynamic Programming • Total Running Time: • Depends on the product of two factors: 1. Total number of sub-problems 2. Number of choices for each sub-problem
  • 156. Rod Cutting • Cutting a Steel Rod into rods of smaller length in a way that maximizes their total value • Serling Enterprises buys long steel rods and cuts them into shorter rods, which it then sells. Each cut is free. The management of Serling Enterprises wants to know the best way to cut up the rods. • We assume that we know, for 𝑖 = 1, 2, … the price 𝑝𝑖 in dollars that Serling Enterprises charges for a rod of length 𝑖 inches. Rod lengths are always an integral number of inches.
  • 158. Rod Cutting: Design • Method 𝟏: Possible Combinations • Consider the case when 𝒏 = 𝟒 • Figure shows all the unique ways (8) to cut up a rod of 4 inches in length, including the way with no cuts at all • Cutting a 4-inch rod into two 2-inch pieces produces revenue 𝑝2 + 𝑝2 = 5 + 5 = 10, which is optimal • Total Possible Combinations of cutting up a rod of length 𝑛 = 𝟐𝒏−𝟏 • 𝑇 𝑛 = 𝛩(𝟐𝒏−𝟏) = 𝛩(𝟐𝒏)
  • 162. Comparative Analysis of Methods S# Method/ Case General Best Average Worst 1 Possible Combinations 𝜣(𝟐𝒏 ) - - -
  • 163. Rod Cutting: Design • Method 𝟐: Equation 1 • Top-down • Recursive
  • 164. Rod Cutting: Design • 𝑝𝑛 corresponds to making no cuts at all and selling the rod of length 𝑛 as is • Other 𝑛 − 1 arguments correspond to the revenue obtained by making an initial cut of the rod into two pieces of size 𝑖 and 𝑛 − 𝑖, for each 𝑖 = 1,2, … , 𝑛 − 1, and then optimally cutting up those pieces further, obtaining revenues 𝑟𝑖 and 𝑟𝑛−𝑖 from those two pieces • Since we don’t know ahead of time which value of 𝑖 optimizes revenue, we must consider all possible values of 𝑖 and pick the one that maximizes revenue • We also have the option of picking no 𝑖 at all if we can obtain more revenue by selling the rod uncut
  • 165. Rod Cutting: Design • Consider the case when n = 5 • r5 = max(p5, r1 + r4, r2 + r3, r3 + r2, r4 + r1) • r1 = max(p1) = max(1) = 1 • r2 = max(p2, r1 + r1) = max(5, 1 + 1) = max(5, 2) = 5 • r3 = max(p3, r1 + r2, r2 + r1) = max(8, 1 + 5, 5 + 1) = max(8, 6, 6) = 8 • r4 = max(p4, r1 + r3, r2 + r2, r3 + r1) = max(9, 1 + 8, 5 + 5, 8 + 1) = max(9, 9, 10, 9) = 10 • r5 = max(10, 1 + 10, 5 + 8, 8 + 5, 10 + 1) = max(10, 11, 13, 13, 11) = 13 • Tracing back, the optimal solution is 5 = 2 + 3
  • 166. Rod Cutting: Design • To solve the original problem of size 𝒏, we solve problems of the same type, but of smaller sizes • Once we make the first cut, we may consider the two pieces as independent instances of the rod-cutting problem • Which Method is better?
  • 167. Rod Cutting: Analysis (figure: recursion tree of sub-problems for n = 5 under Equation 1)
  • 168. Rod Cutting: Analysis • For n = 5, total problems = 1 + 78 = 79 ≈ Θ(2^{n+1}) = Θ(2^n) • Node size | Number of sub-problems • 1 | 0 • 2 | 2 • 3 | 8 • 4 | 25 • 5 | 8 + 2·2 + 8·2 + 25·2 = 78
  • 169. Rod Cutting: Analysis • Optimal solution for cutting up a rod of length 𝑛 (if we make any cuts at all) uses just one sub-problem (of size 𝑛 − 𝑖), but we must consider 𝒏 − 𝟏 choices for 𝑖 in order to determine which one yields an optimal solution • Optimal way of cutting up a rod of length 𝑛 (if we make any cuts at all) involves optimally cutting up the two pieces resulting from the first cut • Overall optimal solution incorporates optimal solutions to the two related sub-problems, maximizing revenue from each of those two pieces • Rod-cutting problem exhibits Optimal Substructure
  • 170. Comparative Analysis of Methods S# Method/ Case General Best Average Worst 1 Possible Combinations 𝜣(𝟐𝒏 ) - - - 2 Equation 1 𝜣(𝟐𝒏 ) - - -
  • 171. Rod Cutting: Design • Method 𝟑: Equation 2 • Top-down • Recursive • A decomposition is viewed as consisting of a first piece of length 𝑖 cut off the left end, and then a remainder of length 𝑛 − 𝑖 • Only the remainder, and not the first piece, may be further divided • An optimal solution represents the solution based on only one related sub-problem; the remainder, instead of two sub-problems • Simpler than Methods 1 and 2
  • 172. Rod Cutting: Design • Consider the case when n = 5 • r5 = max_{1≤i≤5}(pᵢ + r_{5−i}) = max(p1 + r4, p2 + r3, p3 + r2, p4 + r1, p5 + r0) • r0 = 0 • r1 = max(p1 + r0) = max(1 + 0) = max(1) = 1 • r2 = max(p1 + r1, p2 + r0) = max(1 + 1, 5 + 0) = max(2, 5) = 5 • r3 = max(p1 + r2, p2 + r1, p3 + r0) = max(1 + 5, 5 + 1, 8 + 0) = max(6, 6, 8) = 8 • r4 = max(p1 + r3, p2 + r2, p3 + r1, p4 + r0) = max(1 + 8, 5 + 5, 8 + 1, 9 + 0) = max(9, 10, 9, 9) = 10
  • 173. Rod Cutting: Design • r5 = max(p1 + r4, p2 + r3, p3 + r2, p4 + r1, p5 + r0) = max(1 + 10, 5 + 8, 8 + 5, 9 + 1, 10 + 0) = max(11, 13, 13, 10, 10) = 13 • Tracing back, the optimal solution is 5 = 2 + 3 • Which Method is the best?
  • 174. Rod Cutting: Analysis (figure: recursion tree of sub-problems for n = 5 under Equation 2)
  • 175. Rod Cutting: Analysis • For 𝑛 = 5, total problems = 1 + 31 = 32 = 𝛩(2𝑛) Node size Number of sub-problems 0 0 1 1 2 3 3 7 4 15 5 5 + 15 + 7 + 3 + 1 = 31
  • 176. Comparative Analysis of Methods S# Method/ Case General Best Average Worst 1 Possible Combinations 𝜣(𝟐𝒏 ) - - - 2 Equation 1 𝜣(𝟐𝒏 ) - - - 3 Equation 2 𝜣(𝟐𝒏 ) - - -
  • 177. Rod Cutting: Design • Method 𝟒: Automation of Method 𝟑 • Top-down • Recursive
  • 178. Rod Cutting: Design • Consider the case when 𝒏 = 𝟓 𝐶𝑈𝑇 − 𝑅𝑂𝐷(𝑝, 5) 𝑖𝑓 𝑛 == 0  𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑛 = 5 𝑞 = max −∞, p 1 + CUT − ROD p, 5 − 1 = max −∞, 1 + 𝐶𝑈𝑇 − 𝑅𝑂𝐷 𝑝, 4 = max −∞, 1 + 10 = max −∞, 11 = 11 𝑞 = max 11, p 2 + CUT − ROD p, 5 − 2 = max 11, 5 + 𝐶𝑈𝑇 − 𝑅𝑂𝐷 𝑝, 3 = max 11, 5 + 8 = max 11, 13 = 13
  • 179. Rod Cutting: Design • q = max(13, p[3] + CUT-ROD(p, 5 − 3)) = max(13, 8 + CUT-ROD(p, 2)) = max(13, 8 + 5) = max(13, 13) = 13 • q = max(13, p[4] + CUT-ROD(p, 5 − 4)) = max(13, 9 + CUT-ROD(p, 1)) = max(13, 9 + 1) = max(13, 10) = 13 • q = max(13, p[5] + CUT-ROD(p, 5 − 5)) = max(13, 10 + CUT-ROD(p, 0)) = max(13, 10 + 0) = max(13, 10) = 13 • return q = 13
  • 180. Rod Cutting: Design • CUT-ROD(p, 4) • if n == 0  • q = −∞ • for i = 1 to n = 4 • q = max(−∞, p[1] + CUT-ROD(p, 4 − 1)) = max(−∞, 1 + CUT-ROD(p, 3)) = max(−∞, 1 + 8) = max(−∞, 9) = 9 • q = max(9, p[2] + CUT-ROD(p, 4 − 2)) = max(9, 5 + CUT-ROD(p, 2)) = max(9, 5 + 5) = max(9, 10) = 10
  • 181. Rod Cutting: Design • q = max(10, p[3] + CUT-ROD(p, 4 − 3)) = max(10, 8 + CUT-ROD(p, 1)) = max(10, 8 + 1) = max(10, 9) = 10 • q = max(10, p[4] + CUT-ROD(p, 4 − 4)) = max(10, 9 + CUT-ROD(p, 0)) = max(10, 9 + 0) = max(10, 9) = 10 • return q = 10
  • 182. Rod Cutting: Design 𝐶𝑈𝑇 − 𝑅𝑂𝐷(𝑝, 3) 𝑖𝑓 𝑛 == 0  𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑛 = 3 𝑞 = max −∞, p 1 + CUT − ROD p, 3 − 1 = max −∞, 1 + 𝐶𝑈𝑇 − 𝑅𝑂𝐷 𝑝, 2 = max −∞, 1 + 5 = max −∞, 6 = 6 𝑞 = max 6, p 2 + CUT − ROD p, 3 − 2 = max 6, 5 + 𝐶𝑈𝑇 − 𝑅𝑂𝐷 𝑝, 1 = max 6, 5 + 1 = max 6, 6 = 6 𝑞 = max 6, p 3 + CUT − ROD p, 3 − 3 = max 6, 8 + 𝐶𝑈𝑇 − 𝑅𝑂𝐷 𝑝, 0 = max 6, 8 + 0 = max 6, 8 = 8 𝑟𝑒𝑡𝑢𝑟𝑛 8
  • 183. Rod Cutting: Design 𝐶𝑈𝑇 − 𝑅𝑂𝐷(𝑝, 2) 𝑖𝑓 𝑛 == 0  𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑛 = 2 𝑞 = max −∞, p 1 + CUT − ROD p, 2 − 1 = max −∞, 1 + 𝐶𝑈𝑇 − 𝑅𝑂𝐷 𝑝, 1 = max −∞, 1 + 1 = max −∞, 2 = 2 𝑞 = max 2, p 2 + CUT − ROD p, 2 − 2 = max 2, 5 + 𝐶𝑈𝑇 − 𝑅𝑂𝐷 𝑝, 0 = max 2, 5 + 0 = max 2, 5 = 5 𝑟𝑒𝑡𝑢𝑟𝑛 5
  • 184. Rod Cutting: Design • CUT-ROD(p, 1) • if n == 0  • q = −∞ • for i = 1 to n = 1 • q = max(−∞, p[1] + CUT-ROD(p, 1 − 1)) = max(−∞, 1 + CUT-ROD(p, 0)) = max(−∞, 1 + 0) = max(−∞, 1) = 1 • return q = 1
  • 185. Rod Cutting: Design 𝐶𝑈𝑇 − 𝑅𝑂𝐷(𝑝, 0) 𝑖𝑓 𝑛 == 0 𝑟𝑒𝑡𝑢𝑟𝑛 0
  • 187. Rod Cutting: Analysis • General Case • T(n) = c1·1 + c3·1 + c4(n + 1) + c5(2^n − 1) + c6·1 • = c5·2^n + c4·n + (c1 + c3 + c4 − c5 + c6) • = a·2^n + bn + c • = Θ(2^n) • cost times: c1: 1, c2: 0, c3: 1, c4: n + 1, c5: 2^n − 1, c6: 1
  • 188. Rod Cutting: Analysis • Each node label gives the size 𝒏 of the corresponding immediate sub-problems • An edge from parent 𝒔 to child 𝒕 corresponds to cutting off an initial piece of size 𝒔 − 𝒕, and leaving a remaining sub-problem of size 𝒕 • Total nodes = 𝟐𝒏 • Total leaves = 𝟐𝒏−𝟏 • Total number of paths from root to a leaf = 𝟐𝒏−𝟏 • Total ways of cutting up a rod of length 𝑛 = 𝟐𝒏−𝟏 = Possible Combinations (Method 1)
  • 189. Rod Cutting: Analysis • Each node label represents the number of immediate calls made to CUT-ROD by that node • 𝑪𝑼𝑻 − 𝑹𝑶𝑫 𝒑, 𝒏 calls 𝑪𝑼𝑻 − 𝑹𝑶𝑫 𝒑, 𝒏 − 𝒊 for 𝑖 = 1, 2, … . , 𝑛 (top to down, left to right in graph) • 𝑪𝑼𝑻 − 𝑹𝑶𝑫 𝒑, 𝒏 calls 𝑪𝑼𝑻 − 𝑹𝑶𝑫 𝒑, 𝒋 for 𝑗 = 0, 1, … . , 𝑛 − 1 (top to down, right to left in graph) • Let 𝑻(𝒏) denote the total number of calls made to CUT-ROD, when called with its second parameter equal to 𝑛 • 𝑻(𝒏) equals the sum of number of nodes in sub-trees whose root is labeled 𝑛 in the recursion tree • One call to CUT-ROD is made at the root: 𝑇(0) = 1
  • 190. Rod Cutting: Analysis • T(j) denotes the total number of calls (including recursive calls) made due to CUT-ROD(p, n − i), where j = n − i • T(n) = 1 + Σ_{j=0}^{n−1} T(j) • T(n) = 1 + Σ_{j=0}^{n−1} 2^j = 1 + (2^0 + 2^1 + ⋯ + 2^{n−1}) = 1 + (2^n − 1) = 2^n = Θ(2^n) • Running Time of CUT-ROD is exponential • For each unit increment in n, the program's running time doubles
  • 191. Comparative Analysis of Methods S# Method/ Case General Best Average Worst 1 Possible Combinations 𝜣(𝟐𝒏 ) - - - 2 Equation 1 𝜣(𝟐𝒏 ) - - - 3 Equation 2 𝜣(𝟐𝒏 ) - - - 4 Automation of Method 𝟑 𝜣(𝟐𝒏 ) - - -
  • 192. Rod Cutting: Design • Dynamic Programming • Each sub-problem is solved only once, and the solution is saved • Look up the solution in constant time, rather than re-compute it • Time-Space trade-off • Additional memory is used to save computation time • Exponential-time solution may be transformed into Polynomial-time solution i. Total number of distinct sub-problems is polynomial in 𝑛 ii. Each sub-problem can be solved in polynomial time 1. Top-down with Memoization 2. Bottom-up Method
  • 193. Rod Cutting: Design • Method 𝟓: Top-down with Memoization • Top-down • Recursive • Saves solutions of all sub-problems • Solves each sub-problem only once • Memoized: • Solutions initially contain special values to indicate that the solutions need to be computed • Remembers solutions computed earlier • Checks whether solutions of sub-problems have been saved earlier • Memoized version of Method 4
  • 195. Rod Cutting: Design • Consider the case when 𝒏 = 𝟓 𝑀𝐸𝑀𝑂𝐼𝑍𝐸𝐷 − 𝐶𝑈𝑇 − 𝑅𝑂𝐷 (𝑝, 5) 𝑙𝑒𝑡 𝑟 0 … 5 𝑏𝑒 𝑎 𝑛𝑒𝑤 𝑎𝑟𝑟𝑎𝑦 r 𝑓𝑜𝑟 𝑖 = 0 𝑡𝑜 𝑛 = 5 0 1 2 3 4 5 𝑟 0 = −∞ 𝑟 1 = −∞ 𝑟 2 = −∞ 𝑟 3 = −∞ r 𝑟 4 = −∞ 0 1 2 3 4 5 𝑟 5 = −∞ 𝑟𝑒𝑡𝑢𝑟𝑛 MEMOIZED−CUT−ROD−AUX (p, 5, r) = 13 −∞ −∞ −∞ −∞ −∞ −∞
  • 196. Rod Cutting: Design • Tracing back the optimal solution 5 = 2 + 3 • Which Method is the best?
  • 197. Rod Cutting: Design • MEMOIZED-CUT-ROD-AUX(p, 5, r) • if r[5] = −∞ ≥ 0  • if n == 0  • else q = −∞ • for i = 1 to n = 5 • q = max(−∞, p[1] + MEMOIZED-CUT-ROD-AUX(p, 4, r)) = max(−∞, 1 + 10) = max(−∞, 11) = 11 • q = max(11, p[2] + MEMOIZED-CUT-ROD-AUX(p, 3, r)) = max(11, 5 + 8) = max(11, 13) = 13 • q = max(13, p[3] + MEMOIZED-CUT-ROD-AUX(p, 2, r)) = max(13, 8 + 5) = max(13, 13) = 13
  • 198. Rod Cutting: Design • q = max(13, p[4] + MEMOIZED-CUT-ROD-AUX(p, 1, r)) = max(13, 9 + 1) = max(13, 10) = 13 • q = max(13, p[5] + MEMOIZED-CUT-ROD-AUX(p, 0, r)) = max(13, 10 + 0) = max(13, 10) = 13 • r[5] = q = 13 • return q = 13 • r (0 1 2 3 4 5): 0 1 5 8 10 13
  • 199. Rod Cutting: Design MEMOIZED−CUT−ROD−AUX (p, 4, r) 𝑖𝑓 𝑟 4 = −∞ ≥ 0  𝑖𝑓 𝑛 == 0  𝑒𝑙𝑠𝑒 𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑛 = 4 𝑞 = max(−∞, p[1] + MEMOIZED−CUT−ROD−AUX (p, 3, r)) = max( − ∞, 1+8) = max( − ∞, 9) = 9 𝑞 = max(9, p[2] + MEMOIZED−CUT−ROD−AUX (p, 2, r)) = max(9, 5+5) = max(9, 10) = 10 𝑞 = max(10, p[3] + MEMOIZED−CUT−ROD−AUX (p, 1, r)) = max(10, 8+1) = max(10, 9) = 10
  • 200. 𝑞 = max(10, p[4] + MEMOIZED−CUT−ROD−AUX (p, 0, r)) = max(10, 9+0) = max(10, 9)=10 𝑟 4 = 𝑞 = 10 r 𝑟𝑒𝑡𝑢𝑟𝑛 𝑞 = 10 0 1 2 3 4 5 Rod Cutting: Design 0 1 5 8 10 −∞
  • 201. Rod Cutting: Design • MEMOIZED-CUT-ROD-AUX(p, 3, r) • if r[3] = −∞ ≥ 0  • if n == 0  • else q = −∞ • for i = 1 to n = 3 • q = max(−∞, p[1] + MEMOIZED-CUT-ROD-AUX(p, 2, r)) = max(−∞, 1 + 5) = max(−∞, 6) = 6 • q = max(6, p[2] + MEMOIZED-CUT-ROD-AUX(p, 1, r)) = max(6, 5 + 1) = max(6, 6) = 6
  • 202. Rod Cutting: Design • q = max(6, p[3] + MEMOIZED-CUT-ROD-AUX(p, 0, r)) = max(6, 8 + 0) = max(6, 8) = 8 • r[3] = q = 8 • return q = 8 • r (0 1 2 3 4 5): 0 1 5 8 −∞ −∞
  • 203. Rod Cutting: Design MEMOIZED−CUT−ROD−AUX (p, 2, r) 𝑖𝑓 𝑟 2 = −∞ ≥ 0  𝑖𝑓 𝑛 == 0  𝑒𝑙𝑠𝑒 𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑛 = 2 𝑞 = max −∞, p[1] + MEMOIZED−CUT−ROD−AUX (p, 1, r)) = max( − ∞, 1 + 1 = max −∞, 2 = 2 𝑞 = max 2, p[2] + MEMOIZED−CUT−ROD−AUX (p, 0, r)) = max(2, 5 + 0 = max 2, 5 = 5 𝑟 2 = 𝑞 = 5 r 𝑟𝑒𝑡𝑢𝑟𝑛 𝑞 = 5 0 1 2 3 4 5 0 1 5 −∞ −∞ −∞
  • 204. 0 1 −∞ −∞ −∞ −∞ Rod Cutting: Design MEMOIZED−CUT−ROD−AUX (p, 1, r) 𝑖𝑓 𝑟 1 = −∞ ≥ 0  𝑖𝑓 𝑛 == 0  𝑒𝑙𝑠𝑒 𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑛 = 1 𝑞 = max −∞, p[1] + MEMOIZED−CUT−ROD−AUX (p, 0, r)) = max( − ∞, 1 + 0 = max −∞, 1 = max 1 = 1 𝑟 1 = 𝑞 = 1 r 𝑟𝑒𝑡𝑢𝑟𝑛 𝑞 = 1 0 1 2 3 4 5
  • 205. 0 −∞ −∞ −∞ −∞ −∞ Rod Cutting: Design MEMOIZED−CUT−ROD−AUX (p, 0, r) 𝑖𝑓 𝑟 0 = −∞ ≥ 0  𝑖𝑓 𝑛 == 0 𝑞 = 0 𝑟 0 = 𝑞 = 0 r 𝑟𝑒𝑡𝑢𝑟𝑛 𝑞 = 0 0 1 2 3 4 5
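The memoized version as a Python sketch, using −∞ as the "not yet computed" sentinel exactly as in the trace (prices p[0..5] are the example's price table):

    def memoized_cut_rod(p, n):
        # MEMOIZED-CUT-ROD: p[i] is the price of a rod of length i (p[0] = 0).
        r = [float("-inf")] * (n + 1)       # sentinel: solution not yet computed
        return _cut_rod_aux(p, n, r)

    def _cut_rod_aux(p, n, r):
        if r[n] >= 0:
            return r[n]                     # solved earlier: constant-time lookup
        if n == 0:
            q = 0
        else:
            q = float("-inf")
            for i in range(1, n + 1):       # first piece of length i, rest solved recursively
                q = max(q, p[i] + _cut_rod_aux(p, n - i, r))
        r[n] = q
        return q

    p = [0, 1, 5, 8, 9, 10]                 # prices for lengths 0..5
    print(memoized_cut_rod(p, 5))           # 13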
  • 206. Rod Cutting: Analysis • Top-down • Left-to-Right (figure: sub-problem graph for n = 5)
  • 207. Rod Cutting: Analysis • MEMOIZED-CUT-ROD — cost times: c1: 1, c2: n + 2, c3: n + 1, T_AUX(n): 1 • MEMOIZED-CUT-ROD-AUX — cost times: c1: 1, c2: 0, c3: 1, c4: 0, c5: 1, c6: n + 1, c7: n(n + 1)/2, c8: 1, c9: 1
  • 208. Rod Cutting: Analysis • General Case • MEMOIZED-CUT-ROD-AUX(p, n, r): T(n) = c1·1 + c3·1 + c5·1 + c6(n + 1) + c7·n(n + 1)/2 + c8·1 + c9·1 = (c7/2)n² + (c6 + c7/2)n + (c1 + c3 + c5 + c6 + c8 + c9) = an² + bn + c = Θ(n²) • MEMOIZED-CUT-ROD(p, n): T(n) = c1·1 + c2(n + 2) + c3(n + 1) + Θ(n²)·1 = Θ(n²) + (c2 + c3)n + (c1 + 2c2 + c3) = Θ(n²) + an + b = Θ(n²)
  • 209. Rod Cutting: Analysis 𝑇 𝑛 = Total number of sub-problems * Number of choices for each sub-problem = 𝑛 ∗ 𝑛 = 𝜣(𝒏𝟐)
  • 210. Comparative Analysis of Methods S# Method/ Case General Best Average Worst 1 Possible Combinations 𝜣(𝟐𝒏 ) - - - 2 Equation 1 𝜣(𝟐𝒏 ) - - - 3 Equation 2 𝜣(𝟐𝒏 ) - - - 4 Automation of Method 𝟑 𝜣(𝟐𝒏 ) - - - 5 Top-down with Memoization 𝜣(𝒏𝟐) - - -
  • 211. Rod Cutting: Design • Method 𝟔: Bottom-up Method • Bottom-up • Non-Recursive or Iterative • Sorts all sub-problems by size and solves them in that order (smallest first) • When solving a particular sub-problem, all of the smaller sub-problems its solution depends upon, have already been solved • Saves solutions of all sub-problems • Solves each sub-problem only once
  • 213. Rod Cutting: Design • Consider the case when 𝒏 = 𝟓 𝐵𝑂𝑇𝑇𝑂𝑀 − 𝑈𝑃 − 𝐶𝑈𝑇 − 𝑅𝑂𝐷 (𝑝, 5) 𝑙𝑒𝑡 𝑟 0 … 5 𝑏𝑒 𝑎 𝑛𝑒𝑤 𝑎𝑟𝑟𝑎𝑦 r 𝑟[0] = 0 0 1 2 3 4 5 𝑓𝑜𝑟 𝑗 = 1 𝑡𝑜 𝑛 = 5 𝑞 = −∞ r 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑗 = 1 0 1 2 3 4 5 𝑞 = max(−∞, 𝑝 1 + 𝑟[0]) = max −∞, 1 + 0 = max −∞, 1 = 1 𝑟 1 = 𝑞 = 1 r 0 1 2 3 4 5 0 0 1
  • 214. Rod Cutting: Design 𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑗 = 2 𝑞 = max(−∞, 𝑝 1 + 𝑟[1]) = max −∞, 1 + 1 = max −∞, 2 = 2 𝑞 = max(2, 𝑝 2 + 𝑟[0]) = max 2, 5 + 0 = max 2, 5 = 5 𝑟 2 = 𝑞 = 5 r 𝑞 = −∞ 0 1 2 3 4 5 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑗 = 3 𝑞 = max(−∞, 𝑝 1 + 𝑟[2]) = max −∞, 1 + 5 = max −∞, 6 = 6 𝑞 = max(6, 𝑝 2 + 𝑟[1]) = max 6, 5 + 1 = max 6, 6 = 6 𝑞 = max(6, 𝑝 3 + 𝑟[0]) = max 6, 8 + 0 = max 6, 8 = 8 𝑟 3 = 𝑞 = 8 0 1 5
  • 215. Rod Cutting: Design r 𝑞 = −∞ 0 1 2 3 4 5 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑗 = 4 𝑞 = max(−∞, 𝑝 1 + 𝑟[3]) = max −∞, 1 + 8 = max −∞, 9 = 9 𝑞 = max(9, 𝑝 2 + 𝑟[2]) = max 9, 5 + 5 = max 9, 10 = 10 𝑞 = max(10, 𝑝 3 + 𝑟[1]) = max 10, 8 + 1 = max 10, 9 = 10 𝑞 = max(10, 𝑝 4 + 𝑟[0]) = max 10, 9 + 0 = max 10, 9 = 10 𝑟 4 = 𝑞 = 10 r 0 1 2 3 4 5 0 1 5 8 0 1 5 8 10
  • 216. Rod Cutting: Design 𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑗 = 5 𝑞 = max(−∞, 𝑝 1 + 𝑟[4]) = max −∞, 1 + 10 = max −∞, 11 = 11 𝑞 = max(11, 𝑝 2 + 𝑟[3]) = max 11, 5 + 8 = max 11, 13 = 13 𝑞 = max(13, 𝑝 3 + 𝑟[2]) = max 13, 8 + 5 = max 13, 13 = 13 𝑞 = max(13, 𝑝 4 + 𝑟[1]) = max 13, 9 + 1 = max 13, 10 = 13 𝑞 = max(13, 𝑝 5 + 𝑟[0]) = max 13, 10 + 0 = max 13, 10 = 13 𝑟 5 = 𝑞 = 13 𝑟𝑒𝑡𝑢𝑟𝑛 𝑟 5 = 13 r 0 1 2 3 4 5 • Tracing back the optimal solution 5 = 2 + 3 • Which Method is the best? 0 1 5 8 1013
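The same computation as an iterative Python sketch (BOTTOM-UP-CUT-ROD), solving sub-problems in increasing size order:

    def bottom_up_cut_rod(p, n):
        r = [0] * (n + 1)
        for j in range(1, n + 1):           # sub-problem of size j
            q = float("-inf")
            for i in range(1, j + 1):       # best first cut of length i
                q = max(q, p[i] + r[j - i]) # r[j - i] is already final
            r[j] = q
        return r[n]

    p = [0, 1, 5, 8, 9, 10]
    print(bottom_up_cut_rod(p, 5))          # 13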
  • 217. Rod Cutting: Analysis • Top-down • Right-to-Left (figure: sub-problem graph for n = 5)
  • 218. Rod Cutting: Analysis • cost times: c1: 1, c2: 1, c3: n + 1, c4: n, c5: n(n + 1)/2 + n, c6: n(n + 1)/2, c7: n, c8: 1
  • 219. Rod Cutting: Analysis • General Case • T(n) = c1·1 + c2·1 + c3(n + 1) + c4·n + c5(n(n + 1)/2 + n) + c6·n(n + 1)/2 + c7·n + c8·1 • = (c5/2 + c6/2)n² + (c3 + c4 + 3c5/2 + c6/2 + c7)n + (c1 + c2 + c3 + c8) • = an² + bn + c = Θ(n²)
  • 220. Rod Cutting: Analysis 𝑇 𝑛 = Total number of sub-problems * Number of choices for each sub-problem = 𝑛 ∗ 𝑛 = 𝜣(𝒏𝟐)
  • 221. Comparative Analysis of Methods S# Method/ Case General Best Average Worst 1 Possible Combinations 𝜣(𝟐𝒏 ) - - - 2 Equation 1 𝜣(𝟐𝒏 ) - - - 3 Equation 2 𝜣(𝟐𝒏 ) - - - 4 Automation of Method 𝟑 𝜣(𝟐𝒏 ) - - - 5 Top-down with Memoization 𝜣(𝒏𝟐) - - - 6 Bottom-Up Method 𝜣(𝒏𝟐) - - -
  • 222. Rod Cutting: Analysis S# Characteristic Top-down with Memoization Bottom-up Method 1 Strategy Top-down Bottom-up 2 Type Recursive Iterative 3 Memoized Yes No 4 Sorts all sub-problems No Yes 5 Solves sub-problems Top-down, Left-to-Right Top-down, Right-to-Left 6 Running Time Θ(𝑛2) Θ(𝑛2)
  • 223. Rod Cutting: Analysis • Sub-problem Graphs • G = (V, E) • A reduced or collapsed version of the Recursion Tree • All nodes with the same label are collapsed into a single vertex • All edges go from parent to child • Each vertex label represents the size of the corresponding sub-problem, and each directed edge (x, y) indicates the need for an optimal solution to sub-problem y when determining an optimal solution to sub-problem x • Each vertex corresponds to a distinct sub-problem, and the choices for a sub-problem are the edges incident to that sub-problem • The running time is proportional to the size of the graph: T(n) = Θ(V + E) = Θ(n + n²) = Θ(n²)
  • 225. Rod Cutting: Design • Method 7: Bottom-Up Method with Optimal Solution • Determines Optimal Solution (along with Optimal Value) • Determines Optimal Size of the first piece to cut off • Extension of Method 6
  • 227. Rod Cutting: Design • Consider the case when 𝒏 = 𝟓 𝐸𝑋𝑇𝐸𝑁𝐷𝐸𝐷 − 𝐵𝑂𝑇𝑇𝑂𝑀 − 𝑈𝑃 − 𝐶𝑈𝑇 − 𝑅𝑂𝐷 (𝑝, 5) 𝑙𝑒𝑡 𝑟 0 … 5 𝑎𝑛𝑑 𝑠 0 … 5 𝑏𝑒 𝑛𝑒𝑤 𝑎𝑟𝑟𝑎𝑦𝑠 𝑟[0] = 0, 𝑠[0] = 0 𝑓𝑜𝑟 𝑗 = 1 𝑡𝑜 𝑛 = 5 𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑗 = 1 𝑖𝑓 𝑞 = −∞ < 𝑝 1 + 𝑟[0] = 1 + 0 = 1 𝑞 = 1 𝑠 1 = 1 𝑟 1 = 𝑞 = 1 0 0 1 0 0 1 r 0 1 2 3 4 5 s 0 1 2 3 4 5
  • 228. Rod Cutting: Design 𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑗 = 2 𝑖𝑓 𝑞 = −∞ < 𝑝 1 + 𝑟[1] = 1 + 1 = 2 𝑞 = 2 s 𝑠 2 = 1 0 1 2 3 4 5 𝑖𝑓 𝑞 = 2 < 𝑝 2 + 𝑟[0] = 5 + 0 = 5 𝑞 = 5 s 𝑠 2 = 2 0 1 2 3 4 5 𝑟 2 = 𝑞 = 5 r 0 1 2 3 4 5 0 1 5 0 1 2 0 1 1
  • 229. Rod Cutting: Design 𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑗 = 3 𝑖𝑓 𝑞 = −∞ < 𝑝 1 + 𝑟[2] = 1 + 5 = 6 𝑞 = 6 s 𝑠 3 = 1 0 1 2 3 4 5 𝑖𝑓 𝑞 = 6 < 𝑝 2 + 𝑟[1] = 5 + 1 = 6  𝑖𝑓 𝑞 = 6 < 𝑝 3 + 𝑟[0] = 8 + 0 = 8 𝑞 = 8 s 𝑠 3 = 3 0 1 2 3 4 5 𝑟 3 = 𝑞 = 8 r 0 1 2 3 4 5 0 1 5 8 0 1 2 1 0 1 2 3
  • 230. Rod Cutting: Design 𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑗 = 4 𝑖𝑓 𝑞 = −∞ < 𝑝 1 + 𝑟[3] = 1 + 8 = 9 𝑞 = 9 s 𝑠 4 = 1 0 1 2 3 4 5 𝑖𝑓 𝑞 = 9 < 𝑝 2 + 𝑟[2] = 5 + 5 = 10 𝑞 = 10 𝑠 4 = 2 s 𝑖𝑓 𝑞 = 10 < 𝑝 3 + 𝑟[1] = 8 + 1 = 9  0 1 2 3 4 5 𝑖𝑓 𝑞 = 10 < 𝑝 4 + 𝑟[0] = 9 + 0 = 9  r 𝑟 4 = 𝑞 = 10 0 1 2 3 4 5 0 1 5 8 10 0 1 2 3 1 0 1 2 3 2
  • 231. Rod Cutting: Design 𝑞 = −∞ 𝑓𝑜𝑟 𝑖 = 1 𝑡𝑜 𝑗 = 5 𝑖𝑓 𝑞 = −∞ < 𝑝 1 + 𝑟[4] = 1 + 10 = 11 𝑞 = 11 s 𝑠 5 = 1 0 1 2 3 4 5 𝑖𝑓 𝑞 = 11 < 𝑝 2 + 𝑟[3] = 5 + 8 = 13 𝑞 = 13 𝑠 5 = 2 s 𝑖𝑓 𝑞 = 13 < 𝑝 3 + 𝑟[2] = 8 + 5 = 13  0 1 2 3 4 5 𝑖𝑓 𝑞 = 13 < 𝑝 4 + 𝑟[1] = 9 + 1 = 10  𝑖𝑓 𝑞 = 13 < 𝑝 5 + 𝑟[0] = 10 + 0 = 10  𝑟 5 = 𝑞 = 13 r 𝑟𝑒𝑡𝑢𝑟𝑛 𝑟 𝑎𝑛𝑑 𝑠 0 1 2 3 4 5 0 1 5 8 1013 0 1 2 3 2 1 0 1 2 3 2 2
  • 232. Rod Cutting: Analysis • Top-down • Right-to-Left (figure: sub-problem graph for n = 5)
  • 233. Rod Cutting: Analysis • cost times: c1: 1, c2: 1, c3: n + 1, c4: n, c5: n(n + 1)/2 + n, c6: n(n + 1)/2, c7: n(n + 1)/4, c8: n(n + 1)/4, c9: n, c10: 1
  • 234. Rod Cutting: Analysis • General Case • T(n) = c1·1 + c2·1 + c3(n + 1) + c4·n + c5(n(n + 1)/2 + n) + c6·n(n + 1)/2 + c7·n(n + 1)/4 + c8·n(n + 1)/4 + c9·n + c10·1 • = (c5/2 + c6/2 + c7/4 + c8/4)n² + (c3 + c4 + 3c5/2 + c6/2 + c7/4 + c8/4 + c9)n + (c1 + c2 + c3 + c10) • = an² + bn + c = Θ(n²) • Best Case, Worst Case, or Average Case?
  • 235. Rod Cutting: Analysis cost times (Best Case) times (Worst Case) 𝑐1 1 1 𝑐2 1 1 𝑐3 𝑛 + 1 𝑛 + 1 𝑐4 𝑛 𝑛 𝑐5 𝑛(𝑛 + 1 ) 2 + 𝑛 𝑛(𝑛 + 1 ) 2 + 𝑛 𝑐6 𝑛(𝑛 + 1 ) 2 𝑛(𝑛 + 1 ) 2 𝑐7 1 𝑛(𝑛 + 1 ) 2 𝑐8 1 𝑛(𝑛 + 1 ) 2 𝑐9 𝑛 𝑛 𝑐10 1 1
  • 236. Rod Cutting: Analysis • Best Case • T(n) = c1·1 + c2·1 + c3(n + 1) + c4·n + c5(n(n + 1)/2 + n) + c6·n(n + 1)/2 + c7·1 + c8·1 + c9·n + c10·1 • = (c5/2 + c6/2)n² + (c3 + c4 + 3c5/2 + c6/2 + c9)n + (c1 + c2 + c3 + c7 + c8 + c10) • = an² + bn + c = Θ(n²)
  • 237. Rod Cutting: Analysis • Worst Case • T(n) = c1·1 + c2·1 + c3(n + 1) + c4·n + c5(n(n + 1)/2 + n) + c6·n(n + 1)/2 + c7·n(n + 1)/2 + c8·n(n + 1)/2 + c9·n + c10·1 • = (c5/2 + c6/2 + c7/2 + c8/2)n² + (c3 + c4 + 3c5/2 + c6/2 + c7/2 + c8/2 + c9)n + (c1 + c2 + c3 + c10) • = an² + bn + c = Θ(n²)
  • 238. Rod Cutting: Analysis 𝑇 𝑛 = Total number of sub-problems * Number of choices for each sub-problem = 𝑛 ∗ 𝑛 = 𝜣(𝒏𝟐)
  • 239. Comparative Analysis of Methods S# Method/ Case General Best Average Worst 1 Possible Combinations 𝜣(𝟐𝒏 ) - - - 2 Equation 1 𝜣(𝟐𝒏 ) - - - 3 Equation 2 𝜣(𝟐𝒏 ) - - - 4 Automation of Method 𝟑 𝜣(𝟐𝒏 ) - - - 5 Top-down with Memoization 𝜣(𝒏𝟐) - - - 6 Bottom-Up Method 𝜣(𝒏𝟐) - - - 7 Bottom-Up Method with Optimal Solution 𝜣(𝒏𝟐) 𝜣(𝒏𝟐) 𝜣(𝒏𝟐) 𝜣(𝒏𝟐)
  • 240. Rod Cutting: Design • Method 8: Bottom-Up Method with Optimal Decomposition • Determines the First Optimal Decomposition (along with the Optimal Value) • Determines the Optimal Sizes of all the pieces to cut off • Extension of Method 7
  • 241. Rod Cutting: Design • Consider the case when 𝒏 = 𝟓 𝑃𝑅𝐼𝑁𝑇 − 𝐶𝑈𝑇 − 𝑅𝑂𝐷 − 𝑆𝑂𝐿𝑈𝑇𝐼𝑂𝑁 𝑝, 5 𝑟, 𝑠 = 𝐸𝑋𝑇𝐸𝑁𝐷𝐸𝐷 − 𝐵𝑂𝑇𝑇𝑂𝑀 − 𝑈𝑃 − 𝐶𝑈𝑇 − 𝑅𝑂𝐷 𝑝, 5 = ( 0, 1, 5, 8, 10, 13 , 0, 1, 2, 3, 2, 2 ) 𝑤ℎ𝑖𝑙𝑒 𝑛 = 5 > 0 𝑝𝑟𝑖𝑛𝑡 𝑠 5 = 2 𝑛 = 𝑛 − 𝑠 𝑛 = 5 − 2 = 3 𝑝𝑟𝑖𝑛𝑡 𝑠 3 = 3 𝑛 = 𝑛 − 𝑠 𝑛 = 3 − 3 = 0 • Which Method is the best? 𝑠[5] 2 𝑠[3] 3
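A Python sketch of Methods 7 and 8 together: the extended bottom-up routine additionally records s[j], the optimal size of the first piece for a rod of length j, and the printing routine walks s exactly as traced above (printing 2, then 3, for n = 5):

    def extended_bottom_up_cut_rod(p, n):
        r = [0] * (n + 1)
        s = [0] * (n + 1)                   # s[j] = optimal first-piece size
        for j in range(1, n + 1):
            q = float("-inf")
            for i in range(1, j + 1):
                if q < p[i] + r[j - i]:
                    q = p[i] + r[j - i]
                    s[j] = i                # remember the choice that won
            r[j] = q
        return r, s

    def print_cut_rod_solution(p, n):
        r, s = extended_bottom_up_cut_rod(p, n)
        while n > 0:                        # follow the recorded first cuts
            print(s[n])
            n -= s[n]

    p = [0, 1, 5, 8, 9, 10]
    print_cut_rod_solution(p, 5)            # prints 2, then 3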
  • 243. Rod Cutting: Analysis • Top-down • Right-to-Left (figure: sub-problem graph for n = 5)
  • 244. Rod Cutting: Analysis • cost times: Θ(n²): 1, c2: n/2 + 1, c3: n/2, c4: n/2
  • 245. Rod Cutting: Analysis • General Case • T(n) = Θ(n²)·1 + c2(n/2 + 1) + c3·n/2 + c4·n/2 • = Θ(n²) + (c2/2 + c3/2 + c4/2)n + c2 • = Θ(n²) + an + b = Θ(n²) • Best Case, Worst Case, or Average Case?
  • 246. Rod Cutting: Analysis cost times (Best Case) times (Worst Case) 𝛩(𝑛2) 1 1 𝑐2 2 𝑛 + 1 𝑐3 1 𝑛 𝑐4 1 𝑛
  • 247. Rod Cutting: Analysis • Best Case • T(n) = Θ(n²)·1 + c2·2 + c3·1 + c4·1 = Θ(n²) + 2c2 + c3 + c4 = Θ(n²) + a = Θ(n²) • Worst Case • T(n) = Θ(n²)·1 + c2(n + 1) + c3·n + c4·n = Θ(n²) + (c2 + c3 + c4)n + c2 = Θ(n²) + an + b = Θ(n²)
  • 248. Rod Cutting: Analysis 𝑇 𝑛 = Total number of sub-problems * Number of choices for each sub-problem = 𝑛 ∗ 𝑛 = 𝜣(𝒏𝟐)
  • 249. Comparative Analysis of Methods S# Method/ Case General Best Average Worst 1 Possible Combinations 𝜣(𝟐𝒏 ) - - - 2 Equation 1 𝜣(𝟐𝒏 ) - - - 3 Equation 2 𝜣(𝟐𝒏 ) - - - 4 Automation of Method 𝟑 𝜣(𝟐𝒏 ) - - - 5 Top-down with Memoization 𝜣(𝒏𝟐) - - - 6 Bottom-Up Method 𝜣(𝒏𝟐) - - - 7 Bottom-Up Method with Optimal Solution 𝜣(𝒏𝟐) 𝜣(𝒏𝟐) 𝜣(𝒏𝟐) 𝜣(𝒏𝟐) 8 Bottom-Up Method with Optimal Decomposition 𝜣(𝒏𝟐) 𝜣(𝒏𝟐) 𝜣(𝒏𝟐) 𝜣(𝒏𝟐)
  • 250. Greedy Algorithm • A greedy algorithm is a simple, intuitive algorithm that is used in optimization problems. • The algorithm makes the optimal choice at each step as it attempts to find the overall optimal way to solve the entire problem. • Greedy algorithms are quite successful in some problems, such as Huffman encoding which is used to compress data, or Dijkstra's algorithm, which is used to find the shortest path through a graph.
  • 251. Activity-Selection Problem • The Activity Selection Problem is an optimization problem which deals with the selection of non-conflicting activities that need to be executed by a single person or machine in a given time frame. • Each activity is marked by a start and finish time. The greedy technique is used for finding the solution, since this is an optimization problem. • Let's consider that you have n activities with their start and finish times; the objective is to find a solution set having the maximum number of non-conflicting activities that can be executed in a single time frame, assuming that only one person or machine is available for execution.
  • 252. Activity-Selection Problem • Some points to note here: • It might not be possible to complete all the activities, since their timings can overlap. • Two activities, say i and j, are said to be non-conflicting if si ≥ fj or sj ≥ fi, where si and sj denote the starting times of activities i and j respectively, and fi and fj refer to their finishing times. • A greedy approach can be used to find the solution, since we want to maximize the count of activities that can be executed. This approach greedily chooses an activity with the earliest finish time at every step, yielding an optimal solution.
  • 253. Steps for Activity Selection Problem • Following are the steps we will be following to solve the activity selection problem, • Step 1: Sort the given activities in ascending order according to their finishing time. • Step 2: Select the first activity from sorted array act[] and add it to sol[] array. • Step 3: Repeat steps 4 and 5 for the remaining activities in act[]. • Step 4: If the start time of the currently selected activity is greater than or equal to the finish time of previously selected activity, then add it to the sol[] array. • Step 5: Select the next activity in act[] array. • Step 6: Print the sol[] array.
  • 254. Algorithm • GREEDY-ACTIVITY-SELECTOR (s, f) • n ← length[s] • A ← {1} • j ← 1 • for i ← 2 to n • do if si ≥ fj • then A ← A ∪ {i} • j ← i • return A
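A Python sketch of the selector; because the example on the next slides is not pre-sorted by finish time, this version sorts first (earliest finish first), which reproduces the schedule A1, A3, A4, A6, A7, A9, A10:

    def activity_selection(activities):
        # Greedy earliest-finish-first; activities = [(start, finish), ...].
        chosen = []
        last_finish = float("-inf")
        for start, finish in sorted(activities, key=lambda a: a[1]):
            if start >= last_finish:        # non-conflicting with the last pick
                chosen.append((start, finish))
                last_finish = finish
        return chosen

    s = [1, 2, 3, 4, 7, 8, 9, 9, 11, 12]
    f = [3, 5, 4, 7, 10, 9, 11, 13, 12, 14]
    print(activity_selection(list(zip(s, f))))
    # 7 activities: (1,3) (3,4) (4,7) (8,9) (9,11) (11,12) (12,14),
    # i.e. A1, A3, A4, A6, A7, A9, A10 as in the example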
  • 255. Example • S = (A1 A2 A3 A4 A5 A6 A7 A8 A9 A10) • Si = (1,2,3,4,7,8,9,9,11,12) • fi = (3,5,4,7,10,9,11,13,12,14)
  • 256. Example • Now, schedule A1 • Next schedule A3 as A1 and A3 are non-interfering. • Next skip A2 as it is interfering. • Next, schedule A4 as A1 A3 and A4 are non-interfering, then next, schedule A6 as A1 A3 A4 and A6 are non-interfering. • Skip A5 as it is interfering. • Next, schedule A7 as A1 A3 A4 A6 and A7 are non-interfering. • Next, schedule A9 as A1 A3 A4 A6 A7 and A9 are non-interfering.
  • 257. Example • Skip A8 as it is interfering. • Next, schedule A10 as A1 A3 A4 A6 A7 A9 and A10 are non-interfering. • Thus, the final Activity schedule is: • Activity Selection Problem
  • 259. Time Complexity Analysis • Following are the scenarios for computing the time complexity of the Activity Selection Algorithm: • Case 1: When the given set of activities is already sorted according to finishing time, no sorting is involved, and the complexity of the algorithm is O(n) • Case 2: When the given set of activities is unsorted, we first have to sort the activities list (e.g., with the sort() method defined in the bits/stdc++ header file). The time complexity of this sorting step is O(n log n), which then dominates the complexity of the algorithm.
  • 260. Real-life Applications of Activity Selection Problem • Following are some of the real-life applications of this problem: • Scheduling multiple competing events in a room, such that each event has its own start and end time. • Scheduling manufacturing of multiple products on the same machine, such that each product has its own production timelines. • Activity Selection is one of the most well-known generic problems used in Operations Research for dealing with real-life business problems.
  • 261. Huffman Codes • Every piece of information in computer science is encoded as strings of 1s and 0s. • The objective of information theory is to transmit information using the fewest number of bits, in such a way that every encoding is unambiguous. • This tutorial discusses fixed-length and variable-length encoding, along with Huffman Encoding, which is the basis for all data encoding schemes • Encoding, in computers, can be defined as the process of transmitting or storing sequences of characters efficiently. • Fixed-length and variable-length are two types of encoding schemes
  • 262. Encoding Schemes • Fixed-Length encoding - Every character is assigned a binary code using the same number of bits. Thus, a string like “aabacdad” can require 64 bits (8 bytes) for storage or transmission, if each character uses 8 bits. • Variable-Length encoding - As opposed to fixed-length encoding, this scheme uses a variable number of bits for encoding the characters, depending on their frequency in the given text. Thus, for a given string like “aabacdad”, the frequency of characters ‘a’, ‘b’, ‘c’ and ‘d’ is 4, 1, 1 and 2 respectively. Since ‘a’ occurs more frequently than ‘b’, ‘c’ and ‘d’, it uses the least number of bits, followed by ‘d’, ‘b’ and ‘c’.
  • 263. Example • Suppose we randomly assign binary codes to each character as follows- a 0 b 011 c 111 d 11 • Thus, the string “aabacdad” gets encoded to 00011011111011 (0 | 0 | 011 | 0 | 111 | 11 | 0 | 11), using fewer number of bits compared to fixed-length encoding scheme. • But the real problem lies with the decoding phase. If we try and decode the string 00011011111011, it will be quite ambiguous since, it can be decoded to the multiple strings, few of which are- • aaadacdad (0 | 0 | 0 | 11 | 0 | 111 | 11 | 0 | 11) aaadbcad (0 | 0 | 0 | 11 | 011 | 111 | 0 | 11) aabbcb (0 | 0 | 011 | 011 | 111 | 011)
  • 264. Example • To prevent such ambiguities during decoding, the encoding phase should satisfy the “prefix rule”, which states that no binary code should be a prefix of another code. This produces uniquely decodable codes. The above codes for ‘a’, ‘b’, ‘c’ and ‘d’ do not follow the prefix rule, since the binary code for a, i.e., 0, is a prefix of the binary code for b, i.e., 011, resulting in ambiguously decodable codes. • Let's reconsider assigning the binary codes to characters ‘a’, ‘b’, ‘c’ and ‘d’: • a 0 b 11 c 101 d 100 • Using the above codes, the string “aabacdad” gets encoded to 001101011000100 (0 | 0 | 11 | 0 | 101 | 100 | 0 | 100). Now, we can decode it back to the string “aabacdad”.
  • 265. Huffman Encoding • Huffman Encoding can be used for finding solution to the given problem statement. • Developed by David Huffman in 1951, this technique is the basis for all data compression and encoding schemes • It is a famous algorithm used for lossless data encoding • It follows a Greedy approach, since it deals with generating minimum length prefix-free binary codes • It uses variable-length encoding scheme for assigning binary codes to characters depending on how frequently they occur in the given text. The character that occurs most frequently is assigned the smallest code and the one that occurs least frequently gets the largest code
  • 266. Algorithm Steps • Step 1- Create a leaf node for each character and build a min heap using all the nodes (The frequency value is used to compare two nodes in min heap) • Step 2- Repeat Steps 3 to 5 while heap has more than one node • Step 3- Extract two nodes, say x and y, with minimum frequency from the heap • Step 4- Create a new internal node z with x as its left child and y as its right child. Also, frequency(z)= frequency(x)+frequency(y) • Step 5- Add z to min heap • Step 6- Last node in the heap is the root of Huffman tree
  • 267. Algorithm • Huffman (C) • n=|C| • Q ← C • for i=1 to n-1 • do • z= allocate-Node () • x= left[z]=Extract-Min(Q) • y= right[z] =Extract-Min(Q) • f [z]=f[x]+f[y] • Insert (Q, z) • return Extract-Min (Q)
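A Python sketch of HUFFMAN(C) using heapq as the min-priority queue; the exact bit patterns depend on tie-breaking among equal frequencies, but the resulting code lengths match the table on the next slides:

    import heapq

    def huffman_codes(freq):
        # Build Huffman codes from {char: frequency} with a min-heap.
        # Heap entries: (frequency, tiebreaker, tree); tree = char or (left, right).
        heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        tick = len(heap)
        while len(heap) > 1:
            fx, _, x = heapq.heappop(heap)      # two minimum-frequency nodes
            fy, _, y = heapq.heappop(heap)
            heapq.heappush(heap, (fx + fy, tick, (x, y)))
            tick += 1
        codes = {}
        def walk(node, code):
            if isinstance(node, tuple):
                walk(node[0], code + "0")       # left edge = 0
                walk(node[1], code + "1")       # right edge = 1
            else:
                codes[node] = code or "0"
        walk(heap[0][2], "")
        return codes

    freq = {"a": 10, "e": 15, "i": 12, "o": 3, "u": 4, "s": 13, "t": 1}
    print(huffman_codes(freq))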
  • 268. Example • Characters Frequencies • a 10 • e 15 • i 12 • o 3 • u 4 • s 13 • t 1
  • 272. Example • Characters Binary Codes • i 00 • s 01 • e 10 • u 1100 • t 11010 • o 11011 • a 111
  • 273. Time Complexity • Since Huffman coding uses the min-heap data structure for implementing a priority queue, the complexity is O(n log n). This can be explained as follows: • Building a min heap takes O(n log n) time (moving an element from root to leaf node requires O(log n) comparisons, and this is done for n/2 elements in the worst case). • Since building a min heap and the repeated extract-min operations are executed in sequence, the algorithmic complexity of the entire process computes to O(n log n)
  • 274. Graph • A Graph is a non-linear data structure consisting of nodes and edges. • The nodes are sometimes also referred to as vertices, and the edges are lines or arcs that connect any two nodes in the graph. • More formally, a Graph consists of a finite set of vertices (or nodes) and a set of edges, each of which connects a pair of vertices.
  • 275. Breadth First Search • Breadth-First Traversal (or Search) of a graph is like breadth-first traversal of a tree. • The only catch is that, unlike trees, graphs may contain cycles, so we may reach the same node again. • To avoid processing a node more than once, we use a Boolean visited array. • For simplicity, it is assumed that all vertices are reachable from the starting vertex. A sketch follows below.
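A minimal sketch of the traversal in Python; the adjacency list at the end is an assumption reconstructing the example graph the slides appear to use, chosen so that the output matches slide 287:

    from collections import deque

    def bfs(adj, start):
        """Breadth-first traversal; adj maps each vertex to its neighbours."""
        visited = {start}          # plays the role of the Boolean visited array
        order = []
        queue = deque([start])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in adj[u]:       # enqueue unvisited neighbours
                if v not in visited:
                    visited.add(v)
                    queue.append(v)
        return order

    adj = {0: [1, 2], 1: [2], 2: [0, 3], 3: [3]}   # assumed example graph
    print(*bfs(adj, 2))            # prints: 2 0 3 1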
  • 287. Time Complexity • Following is the Breadth First Traversal (starting from vertex 2): • 2 0 3 1 • Time Complexity: O(V+E), where V is the number of vertices in the graph and E is the number of edges in the graph.
  • 288. Depth First Search • Depth First Traversal (or Search) of a graph is like depth-first traversal of a tree. • The only catch is that, unlike trees, graphs may contain cycles, so a node might otherwise be visited twice. • To avoid processing a node more than once, use a Boolean visited array. A sketch follows below.
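A minimal recursive sketch in Python, mirroring the tree-style traversal with a visited set; the function name and example graph are illustrative:

    def dfs(adj, u, visited=None, order=None):
        """Recursive depth-first traversal; adj maps vertex -> neighbours."""
        if visited is None:
            visited, order = set(), []
        visited.add(u)             # mark before recursing, so cycles stop here
        order.append(u)
        for v in adj[u]:
            if v not in visited:
                dfs(adj, v, visited, order)
        return order

    adj = {0: [1, 2], 1: [2], 2: [0, 3], 3: [3]}   # same assumed graph as above
    print(*dfs(adj, 2))            # prints: 2 0 1 3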
  • 290. Complexity Analysis • Time complexity: O(V + E), where V is the number of vertices and E is the number of edges in the graph. • Space Complexity: O(V), since an extra visited array of size V is required (and the recursion stack can also grow to O(V) in the worst case).
  • 291. Topological Sorting • Topological sorting of a Directed Acyclic Graph (DAG) is a linear ordering of its vertices such that for every directed edge u → v, vertex u comes before v in the ordering. • Topological sorting of a graph is not possible if the graph is not a DAG. • In DFS, we print a vertex and then recursively call DFS for its adjacent vertices. • In topological sorting, we need to print a vertex before its adjacent vertices. • So topological sorting is different from DFS.
  • 292. DFS vs TS • In DFS, we start from a vertex, print it first, and then recursively call DFS for its adjacent vertices. • In topological sorting, we use a temporary stack. • We don’t print the vertex immediately; we first recursively call topological sorting for all its adjacent vertices, then push the vertex to the stack. Finally, we print the contents of the stack (see the sketch after this slide). • Note that a vertex is pushed to the stack only when all its adjacent vertices (and their adjacent vertices, and so on) are already in the stack.
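A minimal sketch of this DFS-plus-stack scheme in Python; the example DAG at the end is an assumption made for illustration:

    def topological_sort(adj):
        """DFS-based topological sort of a DAG given as vertex -> neighbours."""
        visited, stack = set(), []
        def visit(u):
            visited.add(u)
            for v in adj.get(u, []):
                if v not in visited:
                    visit(v)
            stack.append(u)        # push u only after all its descendants
        for u in adj:              # cover every vertex, even if disconnected
            if u not in visited:
                visit(u)
        return stack[::-1]         # popping the stack yields the ordering

    adj = {5: [2, 0], 4: [0, 1], 2: [3], 3: [1], 0: [], 1: []}  # assumed DAG
    print(*topological_sort(adj))  # one valid order: 4 5 0 2 3 1

Which valid ordering is produced depends on the iteration order over the vertices; any output in which every edge points from left to right is correct.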
  • 297. Complexity Analysis • Time Complexity: O(V+E). • The above algorithm is simply DFS with an extra stack, so the time complexity is the same as DFS, which is O(V+E). • Auxiliary space: O(V). • The extra space is needed for the stack.
  • 298. Strongly Connected Components • A directed graph is strongly connected if there is a directed path between every pair of vertices, in both directions. • A strongly connected component (SCC) of a directed graph is a maximal strongly connected subgraph.
  • 299. Kosaraju’s Algorithm
    For each vertex u of the graph, mark u as unvisited. Let L be empty.
    For each vertex u of the graph, do Visit(u), where Visit(u) is the recursive subroutine:
        If u is unvisited, then:
            Mark u as visited.
            For each out-neighbour v of u, do Visit(v).
            Prepend u to L.
        Otherwise do nothing.
  • 300. Kosaraju’s Algorithm
    For each element u of L in order, do Assign(u, u), where Assign(u, root) is the recursive subroutine:
        If u has not been assigned to a component, then:
            Assign u as belonging to the component whose root is root.
            For each in-neighbour v of u, do Assign(v, root).
        Otherwise do nothing.
  • 301. Steps • Create an empty stack ‘S’ and do a DFS traversal of the graph. In the DFS traversal, after calling recursive DFS for the adjacent vertices of a vertex, push the vertex to the stack. • Reverse the directions of all arcs to obtain the transpose graph. • While S is not empty, pop a vertex from S; let the popped vertex be ‘v’. If v has not been visited yet, take v as the source and do a DFS on the transpose graph. The DFS starting from v prints the strongly connected component of v. A sketch follows below.
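A minimal sketch of these steps in Python; the function name and the example graph are assumptions made for illustration, and adj is expected to list every vertex as a key:

    def kosaraju_scc(adj):
        """Kosaraju's algorithm; adj maps vertex -> list of out-neighbours."""
        # Step 1: DFS on the original graph, pushing vertices by finish time.
        visited, stack = set(), []
        def dfs1(u):
            visited.add(u)
            for v in adj[u]:
                if v not in visited:
                    dfs1(v)
            stack.append(u)
        for u in adj:
            if u not in visited:
                dfs1(u)
        # Step 2: reverse all arcs to obtain the transpose graph.
        transpose = {u: [] for u in adj}
        for u in adj:
            for v in adj[u]:
                transpose[v].append(u)
        # Step 3: pop vertices from the stack; each DFS on the transpose
        # that starts at an unvisited vertex yields one SCC.
        visited.clear()
        components = []
        def dfs2(u, comp):
            visited.add(u)
            comp.append(u)
            for v in transpose[u]:
                if v not in visited:
                    dfs2(v, comp)
        while stack:
            u = stack.pop()
            if u not in visited:
                comp = []
                dfs2(u, comp)
                components.append(comp)
        return components

    adj = {0: [1], 1: [2], 2: [0, 3], 3: [4], 4: []}   # assumed example
    print(kosaraju_scc(adj))        # prints: [[0, 2, 1], [3], [4]]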