Design and Analysis of
Algorithms
Prepared By:
Smruti Smaraki Sarangi
Asst. Professor
IMS Unison University, Dehradun
Module - 1
Algorithm:
 An algorithm is a step-by-step process for solving a
problem in a finite number of steps.
 It is a set of rules that must be followed when solving a
specific problem.
 It is a well-defined computational procedure which takes
some value or set of values as input and generates some
set of value as output.
 So, an algorithm is defined as a finite sequence of
computational steps that transforms the given input into
the output for a given problem.
 An algorithm is considered to be correct, if for every input
instance, it generates the correct output and gets
terminated.
 So a correct algorithm solves a given computational
problem and gives the desired output.
 The main objectives of an algorithm are:
 To solve a problem in a sequence with a finite no.
of steps
 For designing an algorithm, we need to construct an
efficient solution for a problem.
Why we study Algorithms:
 We study algorithms:
 To become good programmers at 2 levels. That is:
i. Macro-level (implementation of algorithms, like s/w)
ii. Micro-level (the algorithmic parts, like h/w)
 To learn to design algorithms that solve
computational problems economically.
 By applying mathematical algebra, we obtain the
computational logic.
Algorithm Paradigm:
 It includes 4 steps. That is:
 Design of algorithm
 Algorithm validation
 Analysis of algorithms
 Algorithm testing
1. Design of algorithm:
 Various designing techniques are available which
yield good and useful algorithms.
 These techniques are not only applicable to
computer science, but also to other areas, such as
operations research and electrical engineering.
 The techniques are: divide and conquer, incremental
approach, dynamic programming etc. By studying
these, we can formulate good algorithms.
2. Algorithm validation:
 Algorithm validation checks the algorithm's result for
all legal sets of input.
 After designing, it is necessary to check whether the
algorithm computes the correct and desired result
for all possible legal sets of input.
 Here the algorithm is not yet converted into a program.
But after showing the validity of the method, a
program is written. This is known as "program
proving" or "program verification".
 Here we check the program output for all possible sets
of input.
 It requires that each statement be precisely
defined and that all basic operations can be proved
correct.
3. Analysis of algorithms:
 The analysis of algorithms focuses on time complexity
and space complexity.
 The amount of memory needed by a program to run to
completion is referred to as space complexity.
 The amount of time needed by an algorithm to run to
completion is referred to as time complexity.
 For an algorithm, the time complexity depends upon the
size of the input; thus it is a function of the input size 'n'.
 Usually, we deal with the best case time, average case
time and worst case time for an algorithm.
 The minimum amount of time that an algorithm
requires for an input size 'n' is referred to as Best
Case Time Complexity.
 Average Case Time Complexity is the execution time of
an algorithm on typical input data of size 'n'.
 The maximum amount of time needed by an
algorithm for an input size 'n' is referred to as Worst
Case Time Complexity.
4. Algorithm testing:
 This phase involves testing of a program. It consists
of two phases. That is: Debugging and Performance
Measurement.
 Debugging is the process of finding and correcting the
causes of differences between the desired and observed
behaviors.
 Debugging can only point to the presence of errors,
but not to their absence.
 Performance Measurement, or Profiling, precisely
describes how the correct program executes for all
possible data sets, and measures the time and space
taken to compute the results.
NOTES:
 While designing and analyzing an algorithm, two
fundamental issues are to be considered. That is:
1. Correctness of the algorithm
2. Efficiency of the algorithm
 While designing the algorithm, it should be clear, simple
and unambiguous.
 The characteristics of an algorithm are: finiteness,
definiteness, efficiency, input and output.
Analysis of Algorithms:
 Analysis of algorithms depends upon various factors, such
as memory, communication bandwidth or computer
hardware. But the measure most often used is the
computational time that an algorithm requires for
completing the given task.
 As algorithms are machine and language independent, these
are the only important, durable and original parts of
computer science. Thus, we will do all our design and
implementation for the RAM model of computation.
 In the RAM model, all instructions are executed sequentially,
one after another, with no concurrent operations. Performing
a simple operation like addition, subtraction or assignment
takes 1 step in this model.
 A call to a subroutine and loops are not single-step
operations. Instead, each memory access takes exactly one
step. By counting the number of steps, the running time of
an algorithm is measured.
 The analysis of an algorithm focuses on the time and space
complexity. The space complexity refers to the amount of
memory required by an algorithm to run to completion.
 Time complexity is a function of the input size 'n'. It is
referred to as the amount of time required by an algorithm
to run to completion.
 Since different running times can arise for the same
algorithm, we usually refer to the best case, average case
and worst case complexity.
1. Worst-Case Time Complexity:
 The worst-case time complexity is the function defined
by the maximum amount of time needed by an
algorithm for an input size 'n'. Thus, it is the function
defined by the maximum no. of steps taken on any
instance of size 'n'.
 A worst case estimate is normally computed, because
it provides an upper bound for all inputs, including
particularly the bad ones.
[Figure: no. of steps vs. input size n, with the worst case curve
on top, the average case in the middle and the best case at the
bottom.]
2. Average-Case Time Complexity:
 The average case time complexity is the execution time
of an algorithm on typical input data of size 'n'; thus
it is the function defined by the average no. of steps
taken on any instance of size 'n'.
 Average-case analysis does not provide the upper-
bound, and it is difficult to compute.
3. Best-Case Time Complexity:
 The best-case time complexity is the minimum
amount of time that an algorithm requires for an input
of size 'n'; thus it is the function defined by the
minimum no. of steps taken on any instance of size
'n'.
 All these time complexities define a numerical function:
time ~ size.
Calculation of Running
Time:
 There are several ways to estimate the running time of a
program. If 2 programs are expected to take similar times,
probably the best way to decide which is faster is to code
them both up and run them.
 Generally there are several algorithmic ideas, and we would
like to eliminate the bad ones early.
 So, an analysis is usually required. Furthermore, the ability to
do an analysis usually provides insight into designing
efficient algorithms.
 The analysis also generally pinpoints the bottlenecks, which
are worth coding carefully.
 To simplify the analysis, we will adopt the convention that
there are no particular units of time. Thus, we throw away
low-order terms.
 So, what we are essentially doing is computing a big oh (O)
running time. Since big oh (O) is an upper bound, we must
be careful never to underestimate the running time of the
program.
 In effect, the answer provided is a guarantee that the
program will terminate within a certain time period. The
program may stop earlier than this, but never later.
Analyzing the control
structure:
 Sequencing:
 Let 'P1' and 'P2' be two fragments of an algorithm; they
may be single instructions or complicated sub-
algorithms. Let 't1' and 't2' be the times taken by 'P1' and
'P2' respectively. These times may depend on various
parameters, such as the instance size.
 The sequencing rule says that the time required to
compute P1, P2 (i.e.: 1st P1, then P2) is simply t1 + t2. By
the maximum rule, the time will be O(max(t1, t2)).
 E.g.: i) t1 = θ(n), t2 = θ(n²). So, the computational time is:
t2 = θ(n²).
ii) if t1 = θ(n), t2 = O(n²) => t2 = O(n²).
iii) if t1 = O(n), t2 = θ(n²) => t2 = θ(n²).
 Add the times of the individual statements. The maximum is
the one that counts.
 If then else:
 Again consider P1 and P2 to be the parts of an algorithm,
with computation times t1 and t2 respectively. Now 'P1' is
computed only when the given condition is true.
Otherwise, for the false condition, 'P2' is computed. Thus,
the total time is given by the conditional rule for 'if then
else'.
 According to the maximum rule this computation time is:
max(t1, t2).
E.g.:
i) Suppose P1 = t1 = θ(n), P2 = t2 = θ(n²) => T(n) = θ(n²).
ii) t1 = O(n²), t2 = θ(n²) => T(n) = O(n²) or θ(n²) => O(n²)
 For Loop:
 It is to be noted that P(i) is computed for each iteration
from i ← 1 to m. If the value of 'm' is zero, then 'm'
does not generate any error; instead, the loop
terminates without doing anything.
E.g.: for i ← 1 to m
{
P(i)
}
 If P(i) takes a constant time 't' for its computation,
then for 'm' iterations the total time for the loop is
simply 'mt'. Here, we are not considering the loop
control. As we know, the for loop can be expressed as:
while(i ≤ m)
{
P(i)
i ← i + 1
}
 The test condition, the assignment instruction and the
sequencing operation (goto: implicit in the while loop) are
considered at unit cost for simplicity. Suppose all these
operations are bounded by 'c'; then the computation time
for the loop is bounded above by:
T(n) ≤ c : for i ← 1
+ (m + 1)c : test condition i ≤ m
+ mt : for execution of P(i)
+ mc : for execution of i ← i + 1
+ mc : for the sequencing operation
 T ≤ (t + 3c)m + 2c. If 'c' is very small relative to 't', then
the computational time for the loop is bounded above by
T(n) ≤ mt.
 Now, if the computation time 'ti' for P(i) varies as a
function of 'i', then the total computation time for the loop
(after neglecting loop control) is given not by a
multiplication, but by a sum:
for i ← 1 to m
{
P(i)
}
=> T(n) = Σ_{i=1}^{m} ti
E.g.: for i ← 1 to m
{
sum ← sum + t[i]
}
Total time = Σ_{i=1}^{m} ti = Σ_{i=1}^{m} θ(1) = θ(Σ_{i=1}^{m} 1) = θ(m)
 If the algorithm consists of nested for loops, then the total
time is:
for i ← 1 to m {
for j ← 1 to m {
P(i, j)
}
}
=> T(n) = Σ_{i=1}^{m} Σ_{j=1}^{m} tij
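These loop counts can be checked mechanically. A small sketch in
Python (not part of the original notes; the step counter is
illustrative): counting one unit per execution of the loop body
reproduces T(m) = Σ ti for a single loop and Σ Σ tij for nested loops.

def single_loop_steps(m):
    steps = 0
    for i in range(1, m + 1):      # for i <- 1 to m
        steps += 1                 # P(i) assumed to cost theta(1)
    return steps                   # = m, so T(m) = theta(m)

def nested_loop_steps(m):
    steps = 0
    for i in range(1, m + 1):      # outer loop
        for j in range(1, m + 1):  # inner loop
            steps += 1             # P(i, j) assumed to cost theta(1)
    return steps                   # = m*m, so T(m) = theta(m^2)

assert single_loop_steps(10) == 10
assert nested_loop_steps(10) == 100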
 While Loop:
 The while loops are difficult to analyze in comparison to
for loops, as in these there is no obvious method which
determines how many times we shall have to repeat the
loops.
 The simple technique for analyzing such loops is to first
determine a function of the variables involved whose value
decreases each time around the loop. For the loop to
terminate, it is necessary that this value be a positive
integer.
 By keeping track of how many times the value of the
function decreases, one can obtain the no. of repetitions of
the loop. The other approach for analyzing while loops is
to treat them like recursive algorithms. E.g.:
while(m > 0)
{ m ← m – 1}
=> Time T(n) = θ(m)
 The rules are:
1. Sequencing: Add the times of the individual
statements. The maximum is the one that counts.
2. Alternative Structures: Time for testing the
condition + the maximum time taken by any of the
alternative paths.
3. Loops: Execution time of a loop is at most the
execution time of the statements of the body
(including the condition tests), multiplied by the
number of iterations.
4. Nested Loops: Analyze them inside out.
5. Sub-programs: Analyze them as separate algorithms
and substitute the time whenever necessary.
6. Recursive Sub-programs: Generally the running
time can be expressed as a recurrence relation, whose
solution gives the growth rate of the execution time.
Asymptotic Notation:
 The notations which we use to describe the asymptotic
running time of an algorithm are defined in terms of
functions whose domains are the set of natural numbers
or real numbers.
 The natural number set is denoted as: N = {0, 1, 2, …}
 The positive integer set is denoted as: N+ = {1, 2, 3, …}
 Real number set is denoted as R.
 Positive real number set is denoted as R+.
 Non-negative real number set is denoted as R*.
 Such notations are convenient for describing the worst
case running time function T(n), which is usually defined
only on integer input sizes.
 The different types of notations are:
 Big oh (O) notation
 Small oh (o) notation
 Theta (θ) notation
 Omega (Ω) notation
 Small omega (ω) notation
1. Big Oh (O) Notation:
 The upper bound for a function is provided by Big Oh
(O) notation. We can say the running time of an
algorithm is O(g(n)), if, whenever the input size equals
or exceeds some threshold 'n0', its running time can be
bounded above by some positive constant 'c' times g(n).
 Let f(n) and g(n) be two functions from the set of natural
numbers to the set of non-negative real numbers; then f(n)
is said to be O(g(n)),
 That is: f(n) = O(g(n)), iff there exist a natural number
'n0' and a positive constant c > 0, such that f(n) ≤
c(g(n)), for all n ≥ n0.
[Figure: running time vs. input size; the curve c(g(n)) lies above
f(n) for all n ≥ n0.]
Examples:
1. f(n) = 2n² + 7n – 10, n = 5, c = 3.
=> f(n) = O(g(n)), where g(n) = n²
f(n) ≤ c(g(n)) => 2n² + 7n – 10 ≤ 3n²
=> 2 x 25 + 7 x 5 – 10 ≤ 3 x 25
=> 50 + 35 – 10 ≤ 75 => 75 ≤ 75.
So, it is in O(g(n)) = O(n²).
2. f(n) = 2n² + 7n – 10, n = 4, c = 3, g(n) = n²
=> f(n) ≤ c(g(n)) => 2n² + 7n – 10 ≤ 3n²
=> 2 x 16 + 7 x 4 – 10 ≤ 3 x 16
=> 32 + 28 – 10 ≤ 48 => 50 ≤ 48, which is false.
So the bound does not hold at n = 4 (the threshold is n0 = 5).
3. f(n) = 2n² + 7n – 10, n = 6, c = 3, g(n) = n²
=> f(n) ≤ c(g(n)) => 2n² + 7n – 10 ≤ 3n²
=> 2 x 36 + 7 x 6 – 10 ≤ 3 x 36
=> 72 + 42 – 10 ≤ 108 => 104 ≤ 108.
So, it is in O(g(n)) = O(n²).
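These hand checks can be mechanized. A minimal Python sketch (not
part of the original notes; the function names are illustrative)
that tests f(n) ≤ c·g(n) over a range of n:

def f(n):
    return 2 * n * n + 7 * n - 10

def g(n):
    return n * n

def bound_holds(c, n0, limit=1000):
    # check f(n) <= c*g(n) for every n in [n0, limit]
    return all(f(n) <= c * g(n) for n in range(n0, limit + 1))

print(bound_holds(3, 5))    # True: the bound holds from n0 = 5 onward
print(f(4) <= 3 * g(4))     # False: 50 <= 48 fails at n = 4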
2. Small Oh (o) Notation:
 The functions in small oh (o) notation are smaller than
the functions in Big oh (O) notation. We use small oh (o)
notation to denote an upper bound that is not
asymptotically tight.
 This notation is defined as: f(n) = o(g(n)), iff for every
positive constant c > 0 there exists n0 > 0, such that f(n) <
c(g(n)), for all n > n0.
 The definitions of O-notation and o-notation are similar.
 The main difference is that in f(n) = O(g(n)), the bound
f(n) ≤ c(g(n)) holds for some constant c > 0, but in f(n) =
o(g(n)), the bound f(n) < c(g(n)) holds for all constants
c > 0.
 In this notation, the function f(n) becomes insignificant
relative to g(n) as 'n' approaches infinity. That is:
lim_{n→∞} f(n)/g(n) = 0.
3. Big omega (Ω) Notation:
 The lower bound for a function is provided by Big
Omega (Ω) Notation.
 We can say the running time of an algorithm is Ω(g(n)), if,
whenever the input size equals or exceeds some
threshold value 'n0', its running time is bounded below by
some positive constant 'c' times g(n).
 Let f(n) and g(n) be 2 functions from the set of natural
numbers to the set of non-negative real numbers; then f(n)
is said to be Ω(g(n)),
 That is, f(n) = Ω(g(n)) iff there exist a natural number
'n0' and a constant c > 0, such that f(n) ≥ c(g(n)), for all
n ≥ n0.
[Figure: running time vs. input size; f(n) lies above c(g(n)) for
all n ≥ n0.]
Example:
1. f(n) = n² + 3n + 4, n = 1, c = 1.
=> f(n) = Ω(g(n)), where g(n) = n²
f(n) ≥ c(g(n)) => n² + 3n + 4 ≥ cn²
=> 1 + 3 x 1 + 4 ≥ 1 x 1 => 8 ≥ 1
=> f(n) = Ω(g(n)) = Ω(n²). (proved)
4. Small omega (ω) Notation:
 For a given function g(n), we denote by ω(g(n)) the
larger functions of Big omega (Ω) notation.
 We use ω-notation to denote a lower bound that is not
asymptotically tight.
 We define this notation as: f(n) = ω(g(n)), iff for every
positive constant c > 0 there exists n0 > 0, such that f(n)
> c(g(n)), for all n ≥ n0.
 The relation f(n) = ω(g(n)) implies that
lim_{n→∞} f(n)/g(n) = ∞.
5. Theta (θ) Notation:
 For a given function g(n), we denote by θ(g(n)) the
set of functions f(n) = θ(g(n)) for which there exist some
constants c1, c2 and n0, such that:
c1(g(n)) ≤ f(n) ≤ c2(g(n)), for all n ≥ n0.
 For all values of 'n' to the right of 'n0', the value of f(n)
lies at or above c1(g(n)) and at or below c2(g(n)).
 In other words, for all n ≥ n0, the function f(n) is equal to
g(n) to within a constant factor.
 We say that g(n) is an asymptotically tight bound for f(n).
 The definition of θ(g(n)) requires that every member
f(n) є θ(g(n)) be asymptotically non-negative. That is:
f(n) is non-negative whenever 'n' is sufficiently large.
 So f(n) = θ(g(n)) implies that:
lim_{n→∞} f(n)/g(n) = constant c.
[Figure: running time vs. input size; f(n) lies between c1(g(n))
and c2(g(n)) for all n ≥ n0.]
Asymptotic Notation
Properties:
1. Reflexivity:
i. f(n) = θ(f(n))
ii. f(n) = O(f(n))
iii. f(n) = Ω(f(n))
2. Symmetry:
i. f(n) = θ(g(n)) iff g(n) = θ(f(n))
3. Transpose Symmetry:
i. f(n) = O(g(n)) iff g(n) = Ω(f(n))
ii. f(n) = o(g(n)) iff g(n) = ω(f(n))
4. Transitivity:
i. f(n) = θ(g(n)) and g(n) = θ(h(n)) => f(n) = θ(h(n))
ii. f(n) = O(g(n)) and g(n) = O(h(n)) => f(n) = O(h(n))
iii. f(n) = Ω(g(n)) and g(n) = Ω(h(n)) => f(n) = Ω(h(n))
iv. f(n) = o(g(n)) and g(n) = o(h(n)) => f(n) = o(h(n))
v. f(n) = ω(g(n)) and g(n) = ω(h(n)) => f(n) = ω(h(n))
5. Some Important Formulae:
i. For any two functions f(n) and g(n), we have:
f(n) = θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n))
ii. If f(n) = O(g(n)) and f(n) = Ω(g(n)) => f(n) = θ(g(n))
iii. If T1(n) = O(f(n)) and T2(n) = O(g(n)), then
a. T1(n) + T2(n) = O(max(f(n), g(n)))
b. T1(n) * T2(n) = O(f(n) * g(n))
iv. If f(n) and g(n) are two asymptotically non-negative
functions, then max(f(n), g(n)) = θ(f(n) + g(n)).
6. Logarithm Formulae:
i. lg n = log2 n (binary)
ii. ln n = loge n (natural)
iii. lg^k n = (lg n)^k (exponentiation)
iv. lg lg n = lg(lg n) (composition)
v. a = b^(log_b a)
vi. logc(ab) = logc a + logc b
vii. log_b a^n = n log_b a
viii. log_b a = logc a / logc b (change of base)
ix. logc(1/a) = –logc a
x. log_b a = 1/log_a b
xi. a^(log_b c) = c^(log_b a)
xii. log n < n < n log n < n² < 2^n < n! < n^n
7. Factorial Functions:
i. n! = 1, if n = 0; n(n – 1)!, if n > 0.
So, n! = 1 * 2 * 3 * … * n.
ii. n! = √(2πn)(n/e)^n (1 + θ(1/n)) [Stirling's
Approximation]
iii. n! = O(n^n)
iv. n! = ω(2^n)
v. lg(n!) = θ(n lg n)
vi. lg* n = min{i ≥ 0 : lg^(i) n ≤ 1}
a. lg* 2 = 1
b. lg* 4 = 2
c. lg* 16 = 3
d. lg* 65536 = 4
e. lg*(2^65536) = 5
Problems:
1. Show that for any real constants 'a' and 'b',
where b > 0, (n + a)^b = θ(n^b).
Ans: Here f(n) = (n + a)^b and g(n) = n^b.
We know that f(n) = θ(g(n)), if lim_{n→∞} f(n)/g(n) = constant.
=> (n + a)^b = θ(n^b), if lim_{n→∞} (n + a)^b / n^b = constant.
=> lim_{n→∞} {n(1 + a/n)}^b / n^b = lim_{n→∞} n^b(1 + a/n)^b / n^b
= lim_{n→∞} (1 + a/n)^b = 1^b = 1 = constant.
=> (n + a)^b = θ(n^b) (proved)
2. Prove that 2^(n+1) = O(2^n)
Ans: Here f(n) = 2^(n+1) and g(n) = 2^n.
lim_{n→∞} f(n)/g(n) = lim_{n→∞} 2^(n+1)/2^n
= lim_{n→∞} (2^n · 2)/2^n = 2 = constant
=> f(n) = θ(g(n)).
According to O-notation, f(n) = O(g(n)) iff there exist a
positive constant 'c' and 'n0' such that f(n) ≤ c(g(n)),
for all n ≥ n0. Here 2^(n+1) = 2 · 2^n ≤ c · 2^n with c = 2.
=> 2^(n+1) = O(2^n) (proved)
3. Prove that 2^(2n) = ω(2^n)
Ans: Here f(n) = 2^(2n) and g(n) = 2^n.
lim_{n→∞} f(n)/g(n) = lim_{n→∞} 2^(2n)/2^n = lim_{n→∞} 2^n = ∞
=> f(n) = ω(g(n))
=> 2^(2n) = ω(2^n), which is a ω-notation. (Proved)
4. Show that 5n² = o(n³)
Ans: Here f(n) = 5n² and g(n) = n³.
lim_{n→∞} f(n)/g(n) = lim_{n→∞} 5n²/n³ = lim_{n→∞} 5/n = 0.
So, it is in small oh
5. Show that 2n = o(n²)
Ans: Here f(n) = 2n and g(n) = n².
lim_{n→∞} f(n)/g(n) = lim_{n→∞} 2n/n² = lim_{n→∞} 2/n = 0. So, f(n) = o(g(n)) => 2n =
o(n²) is proved. So, it is small oh notation.
6. Show that n²/2 = ω(n)
Ans: Here f(n) = n²/2 and g(n) = n.
lim_{n→∞} f(n)/g(n) = lim_{n→∞} (n²/2)/n = lim_{n→∞} (n²/2) · (1/n) = lim_{n→∞} n/2 = ∞
So, f(n) = ω(g(n)) (proved).
7. Theorem:
Let f(n) = a0 + a1n + a2n² + … + am·n^m, with am > 0; then
prove that f(n) = θ(n^m).
Ans: Here f(n) = a0 + a1n + a2n² + … + am·n^m and g(n) = n^m.
According to θ-notation, lim_{n→∞} f(n)/g(n) = c
=> f(n) = θ(g(n))
=> lim_{n→∞} (a0 + a1n + a2n² + … + am·n^m) / n^m
= lim_{n→∞} n^m(a0/n^m + a1/n^(m–1) + … + am) / n^m
= am = constant c => f(n) = θ(n^m) (proved)
8. Let f(n) = 7n³ + 5n² + 4n + 2. Prove f(n) = θ(n³).
Ans: Here f(n) = 7n³ + 5n² + 4n + 2 and g(n) = n³.
According to θ-notation, lim_{n→∞} f(n)/g(n) = constant
=> f(n) = θ(g(n)) => lim_{n→∞} (7n³ + 5n² + 4n + 2) / n³
= lim_{n→∞} (7 + 5/n + 4/n² + 2/n³) = 7 = constant
=> f(n) = θ(n³) (proved)
9. Prove lg(n!) = θ(n lg n).
Ans: n^n ≥ n!
=> lg n^n ≥ lg n! => n lg n ≥ lg n! => lg n! ≤ n lg n
=> lg n! ≤ 1 · n lg n => lg n! = O(n lg n) --- (i)
where c1 = 1. Now to show that lg n! = Ω(n lg n), that is, there
exist some constants 'c' and 'n0', such that 0 ≤ cn lg n ≤ lg n!
=> lg n^(cn) ≤ lg n! => n^(cn) ≤ n!.
Taking c = 1/3, we get n^(n/3) ≤ n!, which is true for all
sufficiently large n.
=> lg n! = Ω(n lg n) --- (ii)
From equations (i) and (ii), lg n! = θ(n lg n) (proved).
10. Prove n! = o(n^n).
Ans: We have to show that lim_{n→∞} n!/n^n = 0.
n! = √(2πn)(n/e)^n (1 + θ(1/n)) (according to Stirling's
Approximation)
=> lim_{n→∞} n!/n^n = lim_{n→∞} √(2πn)(n/e)^n(1 + θ(1/n)) / n^n
= lim_{n→∞} √(2πn)(1 + θ(1/n)) / e^n
= lim_{n→∞} √(2πn)/e^n + lim_{n→∞} √(2πn) · θ(1/n)/e^n
= 0 + 0 = 0
(since e^n grows faster than any polynomial in n, both terms
vanish in the limit).
Hence, lim_{n→∞} n!/n^n = 0 => n! = o(n^n) (proved).
Recurrence:
 A recurrence is an equation or inequality that describes a
function in terms of its value on smaller inputs.
 The running time of the recursive algorithm can be
obtained by a recurrence.
 To solve the recurrence relation means to obtain a function
defined on the natural numbers that satisfies the
recurrence.
Recurrence Relation:
 A recurrence relation (RR) is defined as for a sequence {an}
is an equation that expresses „an‟ in terms of one or more
previous elements a0, a1.. an-1 of sequence for all n ≥ n0
without any base cases.
 E.g.: i) For instance, consider the recurrence relation
tn = 2tn–1. If 'c' is a constant, then any function of the form
c·2^n is a solution to the above recurrence relation.
 Considering mathematical induction, we have the induction
part: if tn–1 = c·2^(n–1), then tn = 2tn–1 = c·2^n.
 If we have the initial condition t0 = 5, then the only choice
for the constant 'c' is the value 5, so as to give the
correct initial value. This is the basis part of the proof; thus
we have tn = 5·2^n.
 It does not matter in which order the basis and induction are
established; what matters is that both have been verified to
be correct. Hence the solution of the recurrence is tn = 5·2^n.
 E.g.: ii) The worst-case running time T(n) of the MERGE-
SORT procedure can be described by the recurrence:
T(n) = θ(1), if n = 1
2T(n/2) + θ(n), if n > 1,
whose solution was claimed to be T(n) = θ(n lg n).
Recurrence Equation
Method:
 We solve the recurrence equation by the following
methods. That is:
i. Substitution method
ii. Iterative method
iii. Master method
iv. Recurrence or Recursion Tree
1. Substitution Method:
 In this method, first we guess a solution and use
mathematical induction to find the constant and show
that the solution works.
 The substitution method can be used to establish either
upper or lower bounds on a recurrence.
 E.g.: T(n) = 2T(⌊n/2⌋) + n --- (1) is a recurrence
relation. We guess the solution T(n) = O(n lg n), where
g(n) = n lg n. That is: f(n) = T(n) ≤ cn lg n ---
(2). Then we have to prove that this solution is true, by
using mathematical induction.
 From equation (1), T(n) = 2T(⌊n/2⌋) + n
=> T(n) ≤ 2c(⌊n/2⌋) lg(⌊n/2⌋) + n
=> T(n) ≤ 2c · (n/2) lg(n/2) + n = cn(lg n – lg 2) + n
= cn lg n – cn + n. So, T(n) ≤ cn lg n – n(c – 1)
=> T(n) ≤ cn lg n for c ≥ 1
 Now we anchor the induction (the boundary condition):
 For n = 1, T(1) = 1, but c · 1 · lg 1 = 0, so the bound
T(1) ≤ c · 1 · lg 1 does not hold; the induction cannot
start at n = 1.
 For n = 2, if T(2) = 2T(1) + 2 = 4, then
4 ≤ c · 2 lg 2 => c ≥ 2.
Hence, T(n) ≤ cn lg n with c ≥ 2 is true for n = 2
=> T(2) is true.
 Similarly T(3) is true, and T(k + 1) is true by using the
bound for T(⌊(k + 1)/2⌋).
 (A side example of the same unrolling idea: for
T(n) = T(n – 1) + 1 with base case T(1) = 1, repeated
substitution gives T(n) = T(n – k) + k; with n – k = 1,
i.e. k = n – 1, we get T(n) = 1 + (n – 1) = n = O(n).)
 So, whenever the bound is true for ⌊n/2⌋, it is true for n:
if it is true for 3, then it is true for 6 and 7, and if it is
true for 6, then it is true for 12 and 13, and so on.
 Hence, we conclude that T(n) ≤ cn lg n for all n ≥ 2, with
c ≥ 2. So, T(n) = O(n lg n) is a solution of
T(n) = 2T(⌊n/2⌋) + n.
 Substitution Method is of two types. That is :
i. Backward Substitution
ii. Forward Substitution
i. Backward Substitution Method:
 Question: T(n + 1) = 2T(n). Solve this recurrence
relation using the substitution method, using backward
substitution.
 Ans: T(n + 1) = 2T(n), let the base value be T(0) = 1
=> T(n + 1) = 2(2T(n – 1)) = 2²T(n – 1) (1st term)
= 2²(2T(n – 2)) = 2³T(n – 2) (2nd term)
For the kth term: 2^(k+1) T(n – k)
 Let n – k = 0 => k = n.
So for the kth term, it is 2^(n+1) T(0) = 2^(n+1) · 1 = 2^(n+1)
=> T(n + 1) = 2^(n+1) => T(n) = 2^n (Ans)
 Now to prove this backward substitution using
mathematical induction: T(n) = 2^n
Let n = 1 be true => T(1) = 2¹ = 2
Let n = n be true => T(n) = 2^n
We have to prove, T(n + 1) = 2^(n+1)
=> T(n + 1) = 2 · T(n) = 2 · 2^n = 2^(n+1)
=> T(n + 1) = 2^(n+1) (proved)
ii. Forward Substitution Method:
 Question: T(n + 1) = 2T(n). Solve this recurrence
relation using the forward substitution method.
 Ans: T(n + 1) = 2T(n),
=> T(1) = 2T(0) = 2 (n = 0 and T(0) = 1)
=> T(2) = 2T(1) = 2 x 2 = 2² (n = 1 and T(1) = 2)
=> T(3) = 2T(2) = 2³
 For k, T(k) = 2^k (n = k – 1 and T(k – 1) = 2^(k–1))
Putting n = k => T(n) = 2^n => T(n + 1) = 2^(n+1)
 To prove this forward substitution method, we have to
solve by the mathematical induction process.
 That is: T(n) = 2^n.
Let n = 1 be true => T(1) = 2¹ = 2
Let n = n be true => T(n) = 2^n
 We have to prove, T(n + 1) = 2^(n+1)
=> T(n + 1) = 2 · T(n) = 2 · 2^n = 2^(n+1) [T(n) = 2^n]
=> T(n + 1) = 2^(n+1) (proved)
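A quick numeric check of this solution (a minimal sketch, not part
of the original notes; the loop simply replays the forward
substitution):

def T(n):
    t = 1                   # base value T(0) = 1
    for _ in range(n):      # apply T(k + 1) = 2*T(k), n times
        t = 2 * t
    return t

assert all(T(n) == 2 ** n for n in range(20))   # T(n) = 2^n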
Problems on Substitution
Method:
1. T(n) = 2T(n – 1) + 1, initial value/base value
T(0) = 1, using forward substitution.
 Ans: T(n) = 2T(n – 1) + 1, where T(0) = 1
=> T(1) = 2T(0) + 1 = 2 x 1 + 1 = 3 = 2² – 1
T(2) = 2T(1) + 1 = 2 x 3 + 1 = 7 = 2³ – 1
T(3) = 2T(2) + 1 = 2 x 7 + 1 = 15 = 2⁴ – 1
…
T(k) = 2^(k+1) – 1
=> T(n) = 2^(n+1) – 1
=> T(n + 1) = 2^(n+2) – 1 = 2T(n) + 1
 Now, by using mathematical induction,
T(n) = 2^(n+1) – 1. Let n = 1 => T(1) = 2² – 1 = 3, which matches.
 Similarly, let 'n' be true => T(n) = 2^(n+1) – 1
We have to prove, T(n + 1) = 2T(n) + 1 = 2(2^(n+1) – 1) + 1
=> 2 · 2^(n+1) – 2 + 1 = 2^(n+2) – 1
=> T(n + 1) = 2^(n+2) – 1 (proved)
2. Consider the recurrence T(n) = 3T(n/2) + n,
n ≥ 1, with initial condition T(0) = 0; obtain the
solution of the above recurrence.
 Ans: T(n) = 3T(n/2) + n, n ≥ 1 and T(0) = 0
(we take the values of n represented as powers of 2).
For n = 1, T(1) = 3T(1/2) + 1 = 1
n = 2, T(2) = 3T(1) + 2 = 3 x 1 + 2 = 5
n = 2² = 4, T(4) = 3T(2) + 4 = 3(3 x 1 + 2) + 2²
= 3² x 1 + 3 x 2 + 2² = 9 + 6 + 4 = 19
n = 2³ = 8, T(8) = 65 = 3³ x 1 + 3² x 2 + 3 x 2² + 2³
n = 2⁴ = 16, T(16) = 211 = 3⁴ x 1 + 3³ x 2 + 3² x 2² + 3 x 2³ + 2⁴
 From the above computation, we guess the solution for
n = 2^k:
T(2^k) = 3^k·2⁰ + 3^(k–1)·2¹ + 3^(k–2)·2² + … + 3 x 2^(k–1) + 3⁰ x 2^k
= 3^k Σ_{i=0}^{k} (2/3)^i = 3^(k+1) – 2^(k+1)
For k = 0, T(1) = 3 – 2 = 1, which matches the computed
value, so the guess is consistent.
 Thus, with k = lg n, T(n) = 3^(k+1) – 2^(k+1)
= 3 · 3^(lg n) – 2n = 3n^(lg 3) – 2n (for 'n' a power of 2;
the same growth rate can be observed when 'n' is not a
power of 2).
3. Consider the recurrence T(n) = 1, n = 1 and
T(n) = 2T(⌊n/2⌋) + n, n > 1. We have to find an asymptotic
bound on T(n).
 T(n) = 1, n = 1 and T(n) = 2T(⌊n/2⌋) + n, n > 1
 For the above recurrence, we guess that it satisfies
O(n log n). Thus, we have to show that there exists a
constant 'c' such that T(m) ≤ cm log m for all m < n, which
implies
T(n) ≤ cn log n
=> T(n) ≤ 2c⌊n/2⌋ log⌊n/2⌋ + n
=> T(n) ≤ cn log(n/2) + n = cn log n – cn log 2 + n
=> T(n) ≤ cn log n – (c log 2 – 1)n
=> T(n) ≤ cn log n, for all c > 1/log 2
 By mathematical induction, we can check T(2) = 4 and
T(3) = 5.
 For n = 1, cn log n yields 0, so the induction cannot start
there. Thus, the inductive proof of T(n) ≤ cn log n is
completed by choosing 'c' large enough that T(2) ≤ c·2 log 2
and T(3) ≤ c·3 log 3. The above relations hold for c ≥ 2.
Thus, T(n) ≤ cn log n holds true.
 Thus, our guess of the solution T(n) = O(n log n) is correct.
4. Consider the recurrence T(n) = 2T(⌊n/2⌋ + 16)
+ n. We have to show that it is asymptotically
bounded by O(n log n).
 T(n) = 2T(⌊n/2⌋ + 16) + n. For T(n) = O(n log n), we have
to show that for some constant c,
T(n) ≤ cn log n
=> T(n) ≤ 2c(⌊n/2⌋ + 16) log(⌊n/2⌋ + 16) + n
≤ cn log(n/2) + 32c log n + n (treating the +16 as a
lower-order contribution)
= cn log n – cn + n + 32c log n
= cn log n – ((c – 1)n – 32c log n)
≤ cn log n (for c > 1 and sufficiently large n, since
(c – 1)n dominates 32c log n)
Thus, T(n) = O(n log n)
5. Consider the recurrence T(n) = 2T(⌊n/2⌋) + n;
we have to show that it is asymptotically bounded
below by Ω(n log n).
 T(n) = 2T(⌊n/2⌋) + n. For T(n) = Ω(n log n), we have to
show that for some constant 'c',
T(n) ≥ cn log n
=> T(n) ≥ 2c(⌊n/2⌋) log(⌊n/2⌋) + n
≈ cn log(n/2) + n = cn log n – cn log 2 + n
= cn log n – cn + n
=> T(n) ≥ cn log n for c ≤ 1. Thus T(n) = Ω(n log n)
6. Consider the recurrence T(n) = T(⌊n/2⌋) + 1;
we have to show that it is asymptotically bounded
by O(log n).
 T(n) = T(⌊n/2⌋) + 1. For T(n) = O(log n), we have to
show that for some constant 'c',
T(n) ≤ c log n
=> T(n) ≤ c log(⌊n/2⌋) + 1 ≤ c log n – c log 2 + 1
=> T(n) ≤ c log n for c ≥ 1. Thus T(n) = O(log n)
2. Iterative Method:
 The iterative method is the method where the recurrence
relation is solved by considering 3 steps. That is:
I. Step 1: expand the recurrence.
II. Step 2: express it as a summation (∑) of terms
dependent only on 'n' and the initial condition.
III. Step 3: evaluate the summation (∑).
Problems on Iteration
Method:
1. T(n) = 0, n = 0 – (i) (initial condition) and
T(n) = c + T(n – 1), n > 0 – (ii)
=> T(n) = T(n – 1) + c
= T(n – 2) + c + c => T(n – 2) + 2c
= T(n – 3) + c + c + c => T(n – 3) + 3c
= T(n – 4) + c + c + c + c => T(n – 4) + 4c
For the kth term: T(n – k) + c + c + … (k times)
= T(n – k) + Σ_{i=1}^{k} c
From the base case, n – k = 0 => n = k
=> T(n) = T(0) + Σ_{i=1}^{n} c = 0 + cn => T(n) = cn.
2. T(n) = 0, n = 0 – (i) (initial condition) and
T(n) = T(n – 1) + n, n > 0 – (ii)
=> T(n) = T(n – 1) + n
T(n – 1) = T(n – 2) + n – 1
=> T(n) = T(n – 2) + (n – 1) + n
T(n – 2) = T(n – 3) + n – 2
=> T(n) = T(n – 3) + (n – 2) + (n – 1) + n
…
For the kth term: T(n) = T(n – k) + (n – k + 1) + … + (n – 1) + n
=> T(n) = T(n – k) + Σ_{i=0}^{k–1} (n – i)
From the base case, n – k = 0 => n = k
=> T(n) = T(0) + Σ_{i=0}^{n–1} (n – i) = 0 + (n + (n – 1) + … + 1)
We know that Σ_{i=1}^{n} i = n(n + 1)/2
=> T(n) = n(n + 1)/2
3. T(n) = c, n = 1 – (i) (initial condition) and
T(n) = 2T(n/2) + c, n > 1 – (ii)
=> T(n) = 2T(n/2) + c
= 2(2T(n/2²) + c) + c = 2²T(n/2²) + 2c + c
= 2²(2T(n/2³) + c) + 2c + c
= 2³T(n/2³) + 2²c + 2¹c + 2⁰c
…
For the kth term: 2^k T(n/2^k) + 2^(k–1)c + … + 2¹c + 2⁰c
= 2^k T(n/2^k) + c Σ_{i=0}^{k–1} 2^i
(since a + ar + ar² + … + ar^(n–1) = a(r^n – 1)/(r – 1))
= 2^k T(n/2^k) + c(2^k – 1)/(2 – 1)
= 2^k T(n/2^k) + c(2^k – 1)
The base case is reached when n/2^k = 1 => n = 2^k => k = log2 n.
Note that 2^(log2 n) = n^(log2 2) = n¹ = n (since a^(log_b c) = c^(log_b a)).
Let k = log n:
= 2^(log n) T(n/2^(log n)) + c·2^(log n) – c
= nT(1) + cn – c
= cn + cn – c
= 2cn – c
= c(2n – 1)
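A sketch (not part of the original notes) checking this closed form
against the recurrence for powers of 2; c = 5 is an arbitrary
illustrative constant:

def T(n, c):
    if n == 1:                       # T(1) = c
        return c
    return 2 * T(n // 2, c) + c      # T(n) = 2T(n/2) + c

c = 5
assert all(T(2 ** k, c) == c * (2 * 2 ** k - 1) for k in range(15))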
4. T(n) = 2T(⌊n/3⌋) + n
Expanding the above terms, we get
T(n) = n + (2/3)n + 4T(n/9)
= n + (2/3)n + (4/9)n + 8T(n/27)
It is to be noticed that we meet the boundary
condition where n/3^i ≤ 1, i.e., after performing ⌈log3 n⌉
expansions. Thus,
T(n) = Σ_{i=0}^{⌈log3 n⌉} (2/3)^i n + 2^(log3 n) θ(1)
≤ Σ_{i=0}^{log3 n} (2/3)^i n + 2^(log3 n) θ(1)
≤ Σ_{i=0}^{∞} (2/3)^i n + n^(log3 2) θ(1)
= 3n + O(n) = O(n).
So, T(n) = O(n).
5. T(n) = T(n – 1) + 1, T(1) = θ(1)
Expanding the above terms: T(n – 1) = T(n – 2) + 1
So, T(n) = (T(n – 2) + 1) + 1 = T(n – 2) + 2
T(n – 2) = T(n – 3) + 1
=> T(n) = (T(n – 3) + 1) + 2 = T(n – 3) + 3
For the kth term: T(n) = T(n – k) + k
When k = n – 1 => T(n – k) = T(1) = θ(1)
Thus, T(n) = θ(1) + (n – 1) = θ(n).
Hence, T(n) = θ(n)
6. T(n) = T(n/2) + n, T(1) = θ(1)
Expanding the above terms: T(n/2) = T(n/4) + n/2
Thus, T(n) = T(n/4) + n/2 + n
T(n/4) = T(n/8) + n/4
=> T(n) = T(n/8) + n/4 + n/2 + n
T(n/8) = T(n/16) + n/8
=> T(n) = T(n/16) + n/8 + n/4 + n/2 + n
T(n/16) = T(n/32) + n/16
=> T(n) = T(n/32) + n/16 + n/8 + n/4 + n/2 + n
…
For the kth term: T(n) = T(n/2^k) + Σ_{j=0}^{k–1} n/2^j
It can be observed that the recursion stops when we get to
T(1). This happens when n/2^k = 1, that is, n = 2^k => k = log n
Thus, T(n) = θ(1) + Σ_{j=0}^{log n – 1} n/2^j
< θ(1) + Σ_{j=0}^{∞} n/2^j
< θ(1) + 2n = θ(n)
Hence, T(n) = θ(n)
7. T(n) = 3T(⌊n/4⌋) + n
Expanding the above terms, we get
T(n) = n + 3T(⌊n/4⌋)
= n + 3(⌊n/4⌋ + 3T(⌊n/16⌋))
= n + 3⌊n/4⌋ + 9⌊n/16⌋ + 27T(⌊n/64⌋)
The recursion stops when n/4^i ≤ 1, which implies n ≤ 4^i
=> i = log4 n
Thus, T(n) = n + 3⌊n/4⌋ + … + 3^i⌊n/4^i⌋ + … + 3^(log4 n)·θ(1)
T(n) ≤ (n + 3n/4 + 9n/16 + …) + 3^(log4 n)·θ(1)
≤ n Σ_{k=0}^{∞} (3/4)^k + θ(n^(log4 3))
{as 3^(log4 n) = n^(log4 3)}
≤ n(1/(1 – 3/4)) + O(n)
= 4n + O(n) {as log4 3 < 1} = O(n)
Hence, T(n) = O(n) (proved)
3. Master Method:
 The master method is used for solving the following
type of recurrence: T(n) = aT(n/b) + f(n), where 'a'
and 'b' are constants, a ≥ 1, b > 1.
 In the above recurrence, the problem of size 'n' is
divided into 'a' sub-problems, each of size 'n/b'.
 Each sub-problem of size 'n/b' can be solved recursively
in time T(n/b).
 The cost of dividing or splitting the problem and of
combining the solutions or results is described by the
function f(n).
 Here the size n/b is interpreted as ⌊n/b⌋ or ⌈n/b⌉. T(n)
can be bounded asymptotically by the following 3 cases.
1. CASE I: if f(n) = O(n^(log_b a – є)) for some constant є > 0,
then T(n) = θ(n^(log_b a)).
2. CASE II: if f(n) = θ(n^(log_b a)), then T(n) = θ(n^(log_b a) · log n).
3. CASE III: if f(n) = Ω(n^(log_b a + є)) for some constant є > 0,
and if af(n/b) ≤ cf(n) for some constant 0 < c < 1 and all
sufficiently large n, then T(n) = θ(f(n)).
NOTES:
 If the recurrence is of the following form, i.e.:
T(n) = aT(n/b) + cn^d, n > n0
 Then the solution of the recurrence is:
T(n) = θ(n^d), if a < b^d
θ(n^d log n), if a = b^d
θ(n^(log_b a)), if a > b^d
 E.g.: T(n) = 3T(n^(1/3)) + log3 n
Let us assume m = log3 n => n = 3^m. Thus, n^(1/3) = 3^(m/3)
=> T(3^m) = 3T(3^(m/3)) + m. Again, consider s(m) = T(3^m).
We have s(m) = 3s(m/3) + m.
Using the master method,
s(m) є θ(m log m) => T(n) є θ(log3 n · (log log3 n)) (Ans)
 Let T(n) = 2T(n/2) + n log n. If f(n) mixes a polynomial part
with a logarithmic part in this way, the master method may
not work or cannot be applied.
 So for the solution we have to apply the substitution or
iterative method.
 Here a = 2, b = 2, f(n) = n log n and n^(log_b a) = n. Here we
cannot compare n with n log n in the way the master theorem
requires: n log n is asymptotically larger than n, but not
polynomially larger (not larger by a factor n^є for any є > 0).
 So none of the three cases applies, and we go for the
iterative or substitution method.
 T(n) = 2T(3n/2) + 3. Here a = 2, b = 2/3, f(n) = 3. Now
n^(log_b a) = n^(log_{2/3} 2).
 It cannot be solved by the master method, because the
master theorem requires b > 1, i.e., the sub-problem size
must shrink; here b = 2/3 < 1.
 So this type of problem is solved by either the iterative or
the substitution method.
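The simplified (a, b, d) form in the notes above translates directly
into a small decision procedure. A Python sketch (an illustration,
not part of the original notes; it only reports the asymptotic
class):

import math

def master(a, b, d):
    # Classify T(n) = a*T(n/b) + c*n^d by comparing a with b^d.
    if a < b ** d:
        return "theta(n^%d)" % d
    if a == b ** d:
        return "theta(n^%d log n)" % d
    return "theta(n^%.3f)" % math.log(a, b)   # exponent log_b a

print(master(2, 2, 1))    # 2T(n/2) + n   -> theta(n^1 log n)
print(master(16, 4, 1))   # 16T(n/4) + n  -> theta(n^2.000)
print(master(4, 2, 3))    # 4T(n/2) + n^3 -> theta(n^3)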
Problems on Master
Method:
1. T(n) = 3T(n/2) + n². Here a = 3, b = 2, f(n) = n². Now
n^(log_b a) = n^(log2 3) = n^1.585. But f(n) = n², so it falls in
case 3, where є > 0.
=> є = 0.415 => f(n) = Ω(n^(log_b a + є)) => T(n) = θ(f(n))
=> T(n) = θ(n²), and af(n/b) ≤ cf(n) => 3f(n/2) ≤ cn²
=> 3n²/4 ≤ cn² => 3/4 ≤ c (and we can take c = 3/4 < 1).
So, af(n/b) ≤ cf(n) is satisfied.
2. T(n) = 4T(n/2) + n². Here a = 4, b = 2, f(n) = n². Now
n^(log_b a) = n^(log2 4) = n^(log2 2²) = n^(2 log2 2) = n²
(since log2 2 = 1). So it satisfies case 2, i.e.:
f(n) = θ(n^(log_b a)) => f(n) = θ(n²) => T(n) = θ(n^(log_b a) · log n)
=> T(n) = θ(n² log n) (Ans)
3. T(n) = 2T(n/2) + n. Here a = 2, b = 2, f(n) = n. Now
n^(log_b a) = n^(log2 2) = n = f(n). So it satisfies case 2. That is:
f(n) = θ(n^(log_b a)) = θ(n) => T(n) = θ(n^(log_b a) · log n) = θ(n log n).
4. T(n) = 16T(n/4) + n. Here a = 16, b = 4, f(n) = n. Now
n^(log_b a) = n^(log4 16) = n^(log4 4²) = n^(2 log4 4) = n². It satisfies
case 1. That is f(n) = O(n^(log_b a – є))
=> n = O(n^(2 – є)) = O(n^(2 – 1)) => n = O(n)
(since є = 1 and є > 0)
=> T(n) = θ(n^(log_b a)) => T(n) = θ(n²) (Ans)
5. T(n) = 2T(n/2) + n – 1. Here a = 2, b = 2, f(n) = n – 1.
Now n^(log_b a) = n^(log2 2) = n. f(n) = n – 1 does not
belong to O(n^(log_b a – є)) for any є > 0, so case 1 does not
apply. But f(n) є θ(n).
So according to case 2, we have T(n) = θ(n log n) (Ans)
6. T(n) = T(3n/4) + 1 and T(1) = θ(1). We have to find its
asymptotic bound. Using the master method we have
a = 1, b = 4/3 and f(n) = 1. Now n^(log_b a) = n^(log_{4/3} 1) = n⁰ = 1.
So case 2 applies, since 1 = θ(1). So T(n) = θ(log n)
7. T(n) = 4T(n/2) + n. Here a = 4, b = 2, f(n) = n. Now
n^(log_b a) = n^(log2 4) = n². Since f(n) = n, it
satisfies case 1. That is f(n) = O(n^(log_b a – є))
=> n = O(n^(2 – 1)) => n = O(n) (since є > 0 and є = 1).
Thus T(n) = θ(n^(log_b a)) => T(n) = θ(n²) (Ans).
8. T(n) = 4T(n/2) + n³. Here a = 4, b = 2, f(n) = n³. Now
n^(log_b a) = n^(log2 4) = n². Since f(n) = n³, it
satisfies case 3. That is f(n) = Ω(n^(log_b a + є)) = Ω(n^(2+1))
=> n³ = Ω(n³). Thus, T(n) = θ(n³), provided af(n/b) ≤ cf(n):
=> 4f(n/2) ≤ cn³ => 4(n/2)³ ≤ cn³ => 4n³/8 ≤ cn³
=> n³/2 ≤ cn³ => c = 1/2 (since c > 0 and c < 1). So,
af(n/b) ≤ cf(n) is satisfied.
4. Recursion Tree Method:
 Recursion Tree Method is pictorial representation of an
iteration method, which is in the form of a tree, where at
each levels, nodes are expanded.
 It is used to keep track of the sizes of the remaining
arguments in the recurrence and the non-recursive costs.
In a recursion tree, each node represents the cost of a
single sub-problem.
 We add the costs within each level of the tree to obtain a
set of per-level costs, and then we add up all the levels'
costs to determine the total cost of all levels of the
recursion.
 In general, for T(n) = aT(n/b) + f(n):
T(n) = aT(n/b) + f(n)
T(n/b) = aT(n/b²) + f(n/b), and so on.
[Figure: recursion tree for T(n) = aT(n/b) + f(n). The root costs
f(n); level 1 has 'a' nodes of cost f(n/b) each, totaling af(n/b);
level 2 totals a²f(n/b²); and so on down to the leaves T(1), with
level i totaling a^i f(n/b^i).]
Theorem:
1. Let a ≥ 1 and b > 1 be constants. Let f(n) be a non-
negative function defined on exact powers of 'b'. Define
T(n), on exact powers of 'b', by the recurrence
T(n) = θ(1), n = 1
aT(n/b) + f(n), if n = b^i, i a +ve integer
=> T(n) = θ(n^(log_b a)) + Σ_{i=0}^{log_b n – 1} a^i f(n/b^i)
Since a^(log_b n) = n^(log_b a):
θ(n^(log_b a)) = total cost of the leaves,
Σ_{i=0}^{log_b n – 1} = sum over all the levels,
a^i f(n/b^i) = cost per level.
Problems on Recursion
Tree:
1. T(n) = T(n/3) + T(2n/3) + n. The recursion tree for this is:
[Figure: root n with children n/3 and 2n/3; their children n/9,
2n/9, 2n/9, 4n/9; each level totals n; the longest path has
log_{3/2} n levels.]
=> T(n) = n + n + n + … (log_{3/2} n times) = θ(n log n).
Total = θ(n log n)
2. T(n) = 2T(n/2) + n². The recursion tree for this is:
[Figure: root n² with children (n/2)² and (n/2)²; the level totals
are n², n²/2, n²/4, …, over log2 n levels.]
The level costs form a decreasing geometric series, with total
n²(1 + 1/2 + 1/4 + …) ≤ 2n².
So, the above recurrence has the solution T(n) = θ(n²).
Total = θ(n²)
3. T(n) = 4T(n/2) + n. The recursion tree for this is:
[Figure: root n; level 1 has 4 nodes of cost n/2, totaling 2n;
level 2 has 16 nodes of cost n/4, totaling 4n; …; the tree has
log n levels, ending in leaves of cost 1.]
We have n + 2n + 4n + … (log n terms)
= n(1 + 2 + 4 + … + 2^(log n)) = n(2^(log n + 1) – 1)/(2 – 1)
= 2n² – n = θ(n²)
=> T(n) = θ(n²)
Total = θ(n²)
4. T(n) = 3T(n/4) + n. The recursion tree for this is:
[Figure: root n with 3 children of cost n/4 each; the level totals
are n, 3n/4, 9n/16, …; the depth is log4 n, with n^(log4 3)
leaves T(1).]
=> T(n) = θ(n^(log4 3)) + Σ_{i=0}^{log4 n – 1} (3/4)^i n
=> T(n) < θ(n^(log4 3)) + Σ_{i=0}^{∞} (3/4)^i n
=> T(n) < θ(n^(log4 3)) + (1/(1 – 3/4))n
=> T(n) < θ(n^(log4 3)) + 4n
=> T(n) є O(n).
Here, we have the linear worst case complexity.
5. Solve the factorial with a recursion tree and its
recurrence relation.
Factorial:
The term 'n' factorial indicates the product of the
positive integers from 1 to n inclusive and is denoted
by n!. The factorial of a number (n) in a recursive
manner is defined by:
fact(n) = 1, n = 0
n * fact(n – 1), n > 0
The algorithm for the factorial is:
fact(n)
{
if n = 0 then return 1
else if n > 0 return n * fact(n – 1)
}
From the algorithm, the running time satisfies the recurrence:
T(n) = 1, n = 0
T(n – 1) + 1, n > 0
Put n = 1, 2, …, n:
T(1) = T(0) + 1, T(2) = T(1) + 1, …, T(n) = T(n – 1) + 1
Using the recursion tree, the solution is obtained by adding
the chain of equations:
T(n) = T(n – 1) + 1
+ T(n – 1) = T(n – 2) + 1
+ T(n – 2) = T(n – 3) + 1 ….
+ T(2) = T(1) + 1
+ T(1) = T(0) + 1
---------------------------
T(n) = T(0) + (1 + 1 + … + 1, n times) => T(n) = T(0) + n
=> T(n) = n + 1
If T(1) = 1 is given, then we have to calculate up to
T(2) = T(1) + 1:
=> T(n) = T(1) + (1 + 1 + … + 1, n – 1 times)
=> T(n) = T(1) + n – 1 = 1 + n – 1 = n => T(n) = n
[Figure: the recursion tree is a chain T(n) → T(n – 1) →
T(n – 2) → … → T(0), with cost 1 at each level.]
6. Solve the Fibonacci series with a recursion tree and its
recurrence relation.
Fibonacci Series:
The Fibonacci Series is a series of non-negative integers in
which the next term of the series is the addition of the
previous 2 terms. i.e.: 0, 1, 1, 2, 3, 5, 8, 13, 21.
The algorithm for the Fibonacci series is:
Fibseq(n)
{
if n = 0
then return 0
else if n = 1
then return 1
else if n > 1
then return (Fibseq(n – 1) + Fibseq(n – 2))
}
The Fibonacci sequence in recursive manner is defined by
the recurrence relation:
T(n) = 0, if n = 0
1, if n = 1
T(n – 1) + T(n – 2), if n > 1.
From the recurrence relation,
put n = 2, 3, …, n.
So, T(2) = T(0) + T(1),
T(3) = T(1) + T(2), …,
T(n) = T(n – 1) + T(n – 2)
[Figure: recursion tree with root T(n), children T(n – 1) and
T(n – 2); their children T(n – 2), T(n – 3) and T(n – 3),
T(n – 4); and so on down to leaves T(0) and T(1).]
Each call spawns two further recursive calls, so the recursion
tree roughly doubles at every level and has O(2^n) nodes of
constant cost each; hence the naive recursive Fibonacci
algorithm takes exponential time, T(n) = O(2^n).
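A Python sketch making the recursion-tree blow-up visible (not part
of the original notes): counting the calls of the naive Fibseq shows
the exponential size of the tree, while a memoized variant (an
addition for contrast) solves each subproblem once.

from functools import lru_cache

calls = 0

def fib(n):                      # naive Fibseq: two calls per node
    global calls
    calls += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):                 # each subproblem solved once
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib(20), calls)            # 6765, with 21891 calls
print(fib_memo(20))              # 6765, with only 21 subproblems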
Binary Search Algorithm:
 Binary_search(a[], n, x)
begin
low ← 1, high ← n, j ← 0;
while (low ≤ high AND j = 0)
begin
mid ← ⌊(low + high)/2⌋;
if (x = a[mid])
j ← mid;
else if (x < a[mid])
high ← mid – 1;
else low ← mid + 1;
end while
return j;
end
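A runnable Python rendering of the pseudocode above (a sketch;
indices are 1-based as in the pseudocode, so a[0] is unused, and 0
is returned when x is absent):

def binary_search(a, n, x):
    low, high, j = 1, n, 0
    while low <= high and j == 0:
        mid = (low + high) // 2
        if x == a[mid]:
            j = mid                 # found: record the position
        elif x < a[mid]:
            high = mid - 1          # search the left half
        else:
            low = mid + 1           # search the right half
    return j

a = [None, 2, 5, 8, 12, 16, 23, 38]   # sorted, 1-indexed
print(binary_search(a, 7, 23))        # 6 (position of 23)
print(binary_search(a, 7, 4))         # 0 (not found)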
Binary Search Analysis:
 For the 1st iteration – n elements
2nd iteration – ⌊n/2⌋
3rd iteration – ⌊⌊n/2⌋/2⌋ = ⌊n/4⌋ …
For the jth iteration – ⌊n/2^(j–1)⌋ = 1
 By the floor definition, 1 ≤ n/2^(j–1) < 2 => 2^(j–1) ≤ n < 2^j
=> j – 1 ≤ log2 n < j (taking logarithms)
=> j ≤ log n + 1 => j = ⌊log n⌋ + 1
So, the time complexity is: O(log n)
Insertion Sort Algorithm:
 Insertion_sort(a[], n)
begin
for (i = 2 to n)
begin j ← i;
temp ← a[i];
while(j > 1 AND a[j – 1] > temp)
begin
a[j] ← a[j – 1];
j ← j – 1;
end while
a[j] ← temp;
end for
end
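The same algorithm in runnable Python (a sketch; 0-indexed, so the
pseudocode's for (i = 2 to n) becomes range(1, n)):

def insertion_sort(a):
    for i in range(1, len(a)):          # i = 2 to n in the pseudocode
        temp = a[i]
        j = i
        while j > 0 and a[j - 1] > temp:
            a[j] = a[j - 1]             # shift larger elements right
            j -= 1
        a[j] = temp                     # insert a[i] at its position
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]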
Insertion sort Analysis:
 Insertion_sort(a[], n)
Sl.no | Code | Cost | Times
1 | begin | 0 | 1
2 | for (i = 2 to n) | C1 | n
3 | begin | 0 | n – 1
4 | j ← i | C2 | n – 1
5 | temp ← a[i] | C3 | n – 1
6 | while(j > 1 AND a[j – 1] > temp) begin | C4 | Σ_{j=2}^{n} tj
7 | a[j] ← a[j – 1] | C5 | Σ_{j=2}^{n} (tj – 1)
8 | j ← j – 1 | C6 | Σ_{j=2}^{n} (tj – 1)
9 | end while | 0 | Σ_{j=2}^{n} (tj – 1)
10 | a[j] ← temp | C7 | n – 1
11 | end for | 0 | n – 1
12 | end | 0 | 1
 Total time = ∑ (cost x times)
= 0 + C1n + 0 + C2(n – 1) + C3(n – 1) + C4 Σ_{j=2}^{n} tj +
C5 Σ_{j=2}^{n} (tj – 1) + C6 Σ_{j=2}^{n} (tj – 1) + 0 +
C7(n – 1) + 0 + 0
= (C1 + C2 + C3 + C7)n – (C2 + C3 + C7) +
C4 Σ_{j=2}^{n} tj + C5 Σ_{j=2}^{n} (tj – 1) + C6 Σ_{j=2}^{n} (tj – 1)
= C8n – C9 + (C4 + C5 + C6) Σ_{j=2}^{n} tj – (C5 + C6) Σ_{j=2}^{n} 1
= C8n – C9 + C10 Σ_{j=2}^{n} tj – (C5 + C6)(n – 1)
= n(C8 – C5 – C6) + (C5 + C6 – C9) + C10 Σ_{j=2}^{n} tj
= C11 + C12n + C10 Σ_{j=2}^{n} tj
In the worst case tj = j, so Σ_{j=2}^{n} tj = n(n + 1)/2 – 1:
= C11 + C12n + C10(n(n + 1)/2 – 1)
= C13 + C12n + C10n/2 + C10n²/2
= C10n²/2 + C14n + C13 ≈ An² + Bn + C ≈ O(n²)
 So, the best case run time of insertion sort is:
T(n) = C1n + C2(n – 1) + C3(n – 1) + C7(n – 1)
= (C1 + C2 + C3 + C7)n – (C2 + C3 + C7)
 The running time can be expressed as 'an + b' for constants
'a' and 'b' that depend on the statement costs 'Ci'; it is thus
a linear function of 'n'.
 The worst case run time of insertion sort:
T(n) = C1n + C2(n – 1) + C3(n – 1) + C4(n(n + 1)/2 – 1) +
C5(n(n – 1)/2) + C6(n(n – 1)/2) + C7(n – 1)
= (C4/2 + C5/2 + C6/2)n² + (C1 + C2 + C3 + C4/2 – C5/2
– C6/2 + C7)n – (C2 + C3 + C4 + C7)
 The worst case running time can be expressed as
an² + bn + c for constants a, b and c that again depend on
the statement costs 'Ci'; it is thus a quadratic function of
'n'.
Bubble Sort Algorithm:
 This algorithm sorts the elements of an array 'A' in
ascending or increasing order.
 Step 1: Initialization (p = pass counter, E = count of the no.
of exchanges, l = no. of unsorted elements)
 Step 2: Loop,
Repeat through step 4, while (p ≤ n – 1)
Set E ← 0 : initializing exchange variable
 Step 3: Comparison loop
Repeat for i ← 1, …, l – 1
if (A[i] > A[i + 1]) then
swap A[i] ↔ A[i + 1] : exchanging values
set E ← E + 1
 Step 4: Finish or reduce the size
if (E = 0), then
exit
else
set l ← l – 1
 Here, a pass refers to the search for the element with the
next smallest key.
 Each pass places one element in its proper position. Thus,
for performing the above sort, 'n – 1' passes are required.
 In pass 1 the adjacent elements are compared, such as A[1]
and A[2], and the elements are arranged in proper order, like
A[2], A[1] (if A[2] < A[1]). After that A[2] and A[3] are
compared.
 The process continues until the greatest element is placed at
the last position. Thus A[n] contains the largest element. In
this pass (n – 1) comparisons are required.
 In pass 2, the second largest element is placed at A[n – 1]
by performing (n – 2) comparisons. After (n – 1) passes, we
get the final sorted list as: A1 ≤ A2 ≤ A3 ≤ … An - 1 ≤ An.
 The whole list of 'n' elements of an array 'A' is sorted after
(n – 1) passes. So the time complexity of Bubble sort is:
i. at most (n – 1) passes are made, and
ii. each pass makes at most (n – 1) comparisons,
iii. i.e.: roughly n x n = n² => O(n²)
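A runnable Python sketch of the procedure above, including the
exchange counter E used for early exit:

def bubble_sort(a):
    l = len(a)                       # size of the unsorted part
    for _ in range(len(a) - 1):      # at most n - 1 passes
        E = 0
        for i in range(l - 1):       # compare adjacent pairs
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                E += 1
        if E == 0:                   # no exchanges: already sorted
            break
        l -= 1                       # largest element is in place
    return a

print(bubble_sort([7, 3, 9, 1, 5]))  # [1, 3, 5, 7, 9]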
Analysis Design Technique:
 Given a problem, the algorithm is largely influenced by the
choice of data structure. With a chosen data structure one
can develop a no. of different algorithms for the same
problem.
 The 1st intuitive algorithm may not be the best one as far as
memory and time efficiency are concerned. There are some
general techniques for the development of algorithms. Those
are:
 Divide and Conquer
 Greedy Strategy
 Dynamic Programming
 Back Tracking
 Branch and Bound
Divide and Conquer:
 The Divide and conquer method includes 3 steps. That are:
1. Step 1: Divide the problem into no. of sub-problems.
2. Step 2: Conquer the sub-problem by solving them
recursively, only if the problem sizes are small enough
to be solved in a straight forward manner, otherwise
step-1 is executed.
3. Step 3: Combine the solutions obtained by sub-
problems and create a final solution to the original
problem.
 Example: Merge Sort, Quick Sort and Heap Sort
a. Merge Sort:
1. Step 1: The whole list is divided into two sub-lists of
n/2 elements each for sorting.
2. Step 2: Sort the sub-lists recursively using merge sort.
3. Step 3: Now merge the two sorted sub-lists to generate
the sorted answer.
 For accomplishing the whole task, we are using two
procedures, 'Merge_Sort' and 'Merge'. The procedure
'Merge' is used for combining the sub-lists.
 The analysis part of the Merge Sort is solved by the
recursion tree method.
Merge Sort Algorithm:
Merge_Sort(A, p, r)
i. if p < r
ii. then q ← ⌊(p + r)/2⌋
iii. Merge_Sort(A, p, q)
iv. Merge_Sort(A, q + 1, r)
v. Merge(A, p, q, r)
Algorithm for Merge:
Merge(A, p, q, r)
i. n1 ← q – p + 1
ii. n2 ← r – q
iii. create arrays L[1 … n1 + 1] and R[1 … n2 + 1]
iv. for i ← 1 to n1
v. do L[i] ← A[p + i – 1]
vi. for j ← 1 to n2
vii. do R[j] ← A[q + j]
viii. L[n1 + 1] ← ∞
ix. R[n2 + 1] ← ∞
x. i ← 1
xi. j ← 1
xii. for k ← p to r
xiii. do if L[i] ≤ R[j]
xiv. then A[k] ← L[i]
xv. i ← i + 1
xvi. else A[k] ← R[j]
xvii. j ← j + 1
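A runnable Python rendering of Merge_Sort and Merge (a sketch;
0-indexed, with float('inf') standing in for the ∞ sentinels):

def merge(A, p, q, r):
    L = A[p:q + 1] + [float('inf')]      # left run plus sentinel
    R = A[q + 1:r + 1] + [float('inf')]  # right run plus sentinel
    i = j = 0
    for k in range(p, r + 1):
        if L[i] <= R[j]:
            A[k] = L[i]; i += 1
        else:
            A[k] = R[j]; j += 1

def merge_sort(A, p, r):
    if p < r:
        q = (p + r) // 2
        merge_sort(A, p, q)
        merge_sort(A, q + 1, r)
        merge(A, p, q, r)

A = [2, 4, 5, 7, 1, 2, 3, 6]             # the worked example below
merge_sort(A, 0, len(A) - 1)
print(A)                                 # [1, 2, 2, 3, 4, 5, 6, 7]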
Analysis for Merge Sort:
 The time complexity of Merge Sort (total time) is:
T(n) = θ(n log n), and the total cost = cn log n. The divide
step computes the middle of the sub-array, which takes
constant time, i.e.: θ(1).
 We recursively solve 2 sub-problems, each of size n/2,
which contributes 2T(n/2) to the running time. The Merge
procedure on an 'n'-element sub-array takes time θ(n).
 So total time = T(n) = 2T(n/2) + θ(n), and the recurrence
relation for Merge Sort is T(n) = 2T(n/2) + cn.
Solve the Merge Sort with Recursion Tree with
recurrence relation:
 The recurrence relation of Merge Sort is:
 T(n) = 2T(n/2) + cn. So, the recursion tree for this is:
[Figure: recursion tree for T(n) = 2T(n/2) + cn. The root costs
cn; level 1 has two nodes of cost cn/2 (level total cn); level 2
has four nodes of cost cn/4 (level total cn); and so on for log n
levels, each level totaling cn.]
 Divide until single-element sub-lists are reached. Let us
assume that the height of the tree is log n.
Total Cost = cost of the f(n) part per level * height of tree
Here, total cost = cn * log n
(cn = cost of the f(n) part, where f(n) = cn, and
log n = height of the tree)
=> Total Cost = cn log n and Total Time = θ(n log n)
=> T(n) = θ(n log n)
Example: Sort the elements 2, 4, 5, 7, 1, 2, 3, 6
using merge sort.
Ans: 2, 4, 5, 7, 1, 2, 3, 6. Now, the array is:
p ←1 2 3 4 (q) 5 6 7 8 → r
2 4 5 7 1 2 3 6
p = 1, r = 8, q = └(1 + 8)/2┘= └4.5┘= 4
Merge_sort(A, p, q) = Merge_sort(A, 1, 4)
Merge_sort(A, q + 1, r) = Merge_sort(A, 5, 8)
Merge(A, p, q, r)
n1 = q – p + 1 = 4, n2 = r – q = 8 – 4 = 4
Create Arrays L[1 … n1 + 1] and R[1 … n2 + 1]
=> L[1 to 5] and R[1 to 5]
for i = 1to n1 = 1 to 4 for j = 1to n2 = 1 to 4
L[i] = A[p + i – 1] R[j] = A[q + j]
L[1] = A[1] = 2 R[1] = A[5] = 1
L[2] = A[2] = 4 R[2] = A[6] = 2
L[3] = A[3] = 5 R[3] = A[7] = 3
1 2 3 4 5
L 2 4 5 7 ∞
1 2 3 4 5
R 1 2 3 6 ∞
L[4] = A[4] = 7 R[4] = A[8] = 6
L[5] = ∞ R[5] = ∞
Now,
k
i j
 Now i ← 1, j ← 1. For k = p to r = 1 to 8. If L[i] ≤ R[j],
then A[k] ← L[i] and i = i + 1, else A[k] ← R[j] and j = j + 1.
 Here, L[i] = L[1] = 2 and R[j] = R[1] = 1 => L[i] ≤ R[j]
(false).
 So, A[k] = R[j] => A[1] = R[1] = 1 and j = j + 1 = 1 + 1 = 2.
1 2 3 4 5
L 2 4 5 7 ∞
1 2 3 4 5
R 1 2 3 6 ∞
1 2 3 4 5 6 7 8
A 2 4 5 7 1 2 3 6
Now,
k
i j
 Now i = 1, j =2, k = p to r = 2 to 8.
 L[i] ≤ R[j] => L[1] ≤ R[2] => 2 ≤ 2 (true).
 So, A[k] = L[i] => A[2] = L[1] = 2 and i = i + 1= 1 + 1 = 2.
Now,
k
1 2 3 4 5
L 2 4 5 7 ∞
1 2 3 4 5
R 1 2 3 6 ∞
1 2 3 4 5 6 7 8
A 1 4 5 7 1 2 3 6
1 2 3 4 5 6 7 8
A 1 2 5 7 1 2 3 6
i j
 Now i = 2, j =2, k = p to r = 3 to 8.
 L[i] ≤ R[j] => L[2] ≤ R[2] => 4 ≤ 2 (false).
 So, A[k] = R[j] => A[3] = R[2] = 2 and j = j + 1= 2 + 1 = 3.
Now,
k
i j
1 2 3 4 5
L 2 4 5 7 ∞
1 2 3 4 5
R 1 2 3 6 ∞
1 2 3 4 5 6 7 8
A 1 2 2 7 1 2 3 6
1 2 3 4 5
L 2 4 5 7 ∞
1 2 3 4 5
R 1 2 3 6 ∞
 Now i = 2, j =3, k = p to r = 4 to 8.
 L[i] ≤ R[j] => L[2] ≤ R[3] => 4 ≤ 3 (false).
 So, A[k] = R[j] => A[4] = R[3] = 3 and j = j + 1= 3 + 1 = 4.
Now,
k
i j
 Now i = 2, j =4, k = p to r = 5 to 8.
 L[i] ≤ R[j] => L[2] ≤ R[4] => 4 ≤ 6 (true).
 So, A[k] = L[i] => A[5] = L[2]= 4 and i = i + 1= 3.
1 2 3 4 5 6 7 8
A 1 2 2 3 1 2 3 6
1 2 3 4 5
L 2 4 5 7 ∞
1 2 3 4 5
R 1 2 3 6 ∞
Now,
k
i j
 Now i = 3, j =4, k = p to r = 6 to 8.
 L[i] ≤ R[j] => L[3] ≤ R[4] => 5 ≤ 6 (true).
 So, A[k] = L[i] => A[6] = L[3]= 5 and i = i + 1= 4.
Now,
k
1 2 3 4 5
L 2 4 5 7 ∞
1 2 3 4 5
R 1 2 3 6 ∞
1 2 3 4 5 6 7 8
A 1 2 2 3 4 2 3 6
1 2 3 4 5 6 7 8
A 1 2 2 3 4 5 3 6
i j
 Now i = 4, j =4, k = p to r = 7 to 8.
 L[i] ≤ R[j] => L[4] ≤ R[4] => 7 ≤ 6 (false).
 So, A[k] = R[j] => A[7] = R[4]= 6 and j = j + 1= 5.
Now,
k
i j
1 2 3 4 5
L 2 4 5 7 ∞
1 2 3 4 5
R 1 2 3 6 ∞
1 2 3 4 5 6 7 8
A 1 2 2 3 4 5 6 6
1 2 3 4 5
L 2 4 5 7 ∞
1 2 3 4 5
R 1 2 3 6 ∞
 Now i = 4, j =5, k = p to r = 8 to 8.
 L[i] ≤ R[j] => L[4] ≤ R[5] => 7 ≤ ∞ (true).
 So, A[k] = L[i] => A[8] = L[4]= 7 and i = i + 1= 5.
Now,
i j
 Now the elements are sorted using merge sort.
 That is: 1, 2, 2, 3, 4, 5, 6, 7.
1 2 3 4 5 6 7 8
A 1 2 2 3 4 5 6 7
1 2 3 4 5
L 2 4 5 7 ∞
1 2 3 4 5
R 1 2 3 6 ∞
 The representation of the tree structure is:
[Figure: merge tree for the example. The single elements 2, 4,
5, 7, 1, 2, 3, 6 are merged pairwise into (2 4), (5 7), (1 2),
(3 6); then into (2 4 5 7) and (1 2 3 6); and finally into
1 2 2 3 4 5 6 7.]
 The general recurrence relation is:
T(n) = θ(1), if n ≤ c
aT(n/b) + D(n) + C(n), otherwise,
where D(n) is the cost of dividing and C(n) the cost of
combining.
b. Quick Sort:
 The Quick Sort technique is based on the divide and
conquer design technique, and works recursively on
longer lists.
 Here first we select the “pivot element” from the list,
then it partitions the list into elements that are less than
the pivot element and greater than the pivot. Here the
problem of sorting a given list is reduced to the problem
of sorting two sub-lists.
 The reduction step in the quick sort finds the final
position of a particular element (the pivot), which can be
accomplished by scanning the elements of the list
from right to left and checking the elements.
 The comparison of elements with the first element stops
when we obtain an element smaller than the first
element. Thus, in this case, an exchange of both
elements takes place.
 The whole procedure continues until all the elements of
the list are arranged in such a way that on the left side of
the pivot element, the elements are lesser and on the right
side, the elements are greater than the pivot. Thus, the list
is sub-divided into two lists.
 The sorting technique is considered in-place, since
it uses no other array storage. Given an array Q[p … r],
on the basis of Divide and Conquer, quick sort works
as follows:
i. Divide Q[p … r] into Q[p … q] and Q[q + 1 … r], with
'q' determined as a part of the division.
ii. In the conquer step, Q[p … q] and Q[q + 1 … r] are
then sorted recursively.
iii. In the combine step, nothing is needed, as all this
leaves the sorted array in place.
Procedure for Quick Sort:
1. While pivot > a[down], then down++.
2. While pivot < a[up], then up--.
3. If the position of down < position of up, then swap
the value of up and down. Then again the conditions 1
and 2 etc. are performed.
4. If position of down < position of up is false, then swap
the up value with the pivot value.
5. After the pivot element is placed in its final position,
the array is divided into 2 parts, and these 2 parts
are again sorted in the same way.
[Example array: 7 5 3 2 9 8 10 -5 4 1, with pivot = 7 (the first
element), 'down' scanning from the left and 'up' from the right.]
Quick Sort Algorithm:
Quick_Sort(A, p, r)
i. if p < r
ii. q ← partition(A, p, r)
iii. Quick_Sort(A, p, q – 1)
iv. Quick_Sort(A, q + 1, r)
Algorithm for Partition of Quick sort:
Partition(A, p, r)
i. x ← A[r]
ii. i ← p – 1
iii. for j ← p to r – 1
iv. do if A[j] ≤ x
v. then i ← i + 1
vi. exchange A[i] ↔ A[j]
vii. exchange A[i + 1] ↔ A[r]
viii. return i + 1
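A runnable Python version of Quick_Sort with the Partition routine
above (a sketch; 0-indexed, pivot x = A[r] as in the pseudocode):

def partition(A, p, r):
    x = A[r]                          # pivot
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]   # move small element left
    A[i + 1], A[r] = A[r], A[i + 1]   # place the pivot
    return i + 1                      # final position of the pivot

def quick_sort(A, p, r):
    if p < r:
        q = partition(A, p, r)
        quick_sort(A, p, q - 1)
        quick_sort(A, q + 1, r)

A = [7, 5, 3, 2, 9, 8, 10, -5, 4, 1]
quick_sort(A, 0, len(A) - 1)
print(A)   # [-5, 1, 2, 3, 4, 5, 7, 8, 9, 10]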
Analysis for Quick Sort:
 The running time of Quick sort depends on whether the
partition is balanced or unbalanced and it depends on which
elements are used for partitioning.
I. Best Case Analysis:
 Partitioning produces two sub-problems, each of size
no more than n/2. The recurrence for the running
time is T(n) = 2T(n/2) + θ(n)
=> T(n) = θ(n log n). [Applying the master method.]
II. Worst Case Analysis:
It occurs when the partitioning routine (algorithm or
pseudo-code) produces one sub-problem with n – 1
elements and one with zero elements.
Let us assume that this unbalanced partitioning arises
in each recursive call. The partitioning costs θ(n)
time; since T(1) = θ(1), the recurrence for the
running time is:
T(n) = T(n – 1) + T(0) + θ(n)
=> T(n) = T(n – 1) + θ(n)
=> T(n) = θ(n²) [Using the substitution method].
III. Average Case Analysis:
In average case analysis, the array is partitioned by
choosing any random element. In this case, at each
level some of the partitions are well balanced while
some are fairly unbalanced. Let us assume the
partition splits the array in the ratio 9 : 1; the
recurrence so obtained is:
T(n) = T(9n/10) + T(n/10) + n => T(n) = θ(n log n).
Each level costs about 'n', and since (9/10)^i · n = 1 at
the deepest level, there are about log_{10/9} n levels.
[Figure: recursion tree for T(n) = T(9n/10) + T(n/10) + n; each
level totals at most n, giving θ(n log n).]
c. Heap Sort:
 The heap sort is accomplished by using 2 other functions,
that is:
i. Build-Max-Heap: for building a max heap from an
unordered array
ii. Max-Heapify: for maintaining (fixing) the max-heap
property
 The heap is created when we input an array of 'n'
elements, where 'n' represents the length of the array 'A',
i.e.: n = length[A].
Algorithm for building a heap:
Build_Max_Heap(A)
i. heapsize[A] ← length[A]
ii. for i ← ⌊length[A]/2⌋ down to 1
iii. do Max_Heapify(A, i)
Algorithm for Max_Heapify:
Max_Heapify(A, i)
i. l ← left(i)
ii. r ← right(i)
iii. if l ≤ heapsize[A] and A[l] > A[i]
iv. then largest ← l
v. else largest ← i
vi. if r ≤ heapsize[A] and A[r] ≥ A[largest]
vii. then largest ← r
viii.if largest ≠ i
ix. then exchange A[i] ↔ A[largest]
x. Max_Heapify(A, largest)
Algorithm for Heapsort:
Heapsort(A)
i. Build_Max_Heap(A)
ii. for i ← length [A] down to 2
iii. do exchange A[1] ↔ A[i]
iv. heapsize[A] ← heapsize[A] – 1
v. Max_Heapify(A, 1)
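 A compact C sketch of Build_Max_Heap, Max_Heapify and Heapsort above, assuming a 0-based array so the children of node i are 2i + 1 and 2i + 2 (the pseudocode is 1-based):

```c
#include <stdio.h>

/* Fix the max-heap property at node i, assuming both subtrees are heaps. */
static void max_heapify(int A[], int heapsize, int i) {
    int l = 2 * i + 1, r = 2 * i + 2, largest = i;
    if (l < heapsize && A[l] > A[largest]) largest = l;
    if (r < heapsize && A[r] > A[largest]) largest = r;
    if (largest != i) {
        int t = A[i]; A[i] = A[largest]; A[largest] = t;
        max_heapify(A, heapsize, largest);  /* repair the affected subtree */
    }
}

static void build_max_heap(int A[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)    /* internal nodes, bottom up */
        max_heapify(A, n, i);
}

static void heap_sort(int A[], int n) {
    build_max_heap(A, n);
    for (int i = n - 1; i >= 1; i--) {
        int t = A[0]; A[0] = A[i]; A[i] = t;  /* move current maximum to the end */
        max_heapify(A, i, 0);                 /* heap size shrinks by one */
    }
}

int main(void) {
    int a[] = {4, 1, 3, 2, 16, 9, 10, 14, 8, 7};
    int n = (int)(sizeof a / sizeof a[0]);
    heap_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);  /* 1 2 3 4 7 8 9 10 14 16 */
    printf("\n");
    return 0;
}
```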
Analysis for Heapsort:
I. Running Time of Max_Heapify:
The running time of Max_Heapify on a sub-tree of size
„n‟ rooted at given node (i) is θ(1) time to fix up the
relationship among the elements A[i], A[left(i)] and
A[right(i)] + the time to run Max_Heapify on a sub-tree
rooted at one of the children of node „i‟.
The children‟s sub-trees each have size at most 2n/3; the worst case occurs when the last row of the tree is exactly half full.
If „n‟ is the heapsize, then T(n) ≤ T(2n/3) + θ(1), which implies T(n) = O(logn) [Using Master Method]. Equivalently, the time required by Max_Heapify when called on a node of height „h‟ is O(h).
II. Running time of Build_Max_Heap:
The running time of Max_Heapify is O(logn), and Heapify is invoked about n/2 times, which gives the simple bound O(nlogn) for Build-Max-Heap. A tighter analysis, charging only O(h) to each node of height „h‟, shows that Build-Max-Heap in fact takes O(n) time.
III. Running time for Heapsort:
The heap sort procedure takes time O(nlogn), since the call to Build-Max-Heap takes time θ(n) and each of the n – 1 calls to Max_Heapify takes O(logn). So the total running time for Heapsort is O(nlogn).
Lower Bound of Sorting:
 Before going to the lower bound for sorting, some important concepts are explained below. These are:
 Internal sorting: It refers to a sorting operation performed over a list which is stored in primary memory.
 External sorting: When the list is stored in a file accommodated in secondary memory, the sorting technique is referred to as external sorting.
 In-place: A sorting algorithm is in-place only if at most a constant number of data elements of the input array are ever stored outside the array, and hence it is possible to sort a large list without the need of additional working storage.
 Stable: Sorting algorithms are commonly classified by two properties: in-place and stable. A sorting algorithm is stable if two elements that are equal remain in the same relative position after performing the sorting.
 In a comparison sort, we use only comparisons between elements to gain order information about an input sequence <a1, a2, … an>; that is, given two elements ai and aj,
 we perform one of the tests ai < aj, ai ≤ aj, ai = aj, ai ≥ aj, or ai > aj to determine their relative order.
 Here the test ai = aj is useless, and the comparisons ai ≤ aj, ai ≥ aj, ai < aj, ai > aj are all equivalent in that they yield identical information about the relative order of „ai‟ and „aj‟. We therefore assume that all comparisons have the form ai ≤ aj.
 Here we will present an abstract model to represent comparison based sorts, referred to as the “Decision Tree Model”.
Comparison Based Sorts: Running Time
Algorithm        Worst Case   Average Case   Best Case   In-place
Insertion Sort   O(n²)        O(n²)          O(n)        √
Merge Sort       O(nlogn)     O(nlogn)       O(nlogn)    X
Heap Sort        O(nlogn)     O(nlogn)       O(nlogn)    √
Quick Sort       O(n²)        O(nlogn)       O(nlogn)    √
The Decision Tree Model:
 The decision tree can represent any comparison based algorithm‟s behavior on inputs of a given size „n‟.
 The decision tree is a full binary tree. Each node in the decision tree corresponds to one of the comparisons in the algorithm. The sorting algorithm starts at the root node and does the first comparison.
i. If ai ≤ aj, then take left branch
ii. If ai > aj, then take right branch
 The whole process is repeated until a leaf is encountered. Each leaf represents one ordering of the input.
 It should be noted that a sorting algorithm is proved correct only if each of the n! permutations of the „n‟ elements appears as one of the leaves of the decision tree, and each of these leaves is reachable from the root node.
Example:
 The decision tree for insertion sort operating on 3 elements. An internal node annotated by i : j indicates a comparison between ai and aj.
 A leaf annotated by the permutation <π(1), π(2), … π(n)>
indicates the ordering aπ(1) ≤ aπ(2) ≤ … aπ(n). The shaded path
indicates the decisions made when sorting the input
sequence (a1 = 6, a2 = 8, a3 = 5).
 The permutation <3, 1, 2> at the leaf indicates that the
sorted ordering is a3 = 5 ≤ a1 = 6 ≤ a2 = 8. There are 3! = 6
possible permutations of the input elements, so the decision
tree must have at least 6 leaves.
[Decision tree for insertion sort on 3 elements: the root compares 1 : 2. On ≤ it goes to a 2 : 3 node, which leads to leaf <1, 2, 3> on ≤ or to a 1 : 3 node with leaves <1, 3, 2> (≤) and <3, 1, 2> (>). On > it goes to a 1 : 3 node, which leads to leaf <2, 1, 3> on ≤ or to a 2 : 3 node with leaves <2, 3, 1> (≤) and <3, 2, 1> (>).]
A Lower Bound for the Worst Case:
 The length of the longest path from the root of a decision
tree to any of its reachable leaves represents the worst case
number of comparisons, that the corresponding sorting
algorithm performs.
 Consequently, the worst-case number of comparisons for a
given comparison sort algorithm equals the height of its
decision tree.
 A lower bound on the heights of all decision trees in which
each permutation appears as a reachable leaf is therefore a
lower bound on the running time of any comparison sorts
algorithm.
 Any comparison sort algorithm requires Ω(nlogn) comparisons in the worst case. This follows from two properties:
i. There must be n! permutation leaves, one corresponding to each possible ordering of the „n‟ elements.
ii. The length (no. of edges) of the longest path in the decision tree, i.e. its height „h‟, equals the worst case number of comparisons of the algorithm (the lower bound on time).
Since a binary tree of height „h‟ has at most 2^h leaves, we need 2^h ≥ n!, hence h ≥ lg(n!) = Ω(nlogn) [using lg(n!) = θ(nlgn)].
 Heap sort and Merge sort are asymptotically optimal
comparison sorts.
 The O(nlogn) upper bounds on the running times of heap sort and merge sort match the Ω(nlogn) worst-case lower bound.
Priority Queue:
 Priority Queue is defined as a set „P‟ of elements where
each element is associated with a key.
 Two variants of priority queue occur: the maximum priority queue and the minimum priority queue.
 The main operations supported by a maximum priority queue are as follows:
i. Insert(P, x)
ii. Maximum(P)
iii. Extract_Maximum(P)
iv. Increase_Key(P, x, k)
Algorithm for Priority
Queue:
1. Algorithm for insert(p, x) in maximum
priority queue:
 This operation inserts the element „x‟ into the set „P‟. That is: P ← P U {x}. The algorithm for this is:
 Procedure_Insert(H, k)
The above procedure inserts an element with
key value „k‟ in a given maximum heap. The
heap size is incremented by 1 after the insertion
of the element with key „k‟.
 Step 1: Incrementing the array size, assuming size does
not exceed the maximum array size.
Set heapsize[H] ← heapsize[H] + 1
 Step 2: Initialization
set i ← heapsize[H]
 Step 3: Loop obtaining proper position
while (i > 1 and H[parent(i)] < k)
set H[i] ← H[parent(i)]
set i ← parent(i)
 Step 4: Insertion
set H[i] ← k
 Step 5: return at the point of call
return
Analysis for insertion:
 It is noticeable that while inserting an element
the process follows the path from a leaf to the
root of the tree. Recall that the height of the tree is
O(logn) which yields the total running time of O(logn).
2. Algorithm for function Maximum(P):
 This operation returns the element having the largest key value from the set „P‟. The algorithm for this is:
 Function_Maximum(H)
The above function returns the element having
largest key value from the given heap.
 Step 1: Return the value at the point of call.
return(H[1])
Analysis for Function Maximum:
 The running time for the above algorithm is θ(1), as
always the largest key value is stored at the root only.
3. Algorithm for Extract_Maximum(P):
 This operation removes and returns the element having the largest key value from the set „P‟. The algorithm for this is:
 Function_Heap_Extract_Maximum(H)
The above function removes and returns the element having the largest key value from the given heap. The heap size is decremented by 1 after removing the element. The function then calls “Heapify” to fix the new heap.
 Step 1: Is empty?
if(heapsize[H] < 1) then
message “underflow heap”
else
goto step 2
 Step 2: Initialization and adjusting the values
set max ← H[1]
set H[1] ← H[heapsize[H]]
set heapsize[H] ← heapsize[H] - 1
 Step 3: Fixing new heap
call to Heapify(H, 1)
 Step 4: return value at the point of call
return(max)
Analysis for Extract_Maximum:
 It can be observed that „Heapify‟ takes O(logn) time, and in the above algorithm it is called only once.
 The rest of the instructions are performed only once,
which takes θ(1) time. Thus, extracting the element
having maximum key value from the heap is
performed in O(logn) time.
4. Algorithm for Increase_Key(P, i, k):
 This operation increases the value of element i‟s key to the new value „k‟, which is assumed to be at least as large as i‟s current key value. The algorithm for this is:
 Procedure_Heap_Increase_Key(H, i, k)
The above procedure increases the value of element i‟s key to the new value „k‟.
 Step 1: Is smaller?
if(k < H[i]) then
message: “new key „k‟ is smaller than the
current key”.
return
else goto step 2
 Step 2: Adjusting the new key „k‟
set H[i] ← k
while (i > 1 and H[parent(i)] < H[i])
exchange H[i] ↔ H[parent(i)]
set i ← parent(i)
 Step 3: return at the point of call
return
Analysis for Increase_Key:
 It can be seen that the above algorithm runs in
O(logn) time.
 The element‟s key is adjusted in O(logn) time, as the path traced runs from the node to the root.
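 A minimal C sketch of the insert and extract-maximum operations above, assuming a fixed-capacity 1-based array heap (the struct name and the capacity are illustrative, and the underflow check is omitted for brevity):

```c
#include <stdio.h>

#define CAPACITY 100

typedef struct { int a[CAPACITY + 1]; int size; } MaxHeap;  /* keys in a[1..size] */

static int parent(int i) { return i / 2; }

/* Insert key k: shift ancestors smaller than k down, as in Step 3 above. */
static void pq_insert(MaxHeap *h, int k) {
    int i = ++h->size;
    while (i > 1 && h->a[parent(i)] < k) {
        h->a[i] = h->a[parent(i)];
        i = parent(i);
    }
    h->a[i] = k;
}

static void max_heapify(MaxHeap *h, int i) {
    int l = 2 * i, r = 2 * i + 1, largest = i;
    if (l <= h->size && h->a[l] > h->a[largest]) largest = l;
    if (r <= h->size && h->a[r] > h->a[largest]) largest = r;
    if (largest != i) {
        int t = h->a[i]; h->a[i] = h->a[largest]; h->a[largest] = t;
        max_heapify(h, largest);
    }
}

/* Remove and return the maximum key; O(logn) due to one Heapify call. */
static int pq_extract_max(MaxHeap *h) {
    int max = h->a[1];
    h->a[1] = h->a[h->size--];
    max_heapify(h, 1);
    return max;
}

int main(void) {
    MaxHeap h = { .size = 0 };
    int keys[] = {15, 13, 9, 5, 12, 8, 7};
    for (int i = 0; i < 7; i++) pq_insert(&h, keys[i]);
    int first = pq_extract_max(&h);
    int second = pq_extract_max(&h);
    printf("%d %d\n", first, second);   /* 15 13 */
    return 0;
}
```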
Counting Sort:
 Counting Sort assumes that each of the „n‟ input elements is an integer in the range 0 to k. When k = O(n), the sort runs in θ(n) time.
 The basic idea of counting sort is to determine, for each
input element „x‟, the number of elements less than x. This
is used to place element „x‟ directly into its position in the
output array.
 The algorithm for counting sort is;
Counting_Sort(A, B, k)
i. for i ← 0 to k
ii. do c[i] ← 0
iii. for j ← 1 to length[A]
iv. do c[A[j]] ← c[A[j]] + 1
v. (c[i] now contains the number of elements equal to i)
vi. for i ← 1 to k
vii. do c[i] ← c[i] + c[i – 1]
viii. (c[i] now contains the number of elements less than or equal to i)
ix. for j ← length[A] down to 1
x. do B[c[A[j]]] ← A[j]
xi. c[A[j]] ← c[A[j]] – 1
 Here, in the code for counting sort, A[1…n] is the input array with length[A] = n, array B[1…n] holds the sorted output, array c[0…k] is temporary working storage, and „k‟ bounds the element values, i.e. every element lies in the range 0 to k.
 Counting sort is a stable sort and is used in Radix sort.
 Analysis of Counting sort:
 It can be observed that two for loops of size „k‟ and two for loops of size length[A] = n exist in the above algorithm. Thus the running time for the counting sort is O(n + k).
 Since it is a non-comparison sort, it can beat the lower bound of Ω(nlogn): for k = O(n) it runs in linear θ(n) time. [One can still get close to O(n) time for larger „k‟ using hash tables.]
 Example: A = ⟨2, 5, 3, 0, 2, 3, 0, 3⟩ (indices 1…8), k = 5.
(i) After the counting loop: c = [2, 0, 2, 3, 0, 1] (indices 0…5).
(ii) After the prefix sums: c = [2, 2, 4, 7, 7, 8].
(iii) Scanning A from right to left: placing A[8] = 3 gives B[7] = 3 and c = [2, 2, 4, 6, 7, 8]; placing A[7] = 0 gives B[2] = 0 and c = [1, 2, 4, 6, 7, 8]; placing A[6] = 3 gives B[6] = 3 and c = [1, 2, 4, 5, 7, 8]; and so on.
(iv) Final output: B = [0, 0, 2, 2, 3, 3, 3, 5].
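 A runnable C sketch of Counting_Sort above on the same example, using 0-based arrays (names are illustrative):

```c
#include <stdio.h>
#include <string.h>

/* A[0..n-1] holds integers in 0..k; B receives the stable sorted output. */
static void counting_sort(const int A[], int B[], int n, int k) {
    int C[k + 1];                        /* temporary working storage */
    memset(C, 0, sizeof C);
    for (int j = 0; j < n; j++)          /* C[i] = number of elements == i */
        C[A[j]]++;
    for (int i = 1; i <= k; i++)         /* C[i] = number of elements <= i */
        C[i] += C[i - 1];
    for (int j = n - 1; j >= 0; j--) {   /* right to left keeps the sort stable */
        B[C[A[j]] - 1] = A[j];
        C[A[j]]--;
    }
}

int main(void) {
    int A[] = {2, 5, 3, 0, 2, 3, 0, 3}, B[8];
    counting_sort(A, B, 8, 5);
    for (int i = 0; i < 8; i++) printf("%d ", B[i]);  /* 0 0 2 2 3 3 3 5 */
    printf("\n");
    return 0;
}
```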
Radix Sort:
 In the Radix Sort technique, the list consists of „n‟ integers and each integer has „d‟ digits (in any base).
 We sort repeatedly, starting at the lowest order digit and finishing with the highest order digit.
 It is essential that the per-digit sort is stable: if the numbers are already sorted with respect to the low order digits, then after we sort with respect to the high order digits they remain sorted with respect to their lower order digits.
 The algorithm for Radix Sort:
Radix_Sort(A, d)
i. for i ← 1 to d
ii. do use a stable sort to sort array „A‟ on digit „i‟.
 Analysis of Radix sort:
 Let the running time of the stable (internal) sort be Ts(n). We know that counting sort runs in O(k + n) time, so for counting sort Ts(n) = O(k + n), and for d digits the total is O(d · Ts(n)) = O(d(k + n)).
 If d = O(1) and k = O(n), then the total is O(n). If d = O(logn) and k = 2, then the total is O(d(k + n)) = O(nlogn).
 Example: the elements for radix sort, shown pass by pass:
Input:                               725 831 711 215 055 783 222 444 303 125 110 324
After sorting on digit 1 (units):    110 831 711 222 783 303 444 324 725 215 055 125
After sorting on digit 2 (tens):     303 110 711 215 222 324 725 125 831 444 055 783
After sorting on digit 3 (hundreds): 055 110 125 215 222 303 324 444 711 725 783 831
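 A C sketch of Radix_Sort above for base-10 integers, using one stable counting-sort pass per digit (the helper counting_pass is illustrative, not part of the original pseudocode):

```c
#include <stdio.h>
#include <string.h>

/* One stable counting-sort pass on the digit selected by exp
   (exp = 1, 10, 100, ... for the 1st, 2nd, 3rd digit). */
static void counting_pass(int A[], int n, int exp) {
    int B[n], C[10] = {0};
    for (int j = 0; j < n; j++) C[(A[j] / exp) % 10]++;
    for (int i = 1; i < 10; i++) C[i] += C[i - 1];
    for (int j = n - 1; j >= 0; j--) {       /* right to left: stability */
        int d = (A[j] / exp) % 10;
        B[--C[d]] = A[j];
    }
    memcpy(A, B, n * sizeof A[0]);
}

static void radix_sort(int A[], int n, int d) {
    int exp = 1;
    for (int i = 1; i <= d; i++, exp *= 10)  /* lowest order digit first */
        counting_pass(A, n, exp);
}

int main(void) {
    int a[] = {725, 831, 711, 215, 55, 783, 222, 444, 303, 125, 110, 324};
    int n = (int)(sizeof a / sizeof a[0]);
    radix_sort(a, n, 3);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");   /* 55 110 125 215 222 303 324 444 711 725 783 831 */
    return 0;
}
```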
Bucket Sort:
 In Bucket Sort, the assumption is that the input elements are distributed uniformly over some known range, for example [0, 1).
 The basic idea of this sorting technique is to divide the interval [0, 1) into „n‟ equal-sized sub-intervals, or buckets, and then distribute the „n‟ numbers into the buckets so created. The algorithm for this is:
Bucket_Sort(A)
i. n ← length[A]
ii. for i ← 1 to n
iii. do insert A[i] into list B[⌊n · A[i]⌋]
iv. for i ← 0 to n – 1
v. do sort list B[i] with insertion sort
vi. concatenate the lists B[0], B[1], … B[n – 1] together
in order.
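 A small C sketch of Bucket_Sort above; for brevity each bucket is a fixed-size array sorted with insertion sort rather than a linked list (sizes and names are illustrative):

```c
#include <stdio.h>

#define N 10   /* number of elements = number of buckets */

/* Plain insertion sort, used on each (expectedly small) bucket. */
static void insertion_sort(double a[], int n) {
    for (int i = 1; i < n; i++) {
        double key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}

/* Bucket sort for N doubles uniformly distributed in [0, 1):
   element x goes into bucket floor(N * x). */
static void bucket_sort(double A[]) {
    double bucket[N][N];                 /* worst case: one bucket gets all N */
    int count[N] = {0};
    for (int i = 0; i < N; i++) {
        int b = (int)(N * A[i]);
        bucket[b][count[b]++] = A[i];
    }
    int k = 0;
    for (int b = 0; b < N; b++) {        /* sort each bucket, then concatenate */
        insertion_sort(bucket[b], count[b]);
        for (int j = 0; j < count[b]; j++) A[k++] = bucket[b][j];
    }
}

int main(void) {
    double a[N] = {0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68};
    bucket_sort(a);
    for (int i = 0; i < N; i++) printf("%.2f ", a[i]);
    printf("\n");   /* 0.12 0.17 0.21 0.23 0.26 0.39 0.68 0.72 0.78 0.94 */
    return 0;
}
```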
 Analysis of Bucket sort:
 It can be observed that except the sorting using insertion
sort, all instructions are executed in O(n) time.
 Total „n‟ calls are made to the insertion sort. So in order
to calculate the cost of calling the insertion sort, consider
„ni‟ a random variable which denotes the number of
elements placed in bucket B[i].
 Recall that insertion sort runs in quadratic time. Thus, the running time for bucket sort is:
T(n) = θ(n) + Σi=0..n–1 O(ni²)
 The expected time to sort the elements in the buckets is:
E[T(n)] = E[θ(n) + Σi=0..n–1 O(ni²)]
= θ(n) + Σi=0..n–1 E[O(ni²)]
= θ(n) + Σi=0..n–1 O(E[ni²])
[By linearity of expectation]
 We know that, there are „n‟ elements and „n‟ buckets, so
the probability that an element acquires bucket B[i] is 1/n.
 For the binomial distribution B(k : n, p) where ni = k and
p = 1/n.
 The expected value of the random variable „ni‟ is:
E[ni] = np = n · (1/n) = 1.
 The variance is calculated as:
var[ni] = np(1 – p) = n.1/n(1 – 1/n) = 1 – 1/n.
 It is noticeable that for any random variable:
E[ni²] = var[ni] + E²[ni] = (1 – 1/n) + 1² = 2 – 1/n.
 The expected time for the Bucket sort is:
T(n) = θ(n) + Σi=0..n–1 O(2 – 1/n) = θ(n)
 Thus, the entire Bucket sort algorithm runs in θ(n) linear
expected time.
Non-Comparison Based Sorts: Running Time
Algorithm       Worst Case    Average Case   Best Case     In-place
Counting Sort   O(n + k)      O(n + k)       O(n + k)      X
Radix Sort      O(d(n + k))   O(d(n + k))   O(d(n + k))   –
Bucket Sort     –             O(n)           –             –

More Related Content

What's hot

Asymptotic Notations
Asymptotic NotationsAsymptotic Notations
Asymptotic NotationsRishabh Soni
 
Divide and Conquer - Part 1
Divide and Conquer - Part 1Divide and Conquer - Part 1
Divide and Conquer - Part 1Amrinder Arora
 
Design and Analysis of Algorithms.pptx
Design and Analysis of Algorithms.pptxDesign and Analysis of Algorithms.pptx
Design and Analysis of Algorithms.pptxSyed Zaid Irshad
 
Bruteforce algorithm
Bruteforce algorithmBruteforce algorithm
Bruteforce algorithmRezwan Siam
 
DESIGN AND ANALYSIS OF ALGORITHMS
DESIGN AND ANALYSIS OF ALGORITHMSDESIGN AND ANALYSIS OF ALGORITHMS
DESIGN AND ANALYSIS OF ALGORITHMSGayathri Gaayu
 
Introduction to data structures and Algorithm
Introduction to data structures and AlgorithmIntroduction to data structures and Algorithm
Introduction to data structures and AlgorithmDhaval Kaneria
 
Lecture 2 role of algorithms in computing
Lecture 2   role of algorithms in computingLecture 2   role of algorithms in computing
Lecture 2 role of algorithms in computingjayavignesh86
 
Design and Analysis of Algorithms
Design and Analysis of AlgorithmsDesign and Analysis of Algorithms
Design and Analysis of AlgorithmsSwapnil Agrawal
 
Algorithms Lecture 1: Introduction to Algorithms
Algorithms Lecture 1: Introduction to AlgorithmsAlgorithms Lecture 1: Introduction to Algorithms
Algorithms Lecture 1: Introduction to AlgorithmsMohamed Loey
 
Performance analysis(Time & Space Complexity)
Performance analysis(Time & Space Complexity)Performance analysis(Time & Space Complexity)
Performance analysis(Time & Space Complexity)swapnac12
 
Fundamentals of the Analysis of Algorithm Efficiency
Fundamentals of the Analysis of Algorithm EfficiencyFundamentals of the Analysis of Algorithm Efficiency
Fundamentals of the Analysis of Algorithm EfficiencySaranya Natarajan
 
Time and space complexity
Time and space complexityTime and space complexity
Time and space complexityAnkit Katiyar
 
8 queens problem using back tracking
8 queens problem using back tracking8 queens problem using back tracking
8 queens problem using back trackingTech_MX
 
Syntax directed translation
Syntax directed translationSyntax directed translation
Syntax directed translationAkshaya Arunan
 

What's hot (20)

Asymptotic Notations
Asymptotic NotationsAsymptotic Notations
Asymptotic Notations
 
Divide and Conquer - Part 1
Divide and Conquer - Part 1Divide and Conquer - Part 1
Divide and Conquer - Part 1
 
Design and Analysis of Algorithms.pptx
Design and Analysis of Algorithms.pptxDesign and Analysis of Algorithms.pptx
Design and Analysis of Algorithms.pptx
 
Data Structure and Algorithm - Divide and Conquer
Data Structure and Algorithm - Divide and ConquerData Structure and Algorithm - Divide and Conquer
Data Structure and Algorithm - Divide and Conquer
 
Bruteforce algorithm
Bruteforce algorithmBruteforce algorithm
Bruteforce algorithm
 
DESIGN AND ANALYSIS OF ALGORITHMS
DESIGN AND ANALYSIS OF ALGORITHMSDESIGN AND ANALYSIS OF ALGORITHMS
DESIGN AND ANALYSIS OF ALGORITHMS
 
Introduction to data structures and Algorithm
Introduction to data structures and AlgorithmIntroduction to data structures and Algorithm
Introduction to data structures and Algorithm
 
Lecture 2 role of algorithms in computing
Lecture 2   role of algorithms in computingLecture 2   role of algorithms in computing
Lecture 2 role of algorithms in computing
 
Daa
DaaDaa
Daa
 
Randomized algorithms ver 1.0
Randomized algorithms ver 1.0Randomized algorithms ver 1.0
Randomized algorithms ver 1.0
 
Design and Analysis of Algorithms
Design and Analysis of AlgorithmsDesign and Analysis of Algorithms
Design and Analysis of Algorithms
 
asymptotic notation
asymptotic notationasymptotic notation
asymptotic notation
 
Greedy algorithms
Greedy algorithmsGreedy algorithms
Greedy algorithms
 
Algorithms Lecture 1: Introduction to Algorithms
Algorithms Lecture 1: Introduction to AlgorithmsAlgorithms Lecture 1: Introduction to Algorithms
Algorithms Lecture 1: Introduction to Algorithms
 
Performance analysis(Time & Space Complexity)
Performance analysis(Time & Space Complexity)Performance analysis(Time & Space Complexity)
Performance analysis(Time & Space Complexity)
 
Fundamentals of the Analysis of Algorithm Efficiency
Fundamentals of the Analysis of Algorithm EfficiencyFundamentals of the Analysis of Algorithm Efficiency
Fundamentals of the Analysis of Algorithm Efficiency
 
Time and space complexity
Time and space complexityTime and space complexity
Time and space complexity
 
8 queens problem using back tracking
8 queens problem using back tracking8 queens problem using back tracking
8 queens problem using back tracking
 
Complexity analysis in Algorithms
Complexity analysis in AlgorithmsComplexity analysis in Algorithms
Complexity analysis in Algorithms
 
Syntax directed translation
Syntax directed translationSyntax directed translation
Syntax directed translation
 

Viewers also liked

Quick sort algo analysis
Quick sort algo analysisQuick sort algo analysis
Quick sort algo analysisNargis Ehsan
 
Algorithm: Quick-Sort
Algorithm: Quick-SortAlgorithm: Quick-Sort
Algorithm: Quick-SortTareq Hasan
 
Algorithms Lecture 3: Analysis of Algorithms II
Algorithms Lecture 3: Analysis of Algorithms IIAlgorithms Lecture 3: Analysis of Algorithms II
Algorithms Lecture 3: Analysis of Algorithms IIMohamed Loey
 
The False-Position Method
The False-Position MethodThe False-Position Method
The False-Position MethodTayyaba Abbas
 
Social Psychology: Introduction: Lecture1
Social Psychology: Introduction: Lecture1Social Psychology: Introduction: Lecture1
Social Psychology: Introduction: Lecture1James Neill
 
software engineering
software engineeringsoftware engineering
software engineeringramyavarkala
 
Introduction to Numerical Analysis
Introduction to Numerical AnalysisIntroduction to Numerical Analysis
Introduction to Numerical AnalysisMohammad Tawfik
 
Applications of numerical methods
Applications of numerical methodsApplications of numerical methods
Applications of numerical methodsTarun Gehlot
 
Software Engineering ppt
Software Engineering pptSoftware Engineering ppt
Software Engineering pptshruths2890
 
Filipino psychology concepts and methods
Filipino psychology   concepts and methodsFilipino psychology   concepts and methods
Filipino psychology concepts and methodsyanloveaprilbordador
 
Regula falsi method
Regula falsi methodRegula falsi method
Regula falsi methodandrushow
 
Software Engineering UPTU
Software Engineering UPTUSoftware Engineering UPTU
Software Engineering UPTURishi Shukla
 
Social Psychology - Social Influence
Social Psychology - Social InfluenceSocial Psychology - Social Influence
Social Psychology - Social InfluenceSavipra Gorospe
 
Psychology: Motivation,Types of Motivation & Theories of Motivation
Psychology: Motivation,Types of Motivation & Theories of MotivationPsychology: Motivation,Types of Motivation & Theories of Motivation
Psychology: Motivation,Types of Motivation & Theories of MotivationPriyanka Nain
 

Viewers also liked (20)

chapter 1
chapter 1chapter 1
chapter 1
 
Quick sort algo analysis
Quick sort algo analysisQuick sort algo analysis
Quick sort algo analysis
 
Daa unit 2
Daa unit 2Daa unit 2
Daa unit 2
 
Algorithm: Quick-Sort
Algorithm: Quick-SortAlgorithm: Quick-Sort
Algorithm: Quick-Sort
 
Attachment
AttachmentAttachment
Attachment
 
Algorithms Lecture 3: Analysis of Algorithms II
Algorithms Lecture 3: Analysis of Algorithms IIAlgorithms Lecture 3: Analysis of Algorithms II
Algorithms Lecture 3: Analysis of Algorithms II
 
bisection method
bisection methodbisection method
bisection method
 
Historyandtypespsych[1]
Historyandtypespsych[1]Historyandtypespsych[1]
Historyandtypespsych[1]
 
The False-Position Method
The False-Position MethodThe False-Position Method
The False-Position Method
 
Social Psychology: Introduction: Lecture1
Social Psychology: Introduction: Lecture1Social Psychology: Introduction: Lecture1
Social Psychology: Introduction: Lecture1
 
General Psychology: Chapter 1
General Psychology: Chapter 1General Psychology: Chapter 1
General Psychology: Chapter 1
 
software engineering
software engineeringsoftware engineering
software engineering
 
Introduction to Numerical Analysis
Introduction to Numerical AnalysisIntroduction to Numerical Analysis
Introduction to Numerical Analysis
 
Applications of numerical methods
Applications of numerical methodsApplications of numerical methods
Applications of numerical methods
 
Software Engineering ppt
Software Engineering pptSoftware Engineering ppt
Software Engineering ppt
 
Filipino psychology concepts and methods
Filipino psychology   concepts and methodsFilipino psychology   concepts and methods
Filipino psychology concepts and methods
 
Regula falsi method
Regula falsi methodRegula falsi method
Regula falsi method
 
Software Engineering UPTU
Software Engineering UPTUSoftware Engineering UPTU
Software Engineering UPTU
 
Social Psychology - Social Influence
Social Psychology - Social InfluenceSocial Psychology - Social Influence
Social Psychology - Social Influence
 
Psychology: Motivation,Types of Motivation & Theories of Motivation
Psychology: Motivation,Types of Motivation & Theories of MotivationPsychology: Motivation,Types of Motivation & Theories of Motivation
Psychology: Motivation,Types of Motivation & Theories of Motivation
 

Similar to Daa notes 1

Algorithm Analysis.pdf
Algorithm Analysis.pdfAlgorithm Analysis.pdf
Algorithm Analysis.pdfMemMem25
 
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMS
TIME EXECUTION   OF  DIFFERENT SORTED ALGORITHMSTIME EXECUTION   OF  DIFFERENT SORTED ALGORITHMS
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMSTanya Makkar
 
DAAMOD12hjsfgi haFIUAFKJNASFQF MNDAF.pdf
DAAMOD12hjsfgi haFIUAFKJNASFQF MNDAF.pdfDAAMOD12hjsfgi haFIUAFKJNASFQF MNDAF.pdf
DAAMOD12hjsfgi haFIUAFKJNASFQF MNDAF.pdfOnkarSalunkhe5
 
Chapter 1 Data structure.pptx
Chapter 1 Data structure.pptxChapter 1 Data structure.pptx
Chapter 1 Data structure.pptxwondmhunegn
 
Daa presentation 97
Daa presentation 97Daa presentation 97
Daa presentation 97Garima Verma
 
Algorithm analysis in fundamentals of data structure
Algorithm analysis in fundamentals of data structureAlgorithm analysis in fundamentals of data structure
Algorithm analysis in fundamentals of data structureVrushali Dhanokar
 
Algorithm Analysis.pdf
Algorithm Analysis.pdfAlgorithm Analysis.pdf
Algorithm Analysis.pdfNayanChandak1
 
Unit i basic concepts of algorithms
Unit i basic concepts of algorithmsUnit i basic concepts of algorithms
Unit i basic concepts of algorithmssangeetha s
 
Introduction to Data Structure and algorithm.pptx
Introduction to Data Structure and algorithm.pptxIntroduction to Data Structure and algorithm.pptx
Introduction to Data Structure and algorithm.pptxesuEthopi
 
DA lecture 3.pptx
DA lecture 3.pptxDA lecture 3.pptx
DA lecture 3.pptxSayanSen36
 
Aad introduction
Aad introductionAad introduction
Aad introductionMr SMAK
 
Performance analysis and randamized agoritham
Performance analysis and randamized agorithamPerformance analysis and randamized agoritham
Performance analysis and randamized agorithamlilyMalar1
 
Design & Analysis of Algorithm course .pptx
Design & Analysis of Algorithm course .pptxDesign & Analysis of Algorithm course .pptx
Design & Analysis of Algorithm course .pptxJeevaMCSEKIOT
 

Similar to Daa notes 1 (20)

Algorithm Analysis.pdf
Algorithm Analysis.pdfAlgorithm Analysis.pdf
Algorithm Analysis.pdf
 
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMS
TIME EXECUTION   OF  DIFFERENT SORTED ALGORITHMSTIME EXECUTION   OF  DIFFERENT SORTED ALGORITHMS
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMS
 
DAAMOD12hjsfgi haFIUAFKJNASFQF MNDAF.pdf
DAAMOD12hjsfgi haFIUAFKJNASFQF MNDAF.pdfDAAMOD12hjsfgi haFIUAFKJNASFQF MNDAF.pdf
DAAMOD12hjsfgi haFIUAFKJNASFQF MNDAF.pdf
 
Chapter 1 Data structure.pptx
Chapter 1 Data structure.pptxChapter 1 Data structure.pptx
Chapter 1 Data structure.pptx
 
Introduction to algorithms
Introduction to algorithmsIntroduction to algorithms
Introduction to algorithms
 
Daa presentation 97
Daa presentation 97Daa presentation 97
Daa presentation 97
 
Algorithm analysis in fundamentals of data structure
Algorithm analysis in fundamentals of data structureAlgorithm analysis in fundamentals of data structure
Algorithm analysis in fundamentals of data structure
 
Analyzing algorithms
Analyzing algorithmsAnalyzing algorithms
Analyzing algorithms
 
Algorithm Analysis.pdf
Algorithm Analysis.pdfAlgorithm Analysis.pdf
Algorithm Analysis.pdf
 
Unit i basic concepts of algorithms
Unit i basic concepts of algorithmsUnit i basic concepts of algorithms
Unit i basic concepts of algorithms
 
Analysis of algorithms
Analysis of algorithmsAnalysis of algorithms
Analysis of algorithms
 
Introduction to Data Structure and algorithm.pptx
Introduction to Data Structure and algorithm.pptxIntroduction to Data Structure and algorithm.pptx
Introduction to Data Structure and algorithm.pptx
 
DA lecture 3.pptx
DA lecture 3.pptxDA lecture 3.pptx
DA lecture 3.pptx
 
Analysis algorithm
Analysis algorithmAnalysis algorithm
Analysis algorithm
 
Aad introduction
Aad introductionAad introduction
Aad introduction
 
Python algorithm
Python algorithmPython algorithm
Python algorithm
 
Unit 1.pptx
Unit 1.pptxUnit 1.pptx
Unit 1.pptx
 
Unit ii algorithm
Unit   ii algorithmUnit   ii algorithm
Unit ii algorithm
 
Performance analysis and randamized agoritham
Performance analysis and randamized agorithamPerformance analysis and randamized agoritham
Performance analysis and randamized agoritham
 
Design & Analysis of Algorithm course .pptx
Design & Analysis of Algorithm course .pptxDesign & Analysis of Algorithm course .pptx
Design & Analysis of Algorithm course .pptx
 

More from smruti sarangi

Software engineering study materials
Software engineering study materialsSoftware engineering study materials
Software engineering study materialssmruti sarangi
 
Computer graphics notes
Computer graphics notesComputer graphics notes
Computer graphics notessmruti sarangi
 
Data structure using c module 1
Data structure using c module 1Data structure using c module 1
Data structure using c module 1smruti sarangi
 
Data structure using c module 2
Data structure using c module 2Data structure using c module 2
Data structure using c module 2smruti sarangi
 
Data structure using c module 3
Data structure using c module 3Data structure using c module 3
Data structure using c module 3smruti sarangi
 

More from smruti sarangi (7)

Daa notes 3
Daa notes 3Daa notes 3
Daa notes 3
 
Daa notes 2
Daa notes 2Daa notes 2
Daa notes 2
 
Software engineering study materials
Software engineering study materialsSoftware engineering study materials
Software engineering study materials
 
Computer graphics notes
Computer graphics notesComputer graphics notes
Computer graphics notes
 
Data structure using c module 1
Data structure using c module 1Data structure using c module 1
Data structure using c module 1
 
Data structure using c module 2
Data structure using c module 2Data structure using c module 2
Data structure using c module 2
 
Data structure using c module 3
Data structure using c module 3Data structure using c module 3
Data structure using c module 3
 

Recently uploaded

Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...christianmathematics
 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentationcamerronhm
 
Dyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptxDyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptxcallscotland1987
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfAdmir Softic
 
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxSKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxAmanpreet Kaur
 
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptxMaritesTamaniVerdade
 
How to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSHow to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSCeline George
 
Micro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfMicro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfPoh-Sun Goh
 
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...ZurliaSoop
 
Food safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdfFood safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdfSherif Taha
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibitjbellavia9
 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseAnaAcapella
 
ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.MaryamAhmad92
 
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin ClassesCeline George
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 
Towards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptxTowards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptxJisc
 
The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxheathfieldcps1
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfagholdier
 

Recently uploaded (20)

Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentation
 
Dyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptxDyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptx
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxSKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
 
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
 
How to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSHow to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POS
 
Micro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfMicro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdf
 
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
 
Food safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdfFood safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdf
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibit
 
Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
 
ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.
 
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Towards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptxTowards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptx
 
The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptx
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
 

Daa notes 1

  • 1. Design and Analysis of Algorithms Prepared By: Smruti Smaraki Sarangi Asst. Professor IMS Unison University, Dehradun Module - 1
  • 2. Algorithm:  An algorithm is a process to solve a problem manually in sequence with finite number of steps.  It is a set of rules that must be followed when solving a specific problem.  It is a well-defined computational procedure which takes some value or set of values as input and generates some set of value as output.  So, an algorithm is defined as a finite sequence of computational steps, that transforms to given input into the output for a given problem.
  • 3.  An algorithm is considered to be correct, if for every input instance, it generates the correct output and gets terminated.  So a correct algorithm solves a given computational problem and gives the desired output.  The main objectives of algorithm is  To solve a problem manually in sequence with finite no. of steps  For designing an algorithm, we need to construct an efficient solution for a problem.
  • 4. Why we read Algorithm:  We read the algorithm.  For testing the good programmer in 2 levels. That is: i. Macro-level (Implementation of algorithm like s/w) ii. Micro-level (Algorithm part like h/w)  For designing an algorithm, we have to learn to solve computational problem in economics.  By applying mathematical algebra, we get computational logic.
  • 5. Algorithm Paradigm:  It includes 4 steps. That is:  Design of algorithm  Algorithm validation  Analysis of algorithms  Algorithm testing 1. Design of algorithm:  Various designing techniques are available which yield good and useful algorithm.  These techniques are not only applicable to only computer science, but also to other areas, such as operation research and electrical engineering.
  • 6.  The techniques are: divide and conquer, incremental approach, dynamic programming etc. By studying this we can formulate good algorithm. 2. Algorithm validation:  Algorithm validation checks the algorithm result for all legal set of input.  After designing, it is necessary to check the algorithm, whether it computes the correct and desired result or not for all possible legal set of input.  Here the algorithm is not converted into the program. But after showing the validity of the method, a program is written. This is known as “program providing” or “program verification”.  Here we check the program output for all possible set of input.
  • 7.  It requires that, each statement should be precisely defined and all basic operations can be correctly provided. 3. Analysis of algorithms:  The analysis of algorithm focuses on time complexity or space complexity.  The amount of memory needed by program to run to completion is referred to as space complexity.  The amount of time needed by an algorithm to run to completion is referred to as time complexity.  For an algorithm time complexity depends upon the size of the input, thus is a function of input size „n‟.  Usually, we deal with the best case time, average case time and worst case time for an algorithm.
  • 8.  The minimum amount of time that an algorithm requires for an input size „n‟, is referred to as Best Case Time Complexity.  Average Case Time Complexity is the execution of an algorithm having typical input data of size „n‟.  The maximum amount of time needed by an algorithm for an input size „n‟ is referred to as Worst Case Time Complexity. 4. Algorithm testing:  This phase involves testing of a program. It consists of two phases. That is: Debugging and Performance Measurement.  Debugging is the process of finding and correcting the cause at variance with the desired and observed behaviors.
  • 9.  Debugging can only point to the presence of errors, but not their absence.  The Performance Measurement or Profiling precise by described the correct program execution for all possible data sets and it takes time and space to compute results. NOTES:  While designing and analyzing an algorithm, two fundamental issue to be considered. That is: 1. Correctness of the algorithm 2. Efficiency of the algorithm  While designing the algorithm, it should be clear, simple and should be unambiguous.  The characteristics of algorithm is: finiteness, definiteness, efficiency, input and output.
  • 10. Analysis of Algorithms:  Analysis of algorithms depend upon various factors, such as memory, communication bandwidth or computer hardware. But the most often used is the computational time that an algorithm requires for completing the given task.  As algorithms are machine and language independent these are the only important, durable and original parts of computer science. Thus, we will do all our design and implementation for the RAM model of computation.  In RAM model, all instructions are executed sequentially one after another with concurrent operations. In performing simple operations like addition, subtraction, assignment etc. model takes 1 step.
  • 11.  A call to a subroutine and loops are not single step operation. Instead each memory access takes exactly one step. By counting the number of steps the running time of an algorithm is reassured.  The analysis of an algorithm focuses on the time and space complexity. The space complexity refers to the amount of memory required by an algorithm to run completion.  Time complexity is a function of input size „n‟. It is referred to as the amount of time required by an algorithm to run to completion.  Perhaps different time can arise for the same algorithm, we usually refer best case, average case, worst case complexity.
  • 12. 1. Worst-Case Time Complexity:  The worst-case time complexity is a function defined by the maximum amount of time needed by an algorithm for an input size „n‟. Thus, it is the function defined by the maximum no. of steps taken on any instance of size „n‟.  A worst case estimate is normally computed, because it provides an upper bound for all inputs including particularly the bad ones. N 21 Worst Case Average Case Best Case No. of Steps
  • 13. 2. Average-Case Time Complexity:  The average case time complexity is the execution of an algorithm having typical input data of size „n‟, thus if the function by the average no. of steps taken on any instance of size „n‟.  Average-case analysis does not provide the upper- bound and it is difficult to compute. 3. Best-Case Time Complexity:  The best-case time complexity is the maximum amount of time that an algorithm requires for an input of size „n‟, thus it is the function defined by the minimum no. of steps taken on any instance of size „n‟.  All this time complexities define a numerical function time ~ size.
  • 14. Calculation of Running Time:  There are several ways to estimate the running time of a program. If 2 programs are expected to take similar times, probably the best way to decide which is faster is to code them both up and run them.  Generally there are several algorithmic ideas and we would like to estimate the bad ones early.  So, an analysis is usually required furthermore the ability to do an analysis usually provides insight into designing efficient algorithm.  The analysis also generally pin points, which are worth coding carefully.
  • 15.  To simplify the analysis, we will adopt the conversion that there are no particular units of time. Thus, we throw away low ordered terms.  So, what we are essentially doing is computing a big oh (O) running time. Since big oh (O) is an upper bound, we must be careful never to underestimate the running time of the program.  In effect, the answer provided is a guarantee that the program will terminate within a certain time period. The program may stop earlier than this, but never later.
  • 16. Analyzing the control structure:  Sequencing:  Let „P1‟ and „P2‟ be two fragments of an algorithm they may be single instructions or complicated sub- algorithms. Let „t1‟ and „t2‟ be the times taken by „P1‟ and „P2‟ respectively. These times may depend on various parameters such as the instance size.  The sequencing root says that the time required computing P1, P2. i.e.: 1st P1 then P2 is simply t1 + t2 by maximum rule, the time will be O(max(t1, t2)).  E.g.: i) t1 = θ(n), t2 = θ(n2). So, the computational time is: t2 = θ(n2). ii) if t1 = θ(n), t2 = θ(n2) => t2 = O(n2). iii) if t1 = O(n), t2 = θ(n2) => t2 = θ(n2).
  • 17. Analyzing the control structure:  Sequencing:  Let „P1‟ and „P2‟ be two fragments of an algorithm they may be single instructions or complicated sub- algorithms. Let „t1‟ and „t2‟ be the times taken by „P1‟ and „P2‟ respectively. These times may depend on various parameters such as the instance size.  The sequencing root says that the time required computing P1, P2. i.e.: 1st P1 then P2 is simply t1 + t2 by maximum rule, the time will be O(max(t1, t2)).  E.g.: i) t1 = θ(n), t2 = θ(n2). So, the computational time is: t2 = θ(n2). ii) if t1 = θ(n), t2 = θ(n2) => t2 = O(n2). iii) if t1 = O(n), t2 = θ(n2) => t2 = θ(n2).
  • 18.  Add the time individual statements. The maximum is the one that count.  If then else:  Again consider P1 and P2 be the parts of an algorithm, with computation time t1 and t2 respectively. Now „P1‟ is compared only when the given condition is true. Otherwise for the false condition „P2‟ is computed. Thus, the total time is according to the conditional rule „if then else‟.  According to the maximum rule this computation time is: max(t1, t2). E.g.: i) Suppose P1 = t1 = θ(n), P2 = t2 = θ(n2) => T(n) = θ(n2). ii) t1 = O(n2), t2 = θ(n2) => T(n) = O(n2) or θ(n2) => O(n2)
  • 19.  For Loop:  It is to be noted that P(i) is computed for each iteration from i ← 1 to m. If the value of „m‟ is zero, then we are considering that „m‟ does not generate any error instead the loop is terminated without doing anything. E.g.: for i ← 1 to m { P(i) }  If P(i) takes any constant time „t‟, for its computation then for „m‟ iterations, the total time for the loop is simply „mt‟. Here, we are not considering the loop control. As we know that for loop can be expressed as:
  • 20. while(i ≤ m) { P(i) i ← i + 1 }  The test condition, the assignment instruction and sequencing operation (goto: implicit in while loop) are considered at unit cost for simplicity. Suppose if all these operations are bounded by „c‟ then the computation time for the loop is bundled above by: T(n) ≤ c : for i ← 1 (m + 1)c: test condition i ≤ m. mt: for execution of P(i). mc: for execution of i ← i + 1. mc: for the sequencing operation.
  • 21.  T ≤ (t + 3c)m + 2c. If „c‟ is very small relative to „t‟, then the computational time for the loop is bounded above by T(n) ≤ mt.  Now, if the computation time „ti‟ for P(i) varies as a function of „i‟, then total computation time for the loop, (after neglecting loop control) is given not by multiplication, but by a sum. for i ← 1 to m { P(i) } => T(n) = 𝑡𝑖 𝑚 𝑖=1
  • 22. E.g.: for i ← 1 to m { sum ← sum + t[i] } Total time = 𝑡𝑖 𝑚 𝑖=1 = θ(1)𝑚 𝑖=1 = θ( 1𝑚 𝑖=1 ) = θ(m)  If the algorithm consists of nested for loop, then the total time is: for i ← 1 to m { for j ← 1 to m { P(i j) } } m ∑ i = 1 m ∑ j = 1 tij=> T(n) =
  • 23.  While Loop:  The while loops are difficult to analyze in comparison to for loops, as in these there is no obvious method which determines how many times we shall have to repeat the loops.  The simple technique for analyzing the loops is to firstly determine functions of variables involve whose value decreases each time. Secondly for determining the loop it is necessary that this value must be a positive integer.  By keeping the track of how many times the values of function decreases, one can obtain the no. of repetition of the loop. The other approach for analyzing while loops is to treat them. E.g.: while(m > 0) { m ← m – 1} => Time T(n) = θ(m)
  • 24.  The rules are: 1. Sequencing: Add the time of the individual statements. The maximum is the one that count. 2. Alternative Structures: Time for testing the condition + the maximum time taken by any of the alternative paths. 3. Loops: Execution time of a loop is at most the execution time of the statements of the body (including the condition tests). 4. Nested Loops: Analyze them as inside out. 5. Sub-programs: Analyze them as separate algorithms and substitute the time whenever necessary. 6. Recursive Sub-programs: Generally the running time can be expressed as a recurrence relation, with solution growth rate of execution time.
  • 25. Asymptotic Notation:  The notation, which we use to describe the asymptotic running time of an algorithm are defined in terms of functions, whose domains are the set of natural numbers and real numbers.  The natural number set is denoted as: N = {0, 1, 2, …}  The positive integer set is denoted as: N+ = {1, 2, 3, …}  Real number set is denoted as R.  Positive real number set is denoted as R+.  Non-negative real number set is denoted as R*.  Such notations are convenient for describing the worst case running time function T(n), which is usually defined only on integer input sizes.
  • 26.  The different types of notations are:  Big oh (O) notation  Small oh (o) notation  Theta (θ) notation  Omega (Ω) notation  Small omega (ω) notation 1. Big Oh (O) Notation:  The upper bound for the function is provided by Big Oh (O) notation. We can say, the running time of an algorithm is O(g(n)), if whenever input size is equal to or exceeds, some threshold „n0‟, its running time can be bounded by some positive constant „c‟ time g(n).
  • 27.  Let f(n) and g(n) are two functions from set of natural numbers to set of non-negative real numbers and f(n) is said to be O(g(n)).  That is: f(n) = O(g(n)), iff there exist a natural number „n0‟ and a positive constant c > 0, such that f(n) ≤ c(g(n)), for all n ≥ n0. n0 Input size Running time c(g(n)) f(n)
  • 28. Examples: 1. f(n) = 2n2 + 7n – 10, n = 5, c = 3. => f(n) = O(g(n)), where g(n) = n2 f(n) ≤ c(g(n)) => 2n2 + 7n – 10 ≤ 3 x n2 => 2 x 25 + 7 x 5 – 10 ≤ 3 x 25 => 50 + 35 – 10 ≤ 75 => 75 ≤ 75. So, it is in O(g(n)) = O(n2). 2. f(n) = 2n2 + 7n – 10, n = 4, c = 3, g(n) = n2 => f(n) ≤ c(g(n)) => 2n2 + 7n – 10 ≤ 3 x n2 => 2 x 16 + 7 x 4 – 10 ≤ 3 x 16 => 32 + 28 – 10 ≤ 48 => 50 ≤ 48. So, it is not in O(g(n)).
  • 29. 3. f(n) = 2n2 + 7n – 10, n = 6, c = 3, g(n) = n2 => f(n) ≤ c(g(n)) => 2n2 + 7n – 10 ≤ 3 x n2 => 2 x 36 + 7 x 6 – 10 ≤ 3 x 36 => 72 + 42 – 10 ≤ 108 => 104 ≤ 108. So, it is in O(g(n)) = O(n2). 2. Small Oh (o) Notation:  The functions in small oh (o) notation are the smaller function in Big oh (O) notation. We use small oh (o) notation to denote an upper bound that is not asymptotically tight.  This notation defined as: f(n) = o(g(n)), iff there exist any positive constant c > 0 and n0 > 0, such that f(n) < c(g(n)), for all n > n0.  The definition of O-notation and o-notation are similar.
  • 30.  The main difference is that in f(n) = O(g(n)), the bound f(n) ≤ c(g(n)) holds for some constant c > 0, but in f(n) = o(g(n)), the bound f(n) < c(g(n)) hold for all constant c > 0.  In this notation, the function f(n) becomes insignificant relative to g(n) as „n‟ approaches infinity. That is: lim 𝑛→∞ 𝑓 𝑛 𝑔 𝑛 = 0 . 3. Big omega (Ω) Notation:  The lower bound for the function is provided by Big Omega (Ω) Notation.  We can say, the running time of an algorithm Ω(g(n)), if whenever input size is equal to or exceeds some thresholds value „n0‟, its running time can be denoted by some positive constant „c‟ times g(n).
  • 31.  Let f(n) and g(n) are 2 functions from set of natural numbers to set of non-negative real numbers and f(n) said to be Ω(g(n)).  That is, f(n) = Ω(g(n)) iff there exist a natural number „n0‟ and a constant c > 0, such that f(n) ≥ c(g(n)), for all n ≥ n0. n0 Input size Running time c(g(n)) f(n)
  • 32. Example: 1. f(n) = n2 + 3n + 4, n = 1, c = 1. => f(n) = Ω(g(n)), where g(n) = n2 f(n) ≥ c(g(n)) => n2 + 3n + 4 ≥ cn2 => 1 + 3 x 1 + 4 ≥ 1 x 1 => 8 ≥ 1 => f(n) = Ω(g(n)) = Ω(n2). (proved) 4. Small omega (ω) Notation:  For a given function g(n), we denoted it as ω(g(n)), where „ω‟ notation are the larger functions of Big omega (Ω) notation.  We use a notation to denote a lower bound that is not asymptotically tight.  We define this notation as: f(n) = ω(g(n)), there exist some positive constant c > 0 and n0 > 0, such that f(n) > c(g(n)), for all n ≥ n0.  The relation f(n) = ω(g(n)) implies that lim 𝑛→∞ 𝑓 𝑛 𝑔 𝑛 = ∞ .
  • 33. 5. Theta (θ) Notation:  For a given function g(n), we denoted it as θ(g(n)), the set of functions f(n) = θ(g(n)), if there exist some constant c1, c2 and n0, such that: c1(g(n)) ≤ f(n) ≤ c2(g(n)), for all n ≥ n0.  For all values of „n‟ to the right of „n0‟, the values of f(n) lies at or above c1(g(n)) and at or below c2(g(n)).  In other words, for all n ≥ n0, the function f(n) is equal to g(n) within a constant factor.  We say that g(n) is an asymptotically tight bound for f(n).
  • 34.  The definition of θ(g(n)) requires that every member f(n) є θ(g(n)) be asymptotically, non-negative. That is: f(n) be non-negative whenever „n‟ is sufficiently large.  So f(n) = θ(g(n)), which implies that: lim 𝑛→∞ 𝑓 𝑛 𝑔 𝑛 = 𝑐𝑜𝑛𝑠𝑡𝑎𝑛𝑡 𝑐 . n0 Input size Running time c1(g(n)) f(n) c2(g(n))
  • 35. Asymptotic Notation Properties: 1. Reflexivity: i. f(n) = θ(f(n)) ii. f(n) = O(f(n)) iii. f(n) = Ω(f(n)) 2. Symmetric: i. f(n) = θ(g(n)) iff g(n) = θ(f(n)) 3. Reflexivity: i. f(n) = O(g(n)) iff g(n) = Ω(f(n)) ii. f(n) = o(g(n)) iff g(n) = ω(f(n))
  • 36. 4. Transitivity: i. f(n) = θ(g(n)) and g(n) = θ(h(n)) => f(n) = θ(h(n)) ii. f(n) = O(g(n)) and g(n) = O(h(n)) => f(n) = O(h(n)) iii. f(n) = Ω(f(n)) and g(n) = Ω(h(n)) => f(n) = Ω(h(n)) iv. f(n) = o(g(n)) and g(n) = o(h(n)) => f(n) = o(h(n)) v. f(n) = ω(f(n)) and g(n) = ω(h(n)) => f(n) = ω(h(n)) 5. Some Important Formula: i. For any two functions f(n) and g(n), we have: f(n) = θ(g(n) iff f(n) = O(g(n)) and f(n) = Ω(g(n)) ii. If f(n) = O(g(n)) and f(n) = Ω(g(n)) => f(n) = θ(g(n)) iii. If T1(n) = O(f(n)) and T2 = O(g(n)), then a. T1(n) + T2(n) = O[max (f(n), g(n))] b. T1(n) * T2(n) = o[f(n) * g(n)] iv. f(n) and g(n) are two asymptotic non-negative functions, then max(f(n), g(n)) = θ(f(n) + g(n)).
  • 37. 6. Some Important Formula: i. lgn = log2n (binary) ii. lnn = logen (natural) iii. lgkn = (lgn)k (exponential) iv. lglgn = lg(lgn) (composition) v. a = blogba vi. logc(ab) = logca + logcb vii. logban = nlogba viii.logca/b = logca/logcb ix. logc(1/a) = -logba x. logba = 1/logab xi. alogbc = clogba xii. logn < nlogn < n2 < 2n < n! < nn
  • 38. 7. Factorial Functions: i. n! = {1, if n = 0 and n(n – 1)!, if n > 0. So, n! = 1 * 2 * 3 * …. * n. ii. n! = √2πn(n/e)n(1 + θ(1/n)) [Stirling‟s Approximation] iii. n! = O(nn) iv. n! = ω(2n) v. lg(n!) = θ(nlgn) vi. lg*n = min{i > 0; lg(i) n < 1} a. lg*2 = 1 b. lg*4 = 2 c. lg*16 = 3 d. lg*65536 = 4 e. lg*(265536) = 5
  • 39. Problems: 1. Show that for any real constant ‘a’ and ‘b’, where b > 0. Ans: (n + a)b = θ(nb). Here f(n) = (n + a)b and g(n) = nb. We know that, f(n) = θ(g(n)), if lim 𝑛→∞ 𝑓 𝑛 𝑔 𝑛 = 𝑐𝑜𝑛𝑠𝑡𝑎𝑛𝑡. => (n + a)b = θ(nb), if lim 𝑛→∞ (n + a)b nb = 𝑐𝑜𝑛𝑠𝑡𝑎𝑛𝑡. => lim 𝑛→∞ *n(1 + a/n)}b nb => lim 𝑛→∞ *nb(1 + a/n)}b nb => lim 𝑛→∞ (1 + a/n)b => 1b = 1 = constant. => (n + a)b = θ(nb) (proved)
  • 40. 2. Prove that 2^(n+1) = O(2^n). Ans: Here f(n) = 2^(n+1) and g(n) = 2^n. lim_{n→∞} f(n)/g(n) = lim_{n→∞} 2^(n+1)/2^n = lim_{n→∞} (2^n · 2)/2^n = 2 = constant => f(n) = θ(g(n)), and in particular f(n) ≤ c·g(n) for c = 2 and all n ≥ 1. According to O-notation, f(n) = O(g(n)) iff there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0. => 2^(n+1) = O(2^n) (proved) 3. Prove that 2^(2n) = ω(2^n). Ans: Here f(n) = 2^(2n) and g(n) = 2^n. lim_{n→∞} f(n)/g(n) = lim_{n→∞} 2^(2n)/2^n = lim_{n→∞} 2^n = ∞ => f(n) = ω(g(n)) => 2^(2n) = ω(2^n). (Proved)
  • 41. 4. Show that 5n^2 = o(n^3). Ans: Here f(n) = 5n^2 and g(n) = n^3. lim_{n→∞} f(n)/g(n) = lim_{n→∞} 5n^2/n^3 = lim_{n→∞} 5/n = 0. So it is in small-oh notation => 5n^2 = o(n^3) (proved). 5. Show that 2n = o(n^2). Ans: Here f(n) = 2n and g(n) = n^2. lim_{n→∞} f(n)/g(n) = lim_{n→∞} 2n/n^2 = lim_{n→∞} 2/n = 0. So, f(n) = o(g(n)) => 2n = o(n^2) (proved). 6. Show that n^2/2 = ω(n). Ans: Here f(n) = n^2/2 and g(n) = n. lim_{n→∞} f(n)/g(n) = lim_{n→∞} (n^2/2)·(1/n) = lim_{n→∞} n/2 = ∞. So, f(n) = ω(g(n)) (proved).
  • 42. 7. Theorem: Let f(n) = a_0 + a_1·n + a_2·n^2 + … + a_m·n^m, with a_m > 0; prove that f(n) = θ(n^m). Ans: Here g(n) = n^m. According to θ-notation, lim_{n→∞} f(n)/g(n) = c => f(n) = θ(g(n)). lim_{n→∞} (a_0 + a_1·n + … + a_m·n^m)/n^m = lim_{n→∞} n^m(a_0/n^m + a_1/n^(m−1) + … + a_m)/n^m = a_m = constant c => f(n) = θ(n^m) (proved)
  • 43. 8. Let f(n) = 7n^3 + 5n^2 + 4n + 2. Prove f(n) = θ(n^3). Ans: Here g(n) = n^3. According to θ-notation, lim_{n→∞} f(n)/g(n) = constant => f(n) = θ(g(n)). lim_{n→∞} (7n^3 + 5n^2 + 4n + 2)/n^3 = lim_{n→∞} (7 + 5/n + 4/n^2 + 2/n^3) = 7 = constant => f(n) = θ(n^3) (proved) 9. Prove lg(n!) = θ(n·lg n). Ans: n^n ≥ n! => lg(n^n) ≥ lg(n!) => n·lg n ≥ lg(n!) => lg(n!) ≤ 1·n·lg n => lg(n!) = O(n·lg n) --- (i), where c1 = 1. Now to show that lg(n!) = Ω(n·lg n), that is, there exist constants c and n0 such that 0 ≤ c·n·lg n ≤ lg(n!):
  • 44. => lg(n^(cn)) ≤ lg(n!) => n^(cn) ≤ n!. Taking c = 1/3, we get n^(n/3) ≤ n!, which holds for all sufficiently large n => lg(n!) = Ω(n·lg n) --- (ii). From equations (i) and (ii), lg(n!) = θ(n·lg n) (proved). 10. Prove n! = o(n^n). Ans: We have to show that lim_{n→∞} n!/n^n = 0. By Stirling's Approximation, n! = √(2πn)·(n/e)^n·(1 + θ(1/n)).
  • 45. => lim_{n→∞} n!/n^n = lim_{n→∞} √(2πn)·(n/e)^n·(1 + θ(1/n))/n^n = lim_{n→∞} √(2πn)·(1 + θ(1/n))/e^n. The factor (1 + θ(1/n)) tends to 1, while e^n grows much faster than √(2πn), so the whole expression tends to 0. Hence, lim_{n→∞} n!/n^n = 0 => n! = o(n^n) (proved).
  • 46. Recurrence:  A recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs.  The running time of a recursive algorithm can be expressed by a recurrence.  To solve the recurrence relation means to obtain a function defined on the natural numbers that satisfies the recurrence.
  • 47. Recurrence Relation:  A recurrence relation (RR) for a sequence {a_n} is an equation that expresses a_n in terms of one or more previous elements a_0, a_1, …, a_(n−1) of the sequence, for all n ≥ n0.  E.g.: i) consider the recurrence relation t_n = 2t_(n−1). If c is a constant, then any function of the form c·2^n is a solution to this recurrence.  By mathematical induction, the induction step holds: if t_(n−1) = c·2^(n−1), then t_n = 2·c·2^(n−1) = c·2^n.  If we have the initial condition t_0 = 5, then the only choice for the constant c is the value 5, so as to give the correct initial value. Thus, from the basis part of the proof, we have t_n = 5·2^n.
  • 48.  It does not matter in which order the basis and induction are established; what matters is that both have been verified to be correct. Hence the solution of the recurrence is t_n = 5·2^n.  E.g.: ii) the worst-case running time T(n) of the MERGE-SORT procedure can be described by the recurrence: T(n) = θ(1), if n = 1; T(n) = 2T(n/2) + θ(n), if n > 1; whose solution is T(n) = θ(n·lg n).
  • 49. Recurrence Equation Method:  We solve the recurrence equation by the following methods. That is: i. Substitution method ii. Iterative method iii. Master method iv. Recurrence or Recursion Tree 1. Substitution Method:  In this method, first we guess a solution and use mathematical induction to find the constant and show that the solution works.
  • 50.  The substitution method can be used to establish either upper or lower bounds on a recurrence.  E.g.: T(n) = 2T(⌊n/2⌋) + n --- (1) is a recurrence relation. We guess the solution T(n) = O(n·lg n), i.e. T(n) ≤ c·n·lg n --- (2) for a suitable constant c > 0. Then we prove that this bound holds by using mathematical induction.  Assuming the bound holds for ⌊n/2⌋, from equation (1): T(n) ≤ 2c⌊n/2⌋·lg⌊n/2⌋ + n ≤ c·n·lg(n/2) + n = c·n(lg n − lg 2) + n = c·n·lg n − cn + n. So, T(n) ≤ c·n·lg n − n(c − 1) => T(n) ≤ c·n·lg n for c ≥ 1.
  • 51.  For the boundary condition: at n = 1, T(1) = 1 but c·1·lg 1 = 0, so the bound fails for n = 1.  We therefore take n = 2 and n = 3 as base cases: T(2) = 2T(1) + 2 = 4 ≤ c·2·lg 2 holds for c ≥ 2, and T(3) = 2T(1) + 3 = 5 ≤ c·3·lg 3 also holds for c ≥ 2.
  • 52.  The inductive step for n relies only on the bound at ⌊n/2⌋, and for every n ≥ 4 we have 2 ≤ ⌊n/2⌋ < n, so the base cases n = 2 and n = 3 cover all n ≥ 2.  Hence, for c ≥ 2, T(n) = O(n·lg n) is a solution of T(n) = 2T(⌊n/2⌋) + n.  Substitution Method is of two types. That is: i. Backward Substitution ii. Forward Substitution
  • 53. i. Backward Substitution Method:  Question: T(n + 1) = 2T(n). Solve this recurrence relation using backward substitution.  Ans: T(n + 1) = 2T(n); let the base value be T(0) = 1 => T(n + 1) = 2(2T(n − 1)) = 2^2·T(n − 1) (1st step) = 2^2(2T(n − 2)) = 2^3·T(n − 2) (2nd step). For the kth step: T(n + 1) = 2^(k+1)·T(n − k).  Let n − k = 0 => k = n. So T(n + 1) = 2^(n+1)·T(0) = 2^(n+1)·1 = 2^(n+1) => T(n + 1) = 2^(n+1) => T(n) = 2^n (Ans)
  • 54.  Now to prove this result of backward substitution by mathematical induction: T(n) = 2^n. For n = 1: T(1) = 2T(0) = 2 = 2^1, true. Assume it is true for n: T(n) = 2^n. We have to prove T(n + 1) = 2^(n+1): T(n + 1) = 2·T(n) = 2·2^n = 2^(n+1) (proved) ii. Forward Substitution Method:  Question: T(n + 1) = 2T(n). Solve this recurrence relation using the forward substitution method.  Ans: T(n + 1) = 2T(n) => T(1) = 2T(0) = 2 (n = 0 and T(0) = 1) => T(2) = 2T(1) = 2 × 2 = 2^2 (n = 1 and T(1) = 2) => T(3) = 2T(2) = 2^3
  • 55.  For k: T(k) = 2^k (since T(k − 1) = 2^(k−1)). Putting n = k => T(n) = 2^n => T(n + 1) = 2^(n+1).  To justify the forward substitution, we again solve by the mathematical induction process.  That is: T(n) = 2^n. For n = 1: T(1) = 2^1 = 2, true. Assume it is true for n: T(n) = 2^n.  We have to prove T(n + 1) = 2^(n+1): T(n + 1) = 2·T(n) = 2·2^n = 2^(n+1) [using T(n) = 2^n] (proved)
  • 56. Problems on Substitution Method: 1. T(n) = 2T(n − 1) + 1, initial/base value T(0) = 1, using forward substitution.  Ans: T(n) = 2T(n − 1) + 1, where T(0) = 1 => T(1) = 2T(0) + 1 = 2 × 1 + 1 = 3 = 2^2 − 1; T(2) = 2T(1) + 1 = 2 × 3 + 1 = 7 = 2^3 − 1; T(3) = 2T(2) + 1 = 2 × 7 + 1 = 15 = 2^4 − 1; … T(k) = 2^(k+1) − 1 => T(n) = 2^(n+1) − 1 => T(n + 1) = 2^(n+2) − 1 = 2T(n) + 1
  • 57.  Now, by mathematical induction, T(n) = 2^(n+1) − 1. For n = 1: T(1) = 2^2 − 1 = 3, true.  Similarly, assume it is true for n: T(n) = 2^(n+1) − 1. We have to prove: T(n + 1) = 2T(n) + 1 = 2(2^(n+1) − 1) + 1 = 2^(n+2) − 2 + 1 = 2^(n+2) − 1 (proved) 2. Consider the recurrence T(n) = 3T(⌊n/2⌋) + n, n ≥ 1, with initial condition T(0) = 0; obtain the solution of the recurrence.  Ans: For n = 1: T(1) = 3T(0) + 1 = 1; n = 2: T(2) = 3T(1) + 2 = 3 × 1 + 2 = 5; n = 2^2 = 4: T(4) = 3T(2) + 4 = 3(3 × 1 + 2) + 2^2 = 3^2·1 + 3·2 + 2^2 = 9 + 6 + 4 = 19
  • 58. n = 2^3 = 8: T(8) = 65 = 3^3·1 + 3^2·2 + 3·2^2 + 2^3; n = 2^4 = 16: T(16) = 211 = 3^4·1 + 3^3·2 + 3^2·2^2 + 3·2^3 + 2^4 (we take the values at powers of 2).  From the above computation, we guess the solution: T(2^k) = 3^(k+1) − 2^(k+1). For k = 0, T(1) = 3 − 2 = 1, which satisfies the computed value. In general: T(2^k) = 3^k·2^0 + 3^(k−1)·2^1 + … + 3·2^(k−1) + 3^0·2^k = 3^k · Σ_{i=0}^{k} (2/3)^i = 3^(k+1) − 2^(k+1).  So for n a power of 2, with k = lg n: T(n) = 3^(lg n + 1) − 2n = 3·n^(lg 3) − 2n, and the guess is verified.
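   The guess can also be checked mechanically. A small Python sketch (the function name T is illustrative) that evaluates the recurrence directly and compares it with the closed form 3^(k+1) − 2^(k+1) at powers of two:

    def T(n):
        # direct evaluation of T(n) = 3*T(n//2) + n with T(0) = 0
        return 0 if n == 0 else 3 * T(n // 2) + n

    for k in range(10):
        n = 2 ** k
        assert T(n) == 3 ** (k + 1) - 2 ** (k + 1)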
  • 59. 3. Consider the recurrence T(n) = 1, n = 1, and T(n) = 2T(⌊n/2⌋) + n, n > 1. We have to find an asymptotic bound on T(n).  For the above recurrence, we guess that it satisfies O(n·log n). Thus, we have to show that there exists a constant c such that T(m) ≤ c·m·log m for all m < n implies T(n) ≤ c·n·log n: T(n) ≤ 2c⌊n/2⌋·log⌊n/2⌋ + n ≤ c·n·log(n/2) + n = c·n·log n − c·n·log 2 + n => T(n) ≤ c·n·log n − (c·log 2 − 1)n => T(n) ≤ c·n·log n, for all c > 1/log 2.  For the base of the induction we check T(2) = 4 and T(3) = 5.
  • 60.  As for n = 1, c·n·log n yields 0, so n = 1 cannot serve as a base case. Thus, the inductive proof of T(n) ≤ c·n·log n is completed by choosing c large enough that T(2) ≤ c·2·log 2 and T(3) ≤ c·3·log 3; both relations hold for c ≥ 2. Thus, T(n) ≤ c·n·log n holds true.  So our guess T(n) = O(n·log n) is correct. 4. Consider the recurrence T(n) = 2T(⌊n/2⌋ + 16) + n. We have to show that it is asymptotically bounded by O(n·log n).  For T(n) = O(n·log n), we have to show that, for some constant c, T(n) ≤ c·n·log n: T(n) ≤ 2c(⌊n/2⌋ + 16)·log(⌊n/2⌋ + 16) + n. For sufficiently large n, this is at most c·n·log(n/2) + n + (lower-order terms) = c·n·log n − c·n·log 2 + n + (lower-order terms)
  • 61. = c·n·log n − (c − 1)n + b (where b is a constant absorbing the lower-order terms) ≤ c·n·log n (if c ≥ 1 is chosen large enough). Thus, T(n) = O(n·log n). 5. Consider the recurrence T(n) = 2T(⌊n/2⌋) + n; we have to show that it is asymptotically bounded below by Ω(n·log n).  For T(n) = Ω(n·log n), we have to show that, for some constant c, T(n) ≥ c·n·log n: T(n) ≥ 2c⌊n/2⌋·log⌊n/2⌋ + n ≥ c·n·log(n/2) + n (for n at least some n0) = c·n·log n − c·n·log 2 + n = c·n·log n − cn + n ≥ c·n·log n for c ≤ 1. Thus T(n) = Ω(n·log n).
  • 62. 6. Consider the recurrence T(n) = T(⌊n/2⌋) + 1; we have to show that it is asymptotically bounded by O(log n).  For T(n) = O(log n), we have to show that, for some constant c, T(n) ≤ c·log n: T(n) ≤ c·log(⌊n/2⌋) + 1 ≤ c·log n − c·log 2 + 1 => T(n) ≤ c·log n for c ≥ 1 (taking log base 2). Thus T(n) = O(log n). 2. Iterative Method:  The iterative method solves a recurrence relation in 3 steps. That is: I. Step 1: expand the recurrence. II. Step 2: express it as a summation (∑) of terms dependent only on n and the initial condition. III. Step 3: evaluate the summation (∑).
  • 63. Problems on Iteration Method: 1. T(n) = 0, n = 0 --- (i) (initial condition) and T(n) = c + T(n − 1), n > 0 --- (ii) => T(n) = T(n − 1) + c = T(n − 2) + c + c = T(n − 2) + 2c = T(n − 3) + 3c = T(n − 4) + 4c … For the kth term: T(n) = T(n − k) + c + c + … (k times) = T(n − k) + Σ_{i=1}^{k} c. From the base case, n − k = 0 => n = k => T(n) = T(0) + Σ_{i=1}^{n} c = 0 + cn => T(n) = cn.
  • 64. 2. T(n) = 0, n = 0 --- (i) (initial condition) and T(n) = T(n − 1) + n, n > 0 --- (ii) => T(n) = T(n − 1) + n; T(n − 1) = T(n − 2) + (n − 1) => T(n) = T(n − 2) + (n − 1) + n; T(n − 2) = T(n − 3) + (n − 2) => T(n) = T(n − 3) + (n − 2) + (n − 1) + n … For the kth term: T(n) = T(n − k) + Σ_{i=0}^{k−1} (n − i). From the base case, n − k = 0 => n = k => T(n) = T(0) + Σ_{i=0}^{n−1} (n − i) = 0 + (n + (n − 1) + … + 2 + 1)
  • 65. We know that Σ_{i=1}^{n} i = n(n + 1)/2, so T(n) = n(n + 1)/2. 3. T(n) = c, n = 1 --- (i) (initial condition) and T(n) = 2T(n/2) + c, n > 1 --- (ii) => T(n) = 2T(n/2) + c = 2(2T(n/2^2) + c) + c = 2^2·T(n/2^2) + 2c + c = 2^2(2T(n/2^3) + c) + 2c + c = 2^3·T(n/2^3) + 2^2·c + 2^1·c + 2^0·c … For the kth term: T(n) = 2^k·T(n/2^k) + c·Σ_{i=0}^{k−1} 2^i. Setting n/2^k = 1 => n = 2^k => k = log_2 n.
  • 66. Using the geometric series a + ar + ar^2 + … + ar^(k−1) = a(r^k − 1)/(r − 1): T(n) = 2^k·T(n/2^k) + c(2^k − 1). Let k = log n: T(n) = 2^(log n)·T(1) + c·2^(log n) − c = n·c + cn − c = 2cn − c = c(2n − 1). (Here we used 2^(log_2 n) = n, since a^(log_b c) = c^(log_b a).)
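   The closed form c(2n − 1) can likewise be sanity-checked by evaluating the recurrence directly; a minimal Python sketch, assuming n is a power of two:

    def T(n, c=1):
        # direct evaluation of T(n) = 2*T(n/2) + c with T(1) = c
        return c if n == 1 else 2 * T(n // 2, c) + c

    for k in range(12):
        n = 2 ** k
        assert T(n, c=3) == 3 * (2 * n - 1)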
  • 67. 4. T(n) = 2T(⌊n/3⌋) + n. Expanding the above terms, we get T(n) = n + (2/3)n + 4T(n/9) = n + (2/3)n + (4/9)n + 8T(n/27) … It is to be noticed that we reach the boundary condition when n/3^i ≤ 1, i.e. after performing ⌈log_3 n⌉ expansions. Thus, T(n) = Σ_{i=0}^{⌈log_3 n⌉} (2/3)^i·n + 2^(log_3 n)·θ(1) ≤ n·Σ_{i=0}^{∞} (2/3)^i + n^(log_3 2)·θ(1) = 3n + O(n) = O(n). So, T(n) = O(n).
  • 68. 5. T(n) = T(n − 1) + 1, T(1) = θ(1). Expanding the above terms: T(n − 1) = T(n − 2) + 1, so T(n) = (T(n − 2) + 1) + 1 = T(n − 2) + 2; T(n − 2) = T(n − 3) + 1 => T(n) = (T(n − 3) + 1) + 2 = T(n − 3) + 3. For the kth term: T(n) = T(n − k) + k. When k = n − 1 => T(n − k) = T(1) = θ(1). Thus, T(n) = θ(1) + (n − 1) = θ(n). Hence, T(n) = θ(n). 6. T(n) = T(n/2) + n, T(1) = θ(1). Expanding the above terms: T(n/2) = T(n/4) + n/2, thus T(n) = T(n/4) + n/2 + n; T(n/4) = T(n/8) + n/4 => T(n) = T(n/8) + n/4 + n/2 + n
  • 69. T(n/8) = T(n/16) + n/8 => T(n) = T(n/16) + n/8 + n/4 + n/2 + n; T(n/16) = T(n/32) + n/16 => T(n) = T(n/32) + n/16 + n/8 + n/4 + n/2 + n … For the kth term: T(n) = T(n/2^k) + Σ_{j=0}^{k−1} n/2^j. It can be observed that the recursion stops when we get to T(1). This happens when n/2^k = 1, that is n = 2^k => k = log n. Thus, T(n) = θ(1) + Σ_{j=0}^{log n − 1} n/2^j < θ(1) + Σ_{j=0}^{∞} n/2^j = θ(1) + 2n = θ(n). Hence, T(n) = θ(n).
  • 70. 7. T(n) = 3T(⌊n/4⌋) + n. Expanding the above terms, we get T(n) = n + 3(⌊n/4⌋ + 3T(⌊n/16⌋)) = n + 3⌊n/4⌋ + 9(⌊n/16⌋ + 3T(⌊n/64⌋)) = n + 3⌊n/4⌋ + 9⌊n/16⌋ + 27T(⌊n/64⌋) … The recursion stops when n/4^i ≤ 1, which implies n ≤ 4^i => i = log_4 n. Thus, T(n) = n + 3⌊n/4⌋ + … + 3^i⌊n/4^i⌋ + 3^(log_4 n)·θ(1), so T(n) ≤ (n + 3n/4 + 9n/16 + …) + θ(n^(log_4 3)) {as 3^(log_4 n) = n^(log_4 3)} ≤ n·Σ_{k=0}^{∞} (3/4)^k + O(n) = n·(1/(1 − 3/4)) + O(n) = 4n + O(n) {as log_4 3 < 1} = O(n). Hence, T(n) = O(n) (proved)
  • 71. 3. Master Method:  The master method is used for solving recurrences of the form T(n) = aT(n/b) + f(n), where a and b are constants with a ≥ 1 and b > 1.  In the above recurrence, the problem of size n is divided into a sub-problems, each of size n/b.  Each sub-problem of size n/b can be solved recursively in time T(n/b).  The cost of dividing the problem and combining the solutions is described by the function f(n).  Here n/b is interpreted as ⌊n/b⌋ or ⌈n/b⌉. T(n) can be bounded asymptotically by the following 3 cases.
  • 72. 1. CASE I: if f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = θ(n^(log_b a)). 2. CASE II: if f(n) = θ(n^(log_b a)), then T(n) = θ(n^(log_b a)·log n). 3. CASE III: if f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant 0 < c < 1 and all sufficiently large n, then T(n) = θ(f(n)). NOTES:  If the recurrence is of the following form, i.e. T(n) = aT(n/b) + c·n^d, n > n0, then the solution of the recurrence is: T(n) = θ(n^d), if a < b^d; θ(n^d·log n), if a = b^d; θ(n^(log_b a)), if a > b^d.
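   The simplified form in the NOTES lends itself to a small classifier. A Python sketch (the function name master_polynomial is ours, and it covers only the f(n) = c·n^d special case):

    import math

    def master_polynomial(a, b, d):
        # classify T(n) = a*T(n/b) + c*n^d using the three cases above
        if a < b ** d:
            return f"theta(n^{d})"
        if a == b ** d:
            return f"theta(n^{d} * log n)"
        return f"theta(n^{math.log(a, b):.3f})"   # exponent is log_b(a)

    print(master_polynomial(2, 2, 1))   # merge-sort-like: theta(n^1 * log n)
    print(master_polynomial(3, 2, 1))   # theta(n^1.585)
    print(master_polynomial(4, 2, 3))   # theta(n^3)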
  • 73.  E.g.: T(n) = 3T(n^(1/3)) + log_3 n. Let us assume m = log_3 n => n = 3^m. Thus, n^(1/3) = 3^(m/3) => T(3^m) = 3T(3^(m/3)) + m. Again, consider s(m) = T(3^m). We have s(m) = 3s(m/3) + m. Using the master method (case 2), s(m) ∈ θ(m·log m) => T(n) ∈ θ(log_3 n · log log_3 n) (Ans)  Let T(n) = 2T(n/2) + n·log n. When f(n) falls in the "gap" between the polynomial cases, the master method does not work and cannot be applied.
  • 74.  Here a = 2, b = 2, f(n) = n·log n, n^(log_b a) = n. f(n) = n·log n is asymptotically larger than n, but not polynomially larger: n·log n / n = log n, which is smaller than n^ε for every constant ε > 0. So none of the three cases applies, and we go for the iterative or substitution method.  T(n) = 2T(3n/2) + 3: here a = 2, b = 2/3, f(n) = 3.  It cannot be solved by the master method, because the method requires b > 1 (the sub-problem size must shrink), whereas here b = 2/3 < 1.  So this type of problem is solved by either the iterative or the substitution method.
  • 75. Problems on Master Method: 1. T(n) = 3T(n/2) + n^2. Here a = 3, b = 2, f(n) = n^2. Now n^(log_b a) = n^(log_2 3) = n^1.585. But f(n) = n^2, so it is case 3, with ε = 0.415 > 0: f(n) = Ω(n^(log_b a + ε)) => T(n) = θ(f(n)) => T(n) = θ(n^2). Regularity: a·f(n/b) ≤ c·f(n) => 3·f(n/2) = 3n^2/4 ≤ c·n^2 for c = 3/4 (and 0 < c < 1), so a·f(n/b) ≤ c·f(n) is satisfied. 2. T(n) = 4T(n/2) + n^2. Here a = 4, b = 2, f(n) = n^2. Now n^(log_b a) = n^(log_2 4) = n^2 (since log_2 4 = 2). So it satisfies case 2: f(n) = θ(n^(log_b a)) = θ(n^2) => T(n) = θ(n^(log_b a)·log n) => T(n) = θ(n^2·log n) (Ans)
  • 76. 3. T(n) = 2T(n/2) + n. Here a = 2, b = 2, f(n) = n. Now n^(log_b a) = n^(log_2 2) = n = f(n). So it satisfies case 2: f(n) = θ(n^(log_b a)) => T(n) = θ(n^(log_b a)·log n) = θ(n·log n). 4. T(n) = 16T(n/4) + n. Here a = 16, b = 4, f(n) = n. Now n^(log_b a) = n^(log_4 16) = n^2. It satisfies case 1: f(n) = O(n^(log_b a − ε)) => n = O(n^(2 − 1)) = O(n) (with ε = 1 > 0) => T(n) = θ(n^(log_b a)) => T(n) = θ(n^2) (Ans) 5. T(n) = 2T(n/2) + n − 1. Here a = 2, b = 2, f(n) = n − 1. Now n^(log_b a) = n^(log_2 2) = n. Since f(n) = n − 1 does not belong to O(n^(log_b a − ε)), case 1 does not apply. But f(n) ∈ θ(n), so according to case 2 we have T(n) = θ(n·log n) (Ans)
  • 77. 6. T(n) = T(3n/4) + 1 and T(1) = θ(1). We have to find its asymptotic bound. Using the master method we have a = 1, b = 4/3 and f(n) = 1. Now n^(log_b a) = n^(log_{4/3} 1) = n^0 = 1. So case 2 applies, since 1 = θ(1). So T(n) = θ(log n). 7. T(n) = 4T(n/2) + n. Here a = 4, b = 2, f(n) = n. Now n^(log_b a) = n^(log_2 4) = n^2. Since f(n) = n, it satisfies case 1: f(n) = O(n^(log_b a − ε)) => n = O(n^(2 − 1)) = O(n) (with ε = 1 > 0). Thus T(n) = θ(n^(log_b a)) => T(n) = θ(n^2) (Ans). 8. T(n) = 4T(n/2) + n^3. Here a = 4, b = 2, f(n) = n^3. Now n^(log_b a) = n^2. Since f(n) = n^3, it satisfies case 3: f(n) = Ω(n^(log_b a + ε)) = Ω(n^(2+1)) => n^3 = Ω(n^3). Thus T(n) = θ(n^3), and the regularity condition a·f(n/b) ≤ c·f(n) => 4·(n/2)^3 = 4n^3/8 = n^3/2 ≤ c·n^3 for c = 1/2 (with 0 < c < 1) is satisfied.
  • 78. 4. Recursion Tree Method:  The Recursion Tree Method is a pictorial representation of the iteration method, in the form of a tree where at each level the nodes are expanded.  It is used to keep track of the size of the remaining arguments in the recurrence and of the non-recursive costs. In a recursion tree, each node represents the cost of a single sub-problem.  We add the costs within each level of the tree to obtain a set of per-level costs, and then we add up all the per-level costs to determine the total cost of all levels of the recursion.  In general, T(n) = aT(n/b) + f(n)
  • 79. [Figure: recursion tree for T(n) = aT(n/b) + f(n). The root costs f(n); its a children each cost f(n/b), giving a per-level cost of a·f(n/b); the next level has a^2 nodes of cost f(n/b^2), giving a^2·f(n/b^2); and so on, down to the T(1) leaves.]
  • 80. Theorem: 1. Let a ≥ 1 and b > 1 be constants. Let f(n) be a non-negative function defined on exact powers of b. Define T(n), on exact powers of b, by the recurrence T(n) = θ(1) for n = 1 and T(n) = aT(n/b) + f(n) for n = b^i, i a positive integer. Then T(n) = θ(n^(log_b a)) + Σ_{i=0}^{log_b n − 1} a^i·f(n/b^i). Since a^(log_b n) = n^(log_b a), θ(n^(log_b a)) is the total cost of the leaves; the summation Σ_{i=0}^{log_b n − 1} adds up the per-level costs a^i·f(n/b^i) over all internal levels.
  • 81. Problems on Recursion Tree: 1. T(n) = T(n/3) + T(2n/3) + n. In the recursion tree, every level contributes n (n/3 + 2n/3 = n at level 1; n/9 + 2n/9 + 2n/9 + 4n/9 = n at level 2; and so on), and the longest root-to-leaf path has about log_{3/2} n levels. => T(n) = n + n + … (log_{3/2} n times) = θ(n·log n). [Figure: recursion tree with per-level cost n and height log_{3/2} n; total θ(n·log n)]
  • 82. 2. T(n) = 2T(n/2) + n^2. In the recursion tree, the per-level costs are n^2, n^2/2, n^2/4, …, a geometrically decreasing series over log_2 n levels, so the total is dominated by the root. So the above recurrence has the solution T(n) = θ(n^2). [Figure: recursion tree with level costs n^2, n^2/2, n^2/4, …; total θ(n^2)]
  • 83. 3. T(n) = 4T(n/2) + n. In the recursion tree, the per-level costs are n, 2n, 4n, …, doubling over log n levels. We have n + 2n + 4n + … (log n terms) = n(1 + 2 + 4 + … + 2^(log n − 1)) = n(2^(log n) − 1)/(2 − 1) = n^2 − n = θ(n^2) => T(n) = θ(n^2). [Figure: recursion tree with level costs n, 2n, 4n, … and height log n; total θ(n^2)]
  • 84. 4. T(n) = 3T(n/4) + n. The recursion tree has per-level costs n, 3n/4, 9n/16, … and n^(log_4 3) leaves. => T(n) = θ(n^(log_4 3)) + Σ_{i=0}^{log_4 n − 1} (3/4)^i·n => T(n) < θ(n^(log_4 3)) + Σ_{i=0}^{∞} (3/4)^i·n = θ(n^(log_4 3)) + (1/(1 − 3/4))·n = θ(n^(log_4 3)) + 4n => T(n) ∈ O(n). Here, we have linear worst-case complexity. [Figure: recursion tree with level costs n, 3n/4, 9n/16, … and n^(log_4 3) leaves of cost T(1)]
  • 85. 5. Solve the factorial with a recursion tree and its recurrence relation. Factorial: The term n factorial indicates the product of the positive integers from 1 to n inclusive and is denoted by n!. The factorial of a number n, in a recursive manner, is defined by the recurrence: fact(n) = 1, if n = 0; n * fact(n − 1), if n > 0. The algorithm for the factorial is: fact(n) { if n = 0 then return 1 else if n > 0 return n * fact(n – 1) }
  • 86. The running time satisfies the recurrence T(n) = 1 if n = 0, and T(n) = T(n − 1) + 1 if n > 0. Putting n = 1, 2, …, n: T(1) = T(0) + 1, T(2) = T(1) + 1, …, T(n) = T(n − 1) + 1. Using the recursion tree, the solution is obtained by stacking the equations T(n) = T(n − 1) + 1, T(n − 1) = T(n − 2) + 1, …, T(1) = T(0) + 1 and adding them: T(n) = T(0) + (1 + 1 + … + 1) (n times) => T(n) = T(0) + n => T(n) = n + 1.
  • 87. If T(1) = 1 is given instead, then we only unfold down to T(2) = T(1) + 1, so T(n) = T(1) + (n − 1) = 1 + n − 1 = n => T(n) = n. [Figure: a chain T(n) → T(n − 1) → T(n − 2) → … → T(0), with cost 1 at each of the n levels]
  • 88. 6. Solve the Fibonacci series with a recursion tree and its recurrence relation. Fibonacci Series: The Fibonacci series is a series of non-negative integers in which each term is the sum of the previous 2 terms, i.e.: 0, 1, 1, 2, 3, 5, 8, 13, 21, … The algorithm for the Fibonacci series is: Fibseq(n) { if n = 0 then return 0 else if n = 1 then return 1 else if n > 1 then return (Fibseq(n – 1) + Fibseq(n – 2)) }
  • 89. The Fibonacci sequence in recursive manner is defined by the recurrence relation: T(n) = 0, if n = 0; 1, if n = 1; T(n − 1) + T(n − 2), if n > 1. From the recurrence relation, put n = 2, 3, …, n. So, T(2) = T(0) + T(1), T(3) = T(1) + T(2), …, T(n) = T(n − 1) + T(n − 2)
  • 90. [Figure: recursion tree for T(n), whose two children are T(n − 1) and T(n − 2); each of these again branches into two subtrees, down to T(1) and T(0) leaves.] The tree for T(n) has height n, and the number of nodes can double at each level, so the running time of the naive recursive Fibonacci satisfies T(n) = T(n − 1) + T(n − 2) + θ(1), which grows exponentially: T(n) = O(2^n).
  • 91. Binary Search Algorithm:  Binary_search(a[], n, x) begin low ← 1, high ← n, j ← 0; while (low ≤ high AND j = 0) begin mid ← ⌊(low + high)/2⌋; if (x = a[mid]) j ← mid; else if (x < a[mid]) high ← mid – 1; else low ← mid + 1; end while return j; end
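   A runnable Python rendering of the pseudocode above (0-based internally, but returning the same 1-based position j, with 0 signalling absence):

    def binary_search(a, x):
        # a is sorted; return the 1-based position of x, or 0 if x is absent
        low, high = 0, len(a) - 1
        while low <= high:
            mid = (low + high) // 2
            if a[mid] == x:
                return mid + 1          # 1-based index, as in the pseudocode
            elif x < a[mid]:
                high = mid - 1
            else:
                low = mid + 1
        return 0

    assert binary_search([1, 3, 5, 7, 9], 7) == 4
    assert binary_search([1, 3, 5, 7, 9], 2) == 0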
  • 92. Binary Search Analysis:  For the 1st iteration the search range has n elements, for the 2nd iteration ⌊n/2⌋, for the 3rd iteration ⌊⌊n/2⌋/2⌋ = ⌊n/4⌋, …, and for the jth (last) iteration ⌊n/2^(j−1)⌋ = 1.  By the floor definition, 1 ≤ n/2^(j−1) < 2 => 2^(j−1) ≤ n < 2^j => j − 1 ≤ log_2 n < j (taking logarithms) => j = ⌊log n⌋ + 1. So, the time complexity is O(log n).
  • 93. Insertion Sort Algorithm:  Insertion_sort(a[], n) begin for (i = 2 to n) begin j ← i; temp ← a[i]; while(j > 1 AND a[j – 1] > temp) begin a[j] ← a[j – 1]; j ← j – 1; end while a[j] ← temp; end for end
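   A Python rendering of the insertion sort pseudocode above (0-based indices instead of the pseudocode's 1-based ones):

    def insertion_sort(a):
        # in-place insertion sort
        for i in range(1, len(a)):
            temp = a[i]
            j = i
            # shift larger elements one slot right until temp's position is found
            while j > 0 and a[j - 1] > temp:
                a[j] = a[j - 1]
                j -= 1
            a[j] = temp
        return a

    assert insertion_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]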
  • 94. Insertion sort Analysis:  Insertion_sort(a[], n)
Sl.no | Code | Cost | Times
1 | begin | 0 | 1
2 | for (i = 2 to n) | C1 | n
3 | begin | 0 | n − 1
4 | j ← i | C2 | n − 1
5 | temp ← a[i] | C3 | n − 1
6 | while(j > 1 AND a[j − 1] > temp) begin | C4 | Σ_{j=2}^{n} t_j
7 | a[j] ← a[j − 1] | C5 | Σ_{j=2}^{n} (t_j − 1)
8 | j ← j − 1 | C6 | Σ_{j=2}^{n} (t_j − 1)
9 | end while | 0 | Σ_{j=2}^{n} (t_j − 1)
10 | a[j] ← temp | C7 | n − 1
11 | end for | 0 | n − 1
12 | end | 0 | 1
  • 95.  Total time = ∑ cost × times = C1·n + C2(n − 1) + C3(n − 1) + C4·Σ_{j=2}^{n} t_j + C5·Σ_{j=2}^{n} (t_j − 1) + C6·Σ_{j=2}^{n} (t_j − 1) + C7(n − 1). Collecting the linear terms into new constants C8 = C1 + C2 + C3 + C7 and C9 = C2 + C3 + C7 gives T(n) = C8·n − C9 + (C4 + C5 + C6)·Σ_{j=2}^{n} t_j − (C5 + C6)(n − 1).
  • 96.  In the worst case t_j = j, so Σ_{j=2}^{n} t_j = n(n + 1)/2 − 1, and the total becomes a quadratic of the form A·n^2 + B·n + C ≈ O(n^2).  The best case occurs when the while-loop test fails immediately (t_j = 1), so the best case run time of insertion sort is: T(n) = C1·n + C2(n − 1) + C3(n − 1) + C4(n − 1) + C7(n − 1).  This running time can be expressed as an + b for constants a and b that depend on the statement costs Ci; it is thus a linear function of n.
  • 97.  The worst case run time of insertion sort: T(n) = C1·n + C2(n − 1) + C3(n − 1) + C4(n(n + 1)/2 − 1) + C5(n(n − 1)/2) + C6(n(n − 1)/2) + C7(n − 1).  The worst case running time can be expressed as an^2 + bn + c for constants a, b and c that again depend on the statement costs Ci; it is thus a quadratic function of n.
  • 98. Bubble Sort Algorithm:  This algorithm sorts the elements of an array A in ascending (increasing) order.  Step 1: Initialization (p = pass counter, E = count of the no. of exchanges, l = no. of unsorted elements)  Step 2: Loop: repeat through step 4 while (p ≤ n – 1); set E ← 0 : initializing the exchange variable  Step 3: Comparison loop: repeat for i ← 1, …, l – 1: if (A[i] > A[i + 1]) then swap A[i] and A[i + 1] : exchanging values; set E ← E + 1
  • 99.  Step 4: Finish, or reduce the size: if (E = 0), then exit else set l ← l − 1  Here, a pass refers to the search for the element with the next largest key.  Each pass places one element in its proper position. Thus, performing the above sort requires n − 1 passes.  In pass 1 the adjacent elements are compared, such as A[1] and A[2], and exchanged if out of order (so that A[2] comes first if A[2] < A[1]). After that, A[2] and A[3] are compared.
  • 100.  The process continues until the greatest element is placed at the last position. Thus A[n] contains the largest element. In this pass (n − 1) comparisons are required.  In pass 2, the second largest element is placed at A[n − 1] by performing (n − 2) comparisons. After (n − 1) passes, we get the final sorted list: A1 ≤ A2 ≤ A3 ≤ … ≤ An−1 ≤ An.  The whole list of n elements of array A is sorted after (n − 1) passes. So the time complexity of Bubble sort is: (n − 1) + (n − 2) + … + 1 = n(n − 1)/2 comparisons => O(n^2). A runnable sketch is given below.
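   A Python sketch of the bubble sort just described, including the early-exit test of Step 4 (variable names are ours):

    def bubble_sort(a):
        # bubble sort with an exchange counter: stop early when a pass makes no swaps
        l = len(a)
        for _ in range(l - 1):          # at most n - 1 passes
            exchanges = 0
            for i in range(l - 1):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    exchanges += 1
            if exchanges == 0:          # list already sorted
                break
            l -= 1                      # largest element is in place; shrink the range
        return a

    assert bubble_sort([7, 5, 3, 2, 9]) == [2, 3, 5, 7, 9]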
  • 101. Analysis Design Technique:  Given a problem, the algorithm is largely influenced by the choice of data structure. With a chosen data structure one can develop a number of different algorithms for a given problem.  The first intuitive algorithm may not be the best one as far as memory and time efficiency are concerned. There are some general techniques for the development of algorithms. Those are:  Divide and Conquer  Greedy Strategy  Dynamic Programming  Back Tracking  Branch and Bound
  • 102. Divide and Conquer:  The Divide and Conquer method includes 3 steps. These are: 1. Step 1: Divide the problem into a number of sub-problems. 2. Step 2: Conquer the sub-problems by solving them recursively; only if the problem sizes are small enough are they solved in a straightforward manner, otherwise step 1 is executed again. 3. Step 3: Combine the solutions obtained for the sub-problems to create the final solution to the original problem.  Examples: Merge Sort, Quick Sort and Heap Sort a. Merge Sort: 1. Step 1: The whole list is divided into two sub-lists of n/2 elements each for sorting.
  • 103. 2. Step 2: Sort the sub-lists recursively using merge sort. 3. Step 3: Now merge the two sorted sub-lists to generate the sorted answer.  For accomplishing the whole task, we use two procedures, "Merge_Sort" and "Merge". The procedure "Merge" is used for combining the sub-lists.  The analysis of Merge Sort is done by the recursion tree method. Merge Sort Algorithm: Merge_Sort(A, p, r) i. if p < r ii. then q ← ⌊(p + r)/2⌋ iii. Merge_Sort(A, p, q) iv. Merge_Sort(A, q + 1, r) v. Merge(A, p, q, r)
  • 104. Algorithm for Merge: Merge (A, p, q, r) i. n1 ← q – p + 1 ii. n2 ← r – q iii. create arrays L[1 … n1 + 1] and R[1 … n2 + 1] iv. for i ← 1 to n1 v. do L[i] ← A[p + i – 1] vi. for j ← 1 to n2 vii. do R[j] ← A[ q + j] viii.L[n1 + 1] ← ∞ ix. R[n2 + 1] ← ∞ x. i ← 1 xi. j ← 1 xii. for k ← p to r
  • 105. xiii. do if L[i] ≤ R[j] xiv. then A[k] ← L[i] xv. i ← i + 1 xvi. else A[k] ← R[j] xvii. j ← j + 1 Analysis for Merge Sort:  The time complexity (total time) of Merge Sort is T(n) = θ(n·log n), with total cost c·n·log n. The divide step computes the middle of the sub-array, which takes constant time, i.e. θ(1).  We recursively solve 2 sub-problems, each of size n/2, which contributes 2T(n/2) to the running time. The Merge procedure on an n-element sub-array takes time θ(n).  So the total time is T(n) = 2T(n/2) + θ(n), and the recurrence relation for Merge Sort is T(n) = 2T(n/2) + cn. (A runnable sketch follows.)
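   A compact runnable Python sketch of merge sort; unlike the pseudocode above it avoids the ∞ sentinels and returns a new list rather than sorting in place:

    def merge_sort(a):
        # top-down merge sort; sentinel-free merge of the two sorted halves
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    assert merge_sort([2, 4, 5, 7, 1, 2, 3, 6]) == [1, 2, 2, 3, 4, 5, 6, 7]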
  • 106. Solve Merge Sort with a recursion tree and its recurrence relation:  The recurrence relation of Merge Sort is T(n) = 2T(n/2) + cn. [Figure: recursion tree with root cost cn, two children of cost cn/2, four grandchildren of cost cn/4, and so on; every level sums to cn, and the tree has log n levels.]
  • 107.  We keep dividing until sub-problems of a single element are reached; the height of the tree is log n. Total Cost = (cost per level) × (height of tree). Here, the cost per level is cn (the cost of the f(n) part, f(n) = cn) and the height of the tree is log n => Total Cost = cn·log n and Total Time = θ(n·log n) => T(n) = θ(n·log n). Example: Sort the elements 2, 4, 5, 7, 1, 2, 3, 6 using merge sort. Ans: 2, 4, 5, 7, 1, 2, 3, 6. Now the array is: p ← 1 2 3 4 (q) 5 6 7 8 → r, A = [2 4 5 7 1 2 3 6]
  • 108. p = 1, r = 8, q = └(1 + 8)/2┘= └4.5┘= 4 Merge_sort(A, p, q) = Merge_sort(A, 1, 4) Merge_sort(A, q + 1, r) = Merge_sort(A, 5, 8) Merge(A, p, q, r) n1 = q – p + 1 = 4, n2 = r – q = 8 – 4 = 4 Create Arrays L[1 … n1 + 1] and R[1 … n2 + 1] => L[1 to 5] and R[1 to 5] for i = 1to n1 = 1 to 4 for j = 1to n2 = 1 to 4 L[i] = A[p + i – 1] R[j] = A[q + j] L[1] = A[1] = 2 R[1] = A[5] = 1 L[2] = A[2] = 4 R[2] = A[6] = 2 L[3] = A[3] = 5 R[3] = A[7] = 3 1 2 3 4 5 L 2 4 5 7 ∞ 1 2 3 4 5 R 1 2 3 6 ∞
  • 109. L[4] = A[4] = 7 R[4] = A[8] = 6 L[5] = ∞ R[5] = ∞ Now, k i j  Now i ← 1, j ← 1. For k = p to r = 1 to 8. If L[i] ≤ R[j], then A[k] ← L[i] and i = i + 1 else A[k] = R[j] and j = j + 1.  Here, L[i] = L[1] = 2 and R[j] = R[1] = 1 => L[i] ≤ R[j] (false).  So, A[k] = R[j] => A[1] = R[1] = 1 and j = j + 1 = 1 + 1 = 2. 1 2 3 4 5 L 2 4 5 7 ∞ 1 2 3 4 5 R 1 2 3 6 ∞ 1 2 3 4 5 6 7 8 A 2 4 5 7 1 2 3 6
  • 110. Now, k i j  Now i = 1, j =2, k = p to r = 2 to 8.  L[i] ≤ R[j] => L[1] ≤ R[2] => 2 ≤ 2 (true).  So, A[k] = L[i] => A[2] = L[1] = 2 and i = i + 1= 1 + 1 = 2. Now, k 1 2 3 4 5 L 2 4 5 7 ∞ 1 2 3 4 5 R 1 2 3 6 ∞ 1 2 3 4 5 6 7 8 A 1 4 5 7 1 2 3 6 1 2 3 4 5 6 7 8 A 1 2 5 7 1 2 3 6
  • 111. i j  Now i = 2, j =2, k = p to r = 3 to 8.  L[i] ≤ R[j] => L[2] ≤ R[2] => 4 ≤ 2 (false).  So, A[k] = R[j] => A[3] = R[2] = 2 and j = j + 1= 2 + 1 = 3. Now, k i j 1 2 3 4 5 L 2 4 5 7 ∞ 1 2 3 4 5 R 1 2 3 6 ∞ 1 2 3 4 5 6 7 8 A 1 2 2 7 1 2 3 6 1 2 3 4 5 L 2 4 5 7 ∞ 1 2 3 4 5 R 1 2 3 6 ∞
  • 112.  Now i = 2, j =3, k = p to r = 4 to 8.  L[i] ≤ R[j] => L[2] ≤ R[3] => 4 ≤ 3 (false).  So, A[k] = R[j] => A[4] = R[3] = 3 and j = j + 1= 3 + 1 = 4. Now, k i j  Now i = 2, j =4, k = p to r = 5 to 8.  L[i] ≤ R[j] => L[2] ≤ R[4] => 4 ≤ 6 (true).  So, A[k] = L[i] => A[5] = L[2]= 4 and i = i + 1= 3. 1 2 3 4 5 6 7 8 A 1 2 2 3 1 2 3 6 1 2 3 4 5 L 2 4 5 7 ∞ 1 2 3 4 5 R 1 2 3 6 ∞
  • 113. Now, k i j  Now i = 3, j =4, k = p to r = 6 to 8.  L[i] ≤ R[j] => L[3] ≤ R[4] => 5 ≤ 6 (true).  So, A[k] = L[i] => A[6] = L[3]= 5 and i = i + 1= 4. Now, k 1 2 3 4 5 L 2 4 5 7 ∞ 1 2 3 4 5 R 1 2 3 6 ∞ 1 2 3 4 5 6 7 8 A 1 2 2 3 4 2 3 6 1 2 3 4 5 6 7 8 A 1 2 2 3 4 5 3 6
  • 114. i j  Now i = 4, j =4, k = p to r = 7 to 8.  L[i] ≤ R[j] => L[4] ≤ R[4] => 7 ≤ 6 (false).  So, A[k] = R[j] => A[7] = R[4]= 6 and j = j + 1= 5. Now, k i j 1 2 3 4 5 L 2 4 5 7 ∞ 1 2 3 4 5 R 1 2 3 6 ∞ 1 2 3 4 5 6 7 8 A 1 2 2 3 4 5 6 6 1 2 3 4 5 L 2 4 5 7 ∞ 1 2 3 4 5 R 1 2 3 6 ∞
  • 115.  Now i = 4, j =5, k = p to r = 8 to 8.  L[i] ≤ R[j] => L[4] ≤ R[5] => 7 ≤ ∞ (true).  So, A[k] = L[i] => A[8] = L[4]= 7 and i = i + 1= 5. Now, i j  Now the elements are sorted using merge sort.  That is: 1, 2, 2, 3, 4, 5, 6, 7. 1 2 3 4 5 6 7 8 A 1 2 2 3 4 5 6 7 1 2 3 4 5 L 2 4 5 7 ∞ 1 2 3 4 5 R 1 2 3 6 ∞
  • 116.  The representation of the tree structure is: [Figure: merge tree for the example — the unsorted halves (5 2 4 7) and (1 3 2 6) are split down to single elements and merged pairwise back up to the sorted list 1 2 2 3 4 5 6 7.]  The general recurrence relation is: T(n) = θ(1), if n ≤ c; T(n) = aT(n/b) + D(n) + C(n), otherwise, where D(n) is the divide cost and C(n) the combine cost.
  • 117. b. Quick Sort:  The Quick Sort technique is based on the divide and conquer design technique, and works recursively on ever-shorter sub-lists.  First we select a "pivot element" from the list; then we partition the list into elements that are less than the pivot element and elements that are greater than the pivot. Here the problem of sorting the given list is reduced to the problem of sorting two sub-lists.  The reduction step of quick sort finds the final position of a particular element, which can be accomplished by scanning the list from the right end towards the left and checking the elements.  The comparison of elements with the first element stops when we obtain an element smaller than the first element; in this case, the two elements are exchanged.
  • 118.  The whole procedure continues until all the elements of the list are arranged in such a way that on the left side of the pivot element the elements are smaller, and on the right side the elements are greater than the pivot. Thus, the list is sub-divided into two lists.  This sorting technique is considered in-place, since it uses no other array storage. Given an array Q[p … r], on the basis of Divide and Conquer, quick sort works as follows: i. Divide Q[p … r] into Q[p … q] and Q[q + 1 … r], with q determined as part of the division. ii. In the conquer step, Q[p … q] and Q[q + 1 … r] are then sorted recursively. iii. In the combine step, nothing remains to be done, as this leaves the sorted array in place.
  • 119. Procedure for Quick Sort: 1. While pivot > a[down], do down++. 2. While pivot < a[up], do up--. 3. If the position of down < position of up, then swap the values at up and down, and repeat conditions 1 and 2. 4. If position of down < position of up is false, then swap the pivot value with the value at up. 5. After the first pivot element is placed, the array is divided into 2 parts, and each part is again sorted in the same way. Example: 7 5 3 2 9 8 10 -5 4 1, with the pivot at the left end, down scanning from the left and up scanning from the right.
  • 120. Quick Sort Algorithm: Quick_Sort(A, p, r) i. if p < r ii. q ← partition(A, p, r) iii. Quick_Sort(A, p, q – 1) iv. Quick_Sort(A, q + 1, r) Algorithm for Partition of Quick sort: Partition(A, p, r) i. x ← A[r] ii. i ← p – 1 iii. for j ← p to r – 1 iv. do if A[j] ≤ x v. then i ← i + 1 vi. exchange A[i] ↔ A[j] vii. exchange A[i + 1] ↔ A[r] viii. return i + 1
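   A Python rendering of the Quick_Sort/Partition pseudocode above (the same last-element-as-pivot partition scheme):

    def partition(a, p, r):
        # Lomuto partition: a[r] is the pivot; returns the pivot's final index
        x = a[r]
        i = p - 1
        for j in range(p, r):
            if a[j] <= x:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[r] = a[r], a[i + 1]
        return i + 1

    def quick_sort(a, p=0, r=None):
        if r is None:
            r = len(a) - 1
        if p < r:
            q = partition(a, p, r)
            quick_sort(a, p, q - 1)
            quick_sort(a, q + 1, r)
        return a

    assert quick_sort([7, 5, 3, 2, 9, 8, 10, -5, 4, 1]) == [-5, 1, 2, 3, 4, 5, 7, 8, 9, 10]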
  • 121. Analysis for Quick Sort:  The running time of Quick sort depends on whether the partition is balanced or unbalanced, and this depends on which elements are used for partitioning. I. Best Case Analysis:  Partitioning produces two sub-problems, each of size no more than n/2. The recurrence for the running time is T(n) = 2T(n/2) + θ(n) => T(n) = θ(n·log n) [applying the master method]. II. Worst Case Analysis: It occurs when the partitioning routine (algorithm or pseudo-code) produces one sub-problem with n − 1 elements and one with zero elements.
  • 122. Let us assume that this unbalanced partitioning arises in each recursive call. The partitioning costs θ(n) time; since T(1) = θ(1), the recurrence for the running time is: T(n) = T(n − 1) + T(0) + θ(n) => T(n) = T(n − 1) + θ(n) => T(n) = θ(n^2) [using the substitution method]. III. Average Case Analysis: In the average case analysis, the array is partitioned around a randomly chosen element. In this case, at each level some of the partitions are well balanced while some are fairly unbalanced. Let us assume the partition splits the array in the ratio 9 : 1; the recurrence so obtained is: T(n) = T(9n/10) + T(n/10) + n => T(n) = θ(n·log n). Each level costs about n, and since (9/10)^i·n = 1 gives i = log_{10/9} n, there are about log_{10/9} n levels.
  • 123. [Figure: recursion tree for T(n) = T(9n/10) + T(n/10) + n; each level sums to at most n, and the deepest path has depth log_{10/9} n, giving θ(n·log n).]
  • 124. c. Heap Sort:  Heap sort is accomplished by using 2 other functions, that is: i. Build-Max-Heap: for building a max-heap from an unordered array ii. Max-Heapify: for fixing (maintaining) the heap property  The heap is created when we input an array of n elements, where n represents the length of the array A, i.e.: n = length[A]. Algorithm to build a heap: Build_Max_Heap(A) i. heapsize[A] ← length[A] ii. for i ← ⌊length[A]/2⌋ down to 1 iii. do Max_Heapify(A, i)
  • 125. Algorithm for Max_Heapify: Max_Heapify(A, i) i. l ← left(i) ii. r ← right(i) iii. if l ≤ heapsize[A] and A[l] > A[i] iv. then largest ← l v. else largest ← i vi. if r ≤ heapsize[A] and A[r] > A[largest] vii. then largest ← r viii. if largest ≠ i ix. then exchange A[i] ↔ A[largest] x. Max_Heapify(A, largest)
  • 126. Algorithm for Heapsort: Heapsort(A) i. Build_Max_Heap(A) ii. for i ← length[A] down to 2 iii. do exchange A[1] ↔ A[i] iv. heapsize[A] ← heapsize[A] – 1 v. Max_Heapify(A, 1) Analysis for Heapsort: I. Running Time of Max_Heapify: The running time of Max_Heapify on a sub-tree of size n rooted at a given node i is θ(1) time to fix up the relationships among the elements A[i], A[left(i)] and A[right(i)], plus the time to run Max_Heapify on a sub-tree rooted at one of the children of node i.
  • 127. The children's sub-trees each have size at most 2n/3; the worst case occurs when the last row of the tree is exactly half full. If n is the heapsize, then T(n) ≤ T(2n/3) + θ(1), which implies T(n) = O(log n) [using the master method]. The time required by Max_Heapify, when called on a node of height h, is O(h). II. Running time of Build_Max_Heap: Each call to Max_Heapify costs O(log n), and Max_Heapify is invoked O(n) times; this gives the simple bound O(n·log n), and a tighter analysis shows Build_Max_Heap takes time O(n). III. Running time for Heapsort: The heap sort procedure takes time O(n·log n), since the call to Build_Max_Heap takes time O(n) and each of the n − 1 calls to Max_Heapify takes O(log n). So the total running time of Heapsort is O(n·log n).
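   A runnable Python sketch of heapsort based on the procedures above (0-based indices, so left(i) = 2i + 1 and right(i) = 2i + 2):

    def max_heapify(a, i, heapsize):
        # restore the max-heap property at index i, assuming both subtrees are heaps
        l, r = 2 * i + 1, 2 * i + 2
        largest = l if l < heapsize and a[l] > a[i] else i
        if r < heapsize and a[r] > a[largest]:
            largest = r
        if largest != i:
            a[i], a[largest] = a[largest], a[i]
            max_heapify(a, largest, heapsize)

    def heap_sort(a):
        n = len(a)
        for i in range(n // 2 - 1, -1, -1):   # Build_Max_Heap
            max_heapify(a, i, n)
        for end in range(n - 1, 0, -1):       # repeatedly move the max to the back
            a[0], a[end] = a[end], a[0]
            max_heapify(a, 0, end)
        return a

    assert heap_sort([4, 1, 3, 2, 16, 9, 10, 14, 8, 7]) == [1, 2, 3, 4, 7, 8, 9, 10, 14, 16]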
  • 128. Lower Bound for Sorting:  Before going to the lower bound for sorting, some important concepts are explained below:  Internal sorting: refers to a sorting operation performed over a list which is stored in primary memory.  External sorting: when the list is stored in a file accommodated in secondary memory, the sorting technique is referred to as external sorting.  In-place: a sorting algorithm is in-place if only a constant number of data elements of the input array are ever stored outside the array, and hence it is possible to sort a large list without additional working storage.
  • 129.  Stable: a sorting algorithm is stable if any two elements that are equal remain in the same relative position after the sorting is performed. (In-place and stable are two separate properties of a sorting algorithm.)  In a comparison sort, we use only comparisons between elements to gain order information about an input sequence <a1, a2, …, an>; that is, given two elements ai and aj, we perform one of the tests ai < aj, ai ≤ aj, ai = aj, ai ≥ aj, or ai > aj to determine their relative order.  Here the test ai = aj is useless, and the comparisons ai ≤ aj, ai ≥ aj, ai < aj, ai > aj are all equivalent in that they yield identical information about the relative order of ai and aj. We therefore assume that all comparisons have the form ai ≤ aj.
  • 130.  Here we present an abstract model to represent comparison-based sorts, referred to as the "Decision Tree Model". Comparison Based Sorts:
Algorithm | Worst Case | Average Case | Best Case | In-place
Insertion Sort | O(n^2) | O(n^2) | O(n) | √
Merge Sort | O(n·log n) | O(n·log n) | O(n·log n) | ×
Heap Sort | O(n·log n) | O(n·log n) | O(n·log n) | √
Quick Sort | O(n^2) | O(n·log n) | O(n·log n) | √
  • 131. The Decision Tree Model:  A decision tree can represent the behavior of any comparison-based algorithm on inputs of a given size n.  The decision tree is a full binary tree. Each internal node in the decision tree corresponds to one of the comparisons in the algorithm. The sorting algorithm starts at the root node and performs the first comparison: i. If ai ≤ aj, take the left branch ii. If ai > aj, take the right branch  The whole process is repeated until a leaf is encountered. Each leaf represents one complete ordering of the input.  It should be noted that a sorting algorithm is proved correct only when each of the n! permutations of the n elements appears as one of the leaves of the decision tree, and each of these leaves is reachable from the root node.
  • 132. Example:  The decision tree for insertion sort operating on 3 elements. An internal node annotated by i : j indicates a comparison between ai and aj.  A leaf annotated by the permutation <π(1), π(2), … π(n)> indicates the ordering aπ(1) ≤ aπ(2) ≤ … ≤ aπ(n). The shaded path indicates the decisions made when sorting the input sequence (a1 = 6, a2 = 8, a3 = 5).  The permutation <3, 1, 2> at the leaf indicates that the sorted ordering is a3 = 5 ≤ a1 = 6 ≤ a2 = 8. There are 3! = 6 possible permutations of the input elements, so the decision tree must have at least 6 leaves.
  • 133. [Figure: decision tree for insertion sort on 3 elements. The root compares 1 : 2; its subtrees compare 2 : 3 and 1 : 3; the six leaves are the permutations <1, 2, 3>, <2, 1, 3>, <1, 3, 2>, <3, 1, 2>, <2, 3, 1>, <3, 2, 1>, with left branches labelled ≤ and right branches labelled >.]
  • 134. A Lower Bound for the Worst Case:  The length of the longest path from the root of a decision tree to any of its reachable leaves represents the worst-case number of comparisons that the corresponding sorting algorithm performs.  Consequently, the worst-case number of comparisons for a given comparison sort algorithm equals the height of its decision tree.  A lower bound on the heights of all decision trees in which each permutation appears as a reachable leaf is therefore a lower bound on the running time of any comparison sort algorithm.
  • 135.  Any comparison sort algorithm requires Ω(n·log n) comparisons in the worst case, for the following reasons: i. There must be n! permutation leaves, one corresponding to each possible ordering of the n elements. ii. A binary tree of height h has at most 2^h leaves, so 2^h ≥ n! => h ≥ lg(n!) = Ω(n·lg n); the length (number of edges) of the longest path in the decision tree (its height) equals the worst-case number of comparisons of the algorithm (a lower bound on time).  Heap sort and Merge sort are asymptotically optimal comparison sorts:  their O(n·log n) upper bounds on running time match the Ω(n·log n) worst-case lower bound.
  • 136. Priority Queue:  A Priority Queue is defined as a set P of elements where each element is associated with a key.  Two variants of priority queue occur: the maximum priority queue and the minimum priority queue.  The main operations supported by the maximum priority queue are as follows: i. Insert(P, x) ii. Maximum(P) iii. Extract_Maximum(P) iv. Increase_Key(P, x, k)
  • 137. Algorithm for Priority Queue: 1. Algorithm for insert(p, x) in maximum priority queue:  This operation inserts the element „x‟ into the set „p‟. That is: p ← p U {x}. The algorithm for this is:  Procedure_Insert(H, k) The above procedure inserts an element with key value „k‟ in a given maximum heap. The heap size is incremented by 1 after the insertion of the element with key „k‟.  Step 1: Incrementing the array size, assuming size does not exceed the maximum array size. Set heapsize[H] ← heapsize[H] + 1
  • 138.  Step 2: Initialization set i ← heapsize[H]  Step 3: Loop to obtain the proper position while (i > 1 and H[parent(i)] < k) set H[i] ← H[parent(i)] set i ← parent(i)  Step 4: Insertion set H[i] ← k  Step 5: return to the point of call return Analysis for insertion:  It is noticeable that while inserting an element the process follows a path from a leaf towards the root of the tree. Recall that the height of the tree is O(log n), which yields a total running time of O(log n).
  • 139. 2. Algorithm for function Maximum(p):  This operation returns the element having largest key value from the set „p‟. The algorithm for this is:  Function_Maximum(H) The above function returns the element having largest key value from the given heap.  Step 1: Return the value at the point of call. return(H[1]) Analysis for Function Maximum:  The running time for the above algorithm is θ(1), as always the largest key value is stored at the root only. 3. Algorithm for Extract_Maximum(p):  This operation removes and returns the element having largest value from the set „p‟. The algorithm for this is:
  • 140.  Function_Heap_Extract_Maximum(H) The above function removes and returns the element having largest key value from the given heap. The heap size is decremented by 1, after removing the element. The function call “Heapify” for fixing the new heap.  Step 1: Is empty? if(heapsize[H] < 1) then message “underflow heap” else goto step 2  Step 2: Initialization and adjusting the values set max ← H[1] set H[1] ← H[heapsize[H]] set heapsize[H] ← heapsize[H] - 1
  • 141.  Step 3: Fixing new heap call to Heapify(H, 1)  Step 4: return value at the point of call return(max) Analysis for Extract_Maximum:  It can be observed that „Heapify‟ takes O(logn) time and in the above algorithm it is called only at once.  The rest of the instructions are performed only once, which takes θ(1) time. Thus, extracting the element having maximum key value from the heap is performed in O(logn) time. 4. Algorithm for Increase_Key(p):  This operation increases the value of the element x‟s key with the new value „k‟, which is assumed to be at least as large as element x‟s current value. The algorithm for this is:
  • 142.  Procedure_Heap_Increase_Key(H, i, k) The above procedure increases the value of element i's key to the new value k, which is assumed to be at least as large as element i's current key value.  Step 1: Is it smaller? if (k < H[i]) then message: "new key k is smaller than the current key"; return else goto step 2  Step 2: Adjusting the new key k set H[i] ← k while (i > 1 and H[parent(i)] < H[i]) exchange H[i] ↔ H[parent(i)] set i ← parent(i)
  • 143.  Step 3: return at the point of call return Analysis for Increase_Key:  It can be seen that the above algorithm runs in O(logn) time.  The element with key is adjusted in O(logn) as the path is traced from the node to the root.
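   A minimal Python sketch of a maximum priority queue combining the operations above (Insert, Maximum, Extract_Maximum); Increase_Key would reuse the same float-up loop as insert. The class and method names are ours:

    class MaxPriorityQueue:
        # array-backed max-heap, 0-based: parent of i is (i - 1) // 2
        def __init__(self):
            self.h = []

        def maximum(self):
            return self.h[0]                       # theta(1): the max is at the root

        def insert(self, key):                     # O(log n): float the new key up
            self.h.append(key)
            i = len(self.h) - 1
            while i > 0 and self.h[(i - 1) // 2] < self.h[i]:
                self.h[i], self.h[(i - 1) // 2] = self.h[(i - 1) // 2], self.h[i]
                i = (i - 1) // 2

        def extract_maximum(self):                 # O(log n): remove root, re-heapify
            if not self.h:
                raise IndexError("underflow heap")
            top = self.h[0]
            last = self.h.pop()
            if self.h:
                self.h[0] = last
                self._heapify(0)
            return top

        def _heapify(self, i):
            n = len(self.h)
            while True:
                l, r, largest = 2 * i + 1, 2 * i + 2, i
                if l < n and self.h[l] > self.h[largest]:
                    largest = l
                if r < n and self.h[r] > self.h[largest]:
                    largest = r
                if largest == i:
                    return
                self.h[i], self.h[largest] = self.h[largest], self.h[i]
                i = largest

    q = MaxPriorityQueue()
    for k in (3, 9, 1, 7):
        q.insert(k)
    assert q.extract_maximum() == 9 and q.maximum() == 7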
  • 144. Counting Sort:  Counting Sort assumes that each of the n input elements is an integer in the range 0 to k; for k = O(n), the sort runs in θ(n) time.  The basic idea of counting sort is to determine, for each input element x, the number of elements less than x. This is used to place element x directly into its position in the output array.  The algorithm for counting sort is: Counting_Sort(A, B, k) i. for i ← 0 to k ii. do C[i] ← 0 iii. for j ← 1 to length[A] iv. do C[A[j]] ← C[A[j]] + 1
  • 145. v. C[i] now contains the number of elements equal to i. vi. for i ← 1 to k vii. do C[i] ← C[i] + C[i – 1] viii. C[i] now contains the number of elements less than or equal to i. ix. for j ← length[A] down to 1 x. do B[C[A[j]]] ← A[j] xi. C[A[j]] ← C[A[j]] – 1  Here, in the code for counting sort, A[1…n] is the input array, length[A] = n, array B[1…n] holds the sorted output, array C[0…k] is temporary working storage, and k bounds the element values (each value ≤ k).  Counting sort is a stable sort and is used in Radix sort.
  • 146.  Analysis of Counting sort:  It can be observed that the algorithm has two for loops of size k and two for loops of size length[A] = n. Thus the running time of counting sort is O(n + k).  Since it is a non-comparison sort, it can beat the Ω(n·log n) lower bound; for k = O(n) it runs in linear time.  Example: (i) A = [2 5 3 0 2 3 0 3] (indices 1–8); after the counting loop, C = [2 0 2 3 0 1] (indices 0–5); the output array B (indices 1–8) is then filled step by step.
  • 147. (ii) After the prefix-sum loop, C = [2 2 4 7 7 8] (C[i] = number of elements ≤ i). (iii) Scanning A from right to left: A[8] = 3 goes to B[C[3]] = B[7] and C[3] becomes 6; A[7] = 0 goes to B[C[0]] = B[2] and C[0] becomes 1; and so on for the remaining elements. (iv) The final sorted output is B = [0 0 2 2 3 3 3 5].
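   A runnable Python version of Counting_Sort (0-based arrays; the right-to-left scan preserves stability exactly as in the pseudocode):

    def counting_sort(A, k):
        # stable counting sort of A, whose values lie in 0..k
        C = [0] * (k + 1)
        for x in A:                 # count occurrences
            C[x] += 1
        for i in range(1, k + 1):   # prefix sums: C[i] = number of elements <= i
            C[i] += C[i - 1]
        B = [0] * len(A)
        for x in reversed(A):       # scan right-to-left to keep the sort stable
            C[x] -= 1
            B[C[x]] = x
        return B

    assert counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5) == [0, 0, 2, 2, 3, 3, 3, 5]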
  • 148. Radix Sort:  In the Radix Sort technique, the list consists of n integers and each integer has d digits (in any base).  We sort repeatedly, starting at the lowest-order digit and finishing with the highest-order digit.  It is important that each pass is stable: if the numbers are already sorted with respect to the low-order digits, then after sorting with respect to a higher-order digit they remain sorted with respect to their lower-order digits.  The algorithm for Radix Sort: Radix_Sort(A, d) i. for i ← 1 to d ii. do use a stable sort to sort array A on digit i.
  • 149.  Analysis of Radix sort:  Let the running time of the stable (internal) sort be Ts; we know counting sort runs in O(k + n) time, so Ts(n) = O(k + n). For d digits, the total is O(d·Ts(n)) = O(d(k + n)).  If d = O(1) and k = O(n), then the total is O(n). If d = O(log n) and k = 2, then the total is O(d(k + n)) = O(n·log n).  Example: the elements for radix sort are 725, 831, 711, 215, 055, 783, 222, 444, 303, 125, 110, 324. After the pass on d = 1 (units digit): 110, 831, 711, 222, 783, 303, 444, 324, 725, 215, 055, 125. After the pass on d = 2 (tens digit): 303, 110, 711, 215, 222, 324, 725, 125, 831, 444, 055, 783. After the pass on d = 3 (hundreds digit): 055, 110, 125, 215, 222, 303, 324, 444, 711, 725, 783, 831.
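   A Python sketch of LSD radix sort that uses a stable counting sort on each decimal digit, reproducing the passes of the example:

    def radix_sort(A, d):
        # sort non-negative integers with at most d decimal digits
        for exp in (10 ** i for i in range(d)):
            C = [0] * 10
            for x in A:
                C[(x // exp) % 10] += 1
            for i in range(1, 10):
                C[i] += C[i - 1]
            B = [0] * len(A)
            for x in reversed(A):            # right-to-left keeps each pass stable
                digit = (x // exp) % 10
                C[digit] -= 1
                B[C[digit]] = x
            A = B
        return A

    data = [725, 831, 711, 215, 55, 783, 222, 444, 303, 125, 110, 324]
    assert radix_sort(data, 3) == sorted(data)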
  • 150. Bucket Sort:  In Bucket Sort, the assumption is that the input elements are distributed uniformly over some known range, for example [0, 1).  The basic idea of this sorting technique is to divide the interval [0, 1) into n equal-sized sub-intervals, or buckets, and then distribute the n input numbers into the buckets so created. The algorithm for this is: Bucket_Sort(A) i. n ← length[A] ii. for i ← 1 to n iii. do insert A[i] into list B[⌊n·A[i]⌋] iv. for i ← 0 to n – 1 v. do sort list B[i] with insertion sort vi. concatenate the lists B[0], B[1], …, B[n – 1] together in order.
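   A Python sketch of Bucket_Sort for inputs in [0, 1); the per-bucket insertion sort of step v is replaced here by Python's built-in list sort for brevity:

    def bucket_sort(A):
        # assumes every element x satisfies 0 <= x < 1
        n = len(A)
        B = [[] for _ in range(n)]
        for x in A:
            B[int(n * x)].append(x)     # element x lands in bucket floor(n*x)
        out = []
        for bucket in B:
            bucket.sort()               # stand-in for the insertion sort of step v
            out.extend(bucket)
        return out

    data = [0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68]
    assert bucket_sort(data) == sorted(data)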
  • 151.  Analysis of Bucket sort:  It can be observed that, except for the sorting using insertion sort, all instructions execute in O(n) time.  In total, n calls are made to insertion sort. To calculate the cost of these calls, consider n_i, a random variable denoting the number of elements placed in bucket B[i].  Recall that insertion sort runs in quadratic time. Thus, the running time of bucket sort is: T(n) = θ(n) + Σ_{i=0}^{n−1} O(n_i^2).  The expected time to sort the elements in the buckets is: E[T(n)] = E[θ(n) + Σ_{i=0}^{n−1} O(n_i^2)] = θ(n) + Σ_{i=0}^{n−1} E[O(n_i^2)] = θ(n) + Σ_{i=0}^{n−1} O(E[n_i^2]) [by linearity of expectation]
  • 152.  We know that there are n elements and n buckets, so the probability that an element lands in bucket B[i] is 1/n.  n_i follows the binomial distribution B(k; n, p) with p = 1/n.  The expected value of this random variable is E[n_i] = np = 1.  The variance is calculated as: var[n_i] = np(1 − p) = n·(1/n)(1 − 1/n) = 1 − 1/n.  It is noticeable that for any random variable: E[n_i^2] = var[n_i] + E^2[n_i] = (1 − 1/n) + 1^2 = 2 − 1/n.  The expected time for Bucket sort is therefore: T(n) = θ(n) + Σ_{i=0}^{n−1} O(2 − 1/n) = θ(n)
  • 153.  Thus, the entire Bucket sort algorithm runs in θ(n) linear expected time. Non-Comparison Based Sorts:
Algorithm | Worst Case | Average Case | Best Case | In-place
Counting Sort | O(n + k) | O(n + k) | O(n + k) | ×
Radix Sort | O(d(n + k)) | O(d(n + k)) | O(d(n + k)) | –
Bucket Sort | – | O(n) | – | –