Introduction
Algorithms-Design &
Analysis
By:
Dr. Pankaj Agarwal
Professor & Head,
Dept. of Computer Sc & Engineering
IMS Engineering College
Problem − Design an algorithm to add two
numbers and display the result.
• Step 1 − START
• Step 2 − declare three integers a, b & c
• Step 3 − define values of a & b
• Step 4 − add values of a & b
• Step 5 − store output of step 4 to c
• Step 6 − print c
• Step 7 − STOP
Alternatively, the algorithm can be written as
• Step 1 − START ADD
• Step 2 − get values of a & b
• Step 3 − c ← a + b
• Step 4 − display c
• Step 5 − STOP
Usually the second method is used to describe an algorithm, since it lets the analyst study the algorithm while ignoring all unwanted definitions. Writing step numbers is optional.
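As a sketch, the second form translates directly into C (the sample values 3 and 4 are assumptions for illustration):

#include <stdio.h>

/* Direct C translation of the ADD algorithm above */
int main(void)
{
    int a = 3, b = 4, c;   /* Steps 2-3: declare and define a & b */
    c = a + b;             /* Steps 4-5: add and store the result in c */
    printf("%d\n", c);     /* Step 6: print c */
    return 0;              /* Step 7: stop */
}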
The term “Algorithm”
• first used in about 1230 and then by Chaucer in 1391.
• English adopted the French term, but it wasn't until the
late 19th century that "algorithm" took on the meaning
that it has in modern English.
• Algorithms are essential to the way computers process
data.
• An algorithm can be viewed as any sequence of operations that can be simulated by a computing system.
• The order of computation is always crucial to the functioning of the algorithm.
Difference between Algorithm and Pseudocode
• An algorithm is a formal, well-defined procedure with some specific characteristics that describes a process which could be executed by a computer to perform a specific task. Generally, the word "algorithm" can be used to describe any high-level task in computer science.
• On the other hand, pseudocode is an informal, (often rudimentary) human-readable description of an algorithm that leaves out many of its granular details.
• Writing pseudocode imposes no restrictions on style; its only objective is to describe the high-level steps of the algorithm in a realistic manner, in natural language.
Algorithm Design
• Goal: design an efficient algorithm that uses minimum time and space.
• Some approaches can be efficient with respect to time consumption, whereas other approaches may be memory efficient.
• Often, time consumption and memory usage cannot both be optimized simultaneously; there is a trade-off between them.
Problem Development Steps
• Problem definition
• Development of a model
• Specification of an Algorithm
• Designing an Algorithm
• Checking the correctness of an Algorithm
• Analysis of an Algorithm
• Implementation of an Algorithm
• Program testing
• Documentation
Characteristics of Algorithms
• Algorithms must have a unique name
• Algorithms should have explicitly defined set of
inputs and outputs
• Algorithms are well-ordered with unambiguous
operations
• Algorithms halt in a finite amount of time.
Algorithms should not run indefinitely, i.e., an algorithm must terminate at some point
Algorithm: Insertion-Sort
Input: A list L of integers of length n
Output: A sorted list L1 containing those integers present in L
Step 1: Keep a sorted list L1 which starts off empty
Step 2: Perform Step 3 for each element in the original list L
Step 3: Insert it into the correct position in the sorted list L1
Step 4: Return the sorted list L1
Step 5: Stop
• Here is a pseudocode which describes how the high-level abstract process mentioned above in the algorithm Insertion-Sort could be described in a more realistic way.
for i <- 1 to length(A) - 1
    x <- A[i]
    j <- i
    while j > 0 and A[j-1] > x
        A[j] <- A[j-1]
        j <- j - 1
    A[j] <- x
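A minimal runnable C version of this pseudocode (a sketch; the sample array in main is an assumption for illustration):

#include <stdio.h>

/* Insertion sort following the pseudocode above (0-indexed array) */
void insertion_sort(int A[], int n)
{
    for (int i = 1; i < n; i++) {
        int x = A[i];               /* element to insert */
        int j = i;
        while (j > 0 && A[j - 1] > x) {
            A[j] = A[j - 1];        /* shift larger elements right */
            j--;
        }
        A[j] = x;                   /* place x in its correct position */
    }
}

int main(void)
{
    int A[] = {6, 4, 8, 1, 3};
    insertion_sort(A, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", A[i]);        /* prints: 1 3 4 6 8 */
    return 0;
}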
Classification of algorithms
• There are various ways to classify algorithms, each
with its own merits.
By implementation
• One way to classify algorithms is by their means of implementation.
Recursion
A recursive algorithm is one that invokes (makes reference to) itself
repeatedly until a certain condition (also known as termination
condition) matches, which is a method common to functional
programming.
Iterative algorithms use repetitive constructs like loops and sometimes
additional data structures like stacks to solve the given problems.
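As an illustration (factorial here is an assumed example, not taken from the slides), the same problem can be solved both ways in C:

/* Recursive: the function invokes itself until the termination
 * condition n <= 1 is reached. */
long fact_recursive(int n)
{
    if (n <= 1)
        return 1;                   /* termination condition */
    return n * fact_recursive(n - 1);
}

/* Iterative: a repetitive construct (a loop) replaces the
 * self-invocation. */
long fact_iterative(int n)
{
    long result = 1;
    for (int i = 2; i <= n; i++)
        result *= i;
    return result;
}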
Classification of algorithms
Logical
An algorithm may be viewed as controlled logical deduction.
This notion may be expressed as: Algorithm = logic + control.
The logic component expresses the axioms that may be used in the
computation and the control component determines the way in which
deduction is applied to the axioms.
This is the basis for the logic programming paradigm. In pure logic
programming languages the control component is fixed and algorithms
are specified by supplying only the logic component.
The appeal of this approach is the elegant semantics: a change in the
axioms has a well-defined change in the algorithm.
Serial, parallel or distributed
Algorithms are usually discussed with the assumption that computers
execute one instruction of an algorithm at a time. Those computers are
sometimes called serial computers. An algorithm designed for such an
environment is called a serial algorithm, as opposed to parallel
algorithms or distributed algorithms.
Parallel algorithms take advantage of computer architectures where several
processors can work on a problem at the same time, whereas distributed
algorithms utilize multiple machines connected with a computer network.
Parallel or distributed algorithms divide the problem into more symmetrical or asymmetrical subproblems and collect the results back together.
Deterministic or non-deterministic
Deterministic algorithms solve the problem with exact decision
at every step of the algorithm whereas non-deterministic
algorithms solve problems via guessing although typical
guesses are made more accurate through the use
of heuristics.
Exact or approximate
While many algorithms reach an exact solution, approximation
algorithms seek an approximation that is closer to the true
solution. Approximation can be reached by either using a
deterministic or a random strategy. Such algorithms have
practical value for many hard problems.
Algorithm Analysis
• Efficiency of an algorithm can be analyzed at two different stages,
before implementation and after implementation. They are the
following −
• A Priori Analysis − This is a theoretical analysis of
an algorithm.
• A Posteriori Analysis − This is an empirical analysis of an algorithm; the selected algorithm is implemented using a programming language and measured.
Analysis of Algorithms
Complexity analysis of an algorithm & its need-:
❑Algorithm analysis provides theoretical estimates for the resources
(Time & Memory) needed by an algorithm before actual
implementation.
❑ Understanding the behaviour & performance of the algorithm in
terms of Time (Time Complexity) and Space(Space Complexity)
requirements w.r.t growth in input size of the data.
❑Algorithm Analysis helps us to predict – how feasible & effective
the algorithm will be after its actual implementation. This helps us to
design more efficient algorithms.
❑ We only analyze correct algorithms.
• Analysis & study of algorithms is abstracted without the
use of a specific programming language or
implementation.
• It focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation.
• Usually pseudocode is used for analysis, as it is the simplest and most general representation.
• Algorithms are ultimately implemented on particular hardware / software platforms, and their algorithmic efficiency is eventually put to the test using real code.
• Scaling from small n to large n frequently exposes
inefficient algorithms that are otherwise benign.
❑In theoretical analysis of algorithms it is common to estimate their complexity
in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily
large input. Big O notation, Big-omega notation and Big-theta notation are used.
❑ Usually asymptotic estimates are used because different implementations of
the same algorithm may differ in efficiency. However, the efficiencies of any
two "reasonable" implementations of a given algorithm are related by a
constant multiplicative factor called a hidden constant.
❑ Exact (not asymptotic) estimates require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g., a Turing machine, and/or by postulating that certain operations are executed in unit time.
Exact VS Asymptotic Analysis
Run-time analysis
• Run-time analysis is a theoretical classification
that estimates and anticipates the increase in
running time (or run-time) of an algorithm as
its input size (usually denoted as n) increases.
Run-time efficiency is a topic of great interest
in Computer Science: A program can take
seconds, hours or even years to finish
executing, depending on which algorithm it
implements
Shortcomings of empirical metrics
• Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms.
• Take as an example a program that looks up a specific entry in a
sorted list of size n. Suppose this program were implemented on
Computer A, a state-of-the-art machine, using a linear search
algorithm, and on Computer B, a much slower machine, using a
binary search algorithm.
• Benchmark testing on the two computers running their
respective programs might look something like the following
n (list size)   Computer A run-time (ns)   Computer B run-time (ns)
15              7                          100,000
65              32                         150,000
250             128                        200,000
1,000           500                        250,000
Based on these metrics, it would be easy to jump to the conclusion that Computer A is
running an algorithm that is far superior in efficiency to what Computer B is running.
• However, if the size of the input-list is increased to a sufficient
number, that conclusion is dramatically demonstrated to be in
error:
n (list size)     Computer A run-time (ns)       Computer B run-time (ns)
15                7                              100,000
65                32                             150,000
250               125                            200,000
1,000             500                            250,000
...               ...                            ...
1,000,000         500,000                        500,000
4,000,000         2,000,000                      550,000
16,000,000        8,000,000                      600,000
...               ...                            ...
63,072 × 10^12    31,536 × 10^12 (= 1 year)      1,375,000 (= 1.375 milliseconds)
Exact Model of Computation
Cost models
Time efficiency estimates depend on what we define to be a step. For the
analysis to correspond usefully to the actual execution time, the time
required to perform a step must be guaranteed to be bounded above by a
constant.
Two cost models are generally used
❑Uniform cost model, also called uniform-cost measurement , assigns a
constant cost to every machine operation, regardless of the size of the
numbers involved
❑ Logarithmic cost model, also called logarithmic-cost measurement,
assigns a cost to every machine operation proportional to the number of
bits involved
The latter is more cumbersome to use, so it is only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography.
RAM -Model of implementation
Before implementation we need a model of implementation technology. Machine-
independent algorithm design depends upon a hypothetical computer called the Random
Access Machine or RAM
Random Access Model (RAM) is most popular
Assumptions:
❑Each ``simple'' operation (+, *, -, =, if, call) takes exactly 1 time step.
❑Serial execution of instructions
❑ Each instruction to take a constant amount of time
❑ simple data type- integer & float
❑ Memory hierarchy- not modelled. Each memory access takes exactly one time step, and we have
as much memory as we need. The RAM model takes no notice of whether an item is in cache or on
the disk, which simplifies the analysis.
Under the RAM model, we measure the run time of an algorithm by counting up the number of
steps it takes on a given problem instance. By assuming that our RAM executes a given number of
steps per second, the operation count converts easily to the actual run time.
RAM proves an excellent model for understanding how an algorithm will perform on a real
computer. It strikes a fine balance by capturing the essential behavior of computers while being
simple to work with. We use the RAM model because it is useful in practice
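As a sketch of RAM-style counting (the step breakdown and the 10^9 operations-per-second figure are assumptions for illustration), an operation count converts to an estimated run time like this:

#include <stdio.h>

int main(void)
{
    long n = 1000000;             /* problem size (assumed) */
    /* For "for (i = 0; i < n; i++) sum += a[i];" the RAM model counts:
     * 1 initialization + (n+1) loop tests + n increments + n additions,
     * i.e. roughly 3n + 2 unit-cost steps. */
    long steps = 3 * n + 2;
    double ops_per_second = 1e9;  /* assumed machine speed */
    printf("estimated run time: %g seconds\n", steps / ops_per_second);
    return 0;
}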
Important tips for analyzing algorithms
Questions to be asked while analyzing algorithms
• How does one calculate the running time of an algorithm?
• How can we compare two different algorithms?
• How do we know if an algorithm is `optimal'?
1. Count the number of basic operations performed by the algorithm
on the worst-case input
A basic operation could be:
▪ An assignment
▪ A comparison between two variables
▪ An arithmetic operation between two variables.
The worst-case input is that input assignment for which the most basic operations are performed.
Example:
n := 5;
loop
  get(m);
  n := n - 1;
until (m = 0 or n = 0)
Worst-case: 5 iterations
Usually we are not concerned with the number of steps for a
single fixed case but wish to estimate the running time in terms
of the `input size'.
get(n);
loop
get(m);
n := n -1;
until (m=0 or n=0)
Worst-case: n iterations
Important tips for analyzing algorithms
2) Counting the Number of Basic Operations
a) Sequence: P and Q are two algorithm sections:
Time( P ; Q ) = Time( P ) + Time( Q )
b) Iteration:
while < condition > loop
P;
end loop;
or
for i in 1..n loop
P;
end loop
Time = Time( P ) * ( Worst-case number of iterations )
c) Conditional:
if < condition > then P;
else Q;
end if;
Time = Time(P) if <condition> = true
     = Time(Q) if <condition> = false
Example:
for i in 1..n loop
for j in 1..n loop
if i < j then
swap (a(i,j), a(j,i)); -- Basic operation
end if;
end loop;
end loop;
Time ≤ n·n·1 = n²
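A C rendering of this loop as a sketch (the counter ops is added here to make the number of basic operations visible; the dimension N is an assumption):

#define N 4   /* matrix dimension (assumed for illustration) */

/* Swap a[i][j] with a[j][i] whenever i < j: the basic operation runs
 * once per pair with i < j, i.e. N(N-1)/2 times, bounded above by N*N. */
long count_swaps(int a[N][N])
{
    long ops = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            if (i < j) {
                int tmp = a[i][j];  /* basic operation: one swap */
                a[i][j] = a[j][i];
                a[j][i] = tmp;
                ops++;
            }
    return ops;                     /* N(N-1)/2 <= N*N */
}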
Recursive structures
Consider a recursive function for obtaining the Fibonacci series:
long fib(int n)
{
    if (n <= 1)                       // line 1
        return 1;                     // line 2
    else
        return fib(n-1) + fib(n-2);   // line 3
}
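Line 3 spawns two recursive calls, so the running time satisfies T(n) = T(n-1) + T(n-2) + c, which grows exponentially, roughly O(2^n). For contrast, a linear-time iterative sketch (keeping the same convention that fib returns 1 for n <= 1):

/* Iterative Fibonacci: one loop instead of two recursive calls,
 * so the running time is O(n) instead of exponential. */
long fib_iter(int n)
{
    if (n <= 1)
        return 1;                  /* same base case as above */
    long prev = 1, curr = 1;
    for (int i = 2; i <= n; i++) {
        long next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}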
Expressing an Algorithm as a Function of Input N

Statement              Cost   Frequency   Total cost
1) Algorithm SUM(A,N)  0      0           0
2) {                   0      0           0
3) S = 0               1      1           1
4) For I = 1 to N      1      N+1         N+1
5) S = S + A[i]        1      N           N
6) Return S            1      1           1
7) }                   0      0           0
   Total                                  2N+3
Input size and basic operation examples

Problem                                           Input size measure          Basic operation
Search for a key in a list of n items             Number of items in list n   Key comparison
Multiply two matrices of floating-point numbers   Dimensions of matrices      Floating-point multiplication
Compute a^n                                       n                           Floating-point multiplication
Graph problem                                     #vertices and/or edges      Visiting a vertex or traversing an edge
• Complexity of an algorithm is analyzed in two
perspectives: Time and Space.
Time Complexity
• It’s a function describing the amount of time required to
run an algorithm in terms of the size of the input.
Space Complexity
• amount of memory an algorithm takes in terms of the
size of input to the algorithm.
• Space complexity is sometimes ignored, since the focus is usually on time complexity.
Time and Space Complexity
Space complexity
For any algorithm memory may be used for the following:
1.Variables (This include the constant values, temporary values)
2.Program Instruction
3.Execution
Space complexity is the amount of memory used by the
algorithm (including the input values to the algorithm) to execute
and produce the result.
Sometimes Auxiliary Space is confused with Space Complexity. But Auxiliary Space is the extra space or the temporary space used by the algorithm during its execution.
Space Complexity = Auxiliary Space + Input space
Memory Usage while Execution
While executing, algorithm uses memory space for three reasons:
1.Instruction Space
It's the amount of memory used to save the compiled version of
instructions.
2.Environmental Stack
Sometimes an algorithm(function) may be called inside another
algorithm(function). In such a situation, the current variables are pushed
onto the system stack, where they wait for further execution and then the
call to the inside algorithm(function) is made.
For example, if a function A() calls function B() inside it, then all the variables of the function A() will get stored on the system stack temporarily, while the function B() is called and executed inside the function A().
3.Data Space
Amount of space used by the variables and constants.
But while calculating the Space Complexity of any algorithm, we usually
consider only Data Space and we neglect the Instruction
Space and Environmental Stack.
Calculating the Space Complexity
• For calculating the space complexity, we need to know the
value of memory used by different type of datatype variables,
which generally varies for different operating systems, but the
method for calculating the space complexity remains the same.
Type                                                      Size
bool, char, unsigned char, signed char, __int8            1 byte
__int16, short, unsigned short, wchar_t, __wchar_t        2 bytes
float, __int32, int, unsigned int, long, unsigned long    4 bytes
double, __int64, long double, long long                   8 bytes
{ int z = a + b + c; return(z); }
In the above expression, the variables a, b, c and z are all integer types. Assuming an integer takes 2 bytes (exact sizes vary by platform), the total memory requirement will be (8 + 2) = 10 bytes; the additional 2 bytes are for the return value.
Because this space requirement is fixed for the above example, it is called Constant Space Complexity.
Calculating the Space Complexity
// n is the length of array a[]
int sum(int a[], int n)
{
    int x = 0;                   // 2 bytes for x
    for (int i = 0; i < n; i++)  // 2 bytes for i
    {
        x = x + a[i];
    }
    return(x);
}
•In the above code, 2*n bytes of space is required for the array a[] elements.
•2 bytes each for x, n, i and the return value.
Hence the total memory requirement will be (2n + 8) bytes, which increases linearly with the input value n; hence it is called Linear Space Complexity.
Similarly, we can have quadratic and other complex space complexity as
well, as the complexity of an algorithm increases.
But we should always focus on writing algorithm code in such a way that we
keep the space complexity minimum.
Complexity Analysis of Algorithms
The Time Complexity of an Algorithm
Specifies how the running time depends on the size of the input
•To estimate how long a program will run.
•To estimate the largest input that can reasonably be given to the
program.
•To compare the efficiency of different algorithms.
•To help focus on the parts of code that are executed the largest
number of times.
•To choose an algorithm for an application.
Types of Notations for Time Complexity
• Big Oh denotes "fewer than or the same as"
<expression> iterations.
• Big Omega denotes "more than or the same as"
<expression> iterations.
• Big Theta denotes "the same as" <expression> iterations.
• Little Oh denotes "fewer than" <expression> iterations.
• Little Omega denotes "more than" <expression>
iterations.
Understanding Notations
• O(expression) is the set of functions that grow slower than or
at the same rate as expression. It indicates the maximum time required by an algorithm for all input values. It represents the worst case of an algorithm's time complexity.
• Omega(expression) is the set of functions that grow faster
than or at the same rate as expression. It indicates the
minimum time required by an algorithm for all input values. It
represents the best case of an algorithm's time complexity.
• Theta(expression) consists of all the functions that lie in both O(expression) and Omega(expression). It indicates the average bound of an algorithm. It represents the average case of an algorithm's time complexity.
Time Complexity Is a Function
Specifies how the running time depends on the size of the input: a function mapping the "size" of the input, n, to the "time" T(n) executed.
Theoretical analysis of time efficiency
Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size.
• Basic operation: the operation that contributes most towards the running time of the algorithm.
T(n) ≈ c_op · C(n)
where T(n) is the running time, c_op is the execution time for the basic operation, C(n) is the number of times the basic operation is executed, and n is the input size.
Asymptotic Analysis of Algorithms
❑Defines a mathematical bound on run-time performance,
❑Determines the best-case, average-case, and worst-case scenarios,
❑Helps in understanding the growth rate of the functions (algorithms),
❑To compare the efficiency of different algorithms,
❑ To choose an algorithm for an application,
❑ machine dependent factors are ignored
▪ mathematical tools to represent time complexity
of algorithms for asymptotic analysis.
▪ doesn’t require algorithms to be implemented
▪ Determines the rate of growth of the function as
input tends to infinity
Asymptotic notations
• When it comes to analysing the complexity of any algorithm in terms of time and space, we can never provide an exact number for the time and the space required by the algorithm; instead, we express them using some standard notations, known as Asymptotic Notations.
Asymptotic Analysis
Let us take an example: suppose some algorithm has a time complexity of T(n) = n² + 3n + 4, which is a quadratic function. For large values of n, the 3n + 4 part becomes insignificant compared to the n² part.
For n = 1000, n² will be 1,000,000 while 3n + 4 will be only 3004.
Also, when we compare the execution times of two algorithms, the constant coefficients of the higher-order terms are neglected.
Asymptotic Computational Complexity
❑ commonly associated with the usage of the big O notation.
❑With respect to computational resources, asymptotic time
complexity and asymptotic space complexity are commonly
estimated.
❑"computational complexity" usually refers to the upper bound for
the asymptotic computational complexity of an algorithm or a
problem, which is usually written in terms of the big O notation,
❑Other types of (asymptotic) computational complexity estimates are lower bounds ("Big Omega" notation; e.g., Ω(n)) and asymptotically tight estimates, when the asymptotic upper and lower bounds coincide (written using the "big Theta" notation; e.g., Θ(n log n)).
Asymptotic Time Complexity
Time complexities are classified by the nature of the function T(n). For instance, an algorithm with T(n) = O(n) is called a linear time algorithm, and an algorithm with T(n) = O(M^n) for some constant M > 1 is said to be an exponential time algorithm.
Common Time Complexities
❖An algorithm is said to be constant time (also written
as O(1) time) if the value of T(n) is bounded by a value that does not
depend on the size of the input.
❖An algorithm is said to take logarithmic time if T(n) = O(log n).
By the change-of-base rule for logarithms, log_a n and log_b n differ only by a constant multiplier, which in big-O notation is discarded; thus O(log n) is the standard notation for logarithmic time algorithms regardless of the base of the logarithm.
• Logarithmic Time: O(log n) An algorithm is said to run
in logarithmic time if its time execution is proportional to
the logarithm of the input size.
• Example: binary search
• Algorithms taking logarithmic time are commonly found in
operations on binary trees or when using binary search.
• An O(log n) algorithm is considered highly efficient, as the
operations per instance required to complete decrease with each
instance.
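A minimal C binary search for reference (a sketch; assumes the array is sorted and returns the index of key, or -1 if absent):

/* Each comparison halves the remaining range, so at most about
 * log2(n) iterations are executed: O(log n). */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo+hi)/2 */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* discard the left half */
        else
            hi = mid - 1;               /* discard the right half */
    }
    return -1;                          /* key not present */
}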
▪ An algorithm is said to run in polylogarithmic time if T(n) = O((log n)^k) for some constant k. For example, matrix chain ordering can be solved in polylogarithmic time on a Parallel Random Access Machine.
▪ Sub-linear time if T(n) = o(n). This includes algorithms with the time complexities defined above, as well as others such as O(n^(1/2)). Sub-linear time algorithms are typically randomized, and provide only approximate solutions.
▪ Linear time if the time complexity is O(n). For large enough input sizes the running time increases linearly with the size of the input.
▪ Quasilinear time if T(n) = O(n log^k n) for some constant k; linearithmic time is the case k = 1. Using soft-O notation these algorithms are Õ(n). Quasilinear time algorithms are also o(n^(1+ε)) for every ε > 0, and thus run faster than any polynomial in n with exponent strictly greater than 1.
▪ Linearithmic time is a special case of quasilinear time where the exponent k = 1 on the logarithmic term.
▪ Subquadratic time if T(n) = o(n²).
For example, simple comparison-based sorting algorithms are quadratic (e.g. insertion sort), but more advanced algorithms can be found that are subquadratic (e.g. Shell sort). No general-purpose sorts run in linear time, but the change from quadratic to sub-quadratic is of great practical importance.
▪ Polynomial time if the running time is upper bounded by a polynomial expression in the size of the input for the algorithm, i.e., T(n) = O(n^k) for some constant k. Problems for which a deterministic polynomial time algorithm exists belong to the complexity class P.
▪ Superpolynomial time if T(n) is not bounded above by any polynomial; that is, T(n) is ω(n^c) for all constants c, where n is the input parameter, typically the number of bits in the input.
For example, an algorithm that runs for 2^n steps on an input of size n requires superpolynomial time.
▪ Quasi-polynomial time: runs slower than polynomial time, yet not so slow as to be exponential time. The worst-case running time of a quasi-polynomial time algorithm is 2^(O((log n)^c)) for some fixed c. The best-known classical algorithms for integer factorization, for instance, run in super-polynomial but sub-exponential time.
Time Complexities
Examples
O(1): The time complexity of a function (or set of statements) is considered O(1) if it doesn't contain a loop, recursion, or a call to any other non-constant-time function.
// set of non-recursive and non-loop statements
For example swap() function has O(1) time complexity.
A loop or recursion that runs a constant number of times is also
considered as O(1). For example the following loop is O(1).
// Here c is a constant
for (int i = 1; i <= c; i++)
{ // some O(1) expressions }
O(n): The time complexity of a loop is considered O(n) if the loop variable is incremented / decremented by a constant amount. For example, the following functions have O(n) time complexity.
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c)
{ // some O(1) expressions }
for (int i = n; i > 0; i -= c)
{ // some O(1) expressions }
O(n^c): The time complexity of nested loops is equal to the number of times the innermost statement is executed. For example, the following sample loops have O(n²) time complexity:
for (int i = 1; i <= n; i += c)
{ for (int j = 1; j <= n; j += c)
  { // some O(1) expressions } }
for (int i = n; i > 0; i -= c)
{ for (int j = i+1; j <= n; j += c)
  { // some O(1) expressions } }
O(log n): The time complexity of a loop is considered O(log n) if the loop variable is divided / multiplied by a constant amount.
for (int i = 1; i <= n; i *= c)
{ // some O(1) expressions }
for (int i = n; i > 0; i /= c)
{ // some O(1) expressions }
For example, Binary Search has O(log n) time complexity.
Let us see mathematically how it is O(log n). The series that we get in the first loop is 1, c, c², c³, …, c^k. If we put k equal to log_c n, we get c^(log_c n), which is n.
O(log log n): The time complexity of a loop is considered O(log log n) if the loop variable is reduced / increased exponentially by a constant amount.
// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c))
{ // some O(1) expressions }
// Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 0; i = fun(i))
{ // some O(1) expressions }
In the first loop, i takes the values 2, 2^c, (2^c)^c = 2^(c²), (2^(c²))^c = 2^(c³), …, 2^(c^k). The last term must be less than or equal to n; putting k = log_c(log n) gives 2^(c^(log_c(log n))) = 2^(log n) = n, which agrees with the value of the last term. So there are in total log_c(log n) iterations, each taking a constant amount of time, and therefore the total time complexity is O(log(log n)).
• How to combine time complexities of
consecutive loops?
When there are consecutive loops, we calculate time complexity
as sum of time complexities of individual loops.
for (int i = 1; i <=m; i += c)
{ // some O(1) expressions }
for (int i = 1; i <=n; i += c)
{ // some O(1) expressions }
Time complexity of above code is O(m) + O(n) which is
O(m+n) If m == n, the time complexity becomes O(2n)
which is O(n).
Asymptotic Notations: Big Oh Notation
We use big-O notation for asymptotic upper bounds, since it
bounds the growth of the running time from above for large
enough input sizes
Because big-O notation is asymptotic, it gives only an approximate estimate.
Asymptotic notations are used when no exact estimates can
be computed.
The letter O is used because the growth rate of a function is
also referred to as order of the function.
Associated with big O notation are several related notations,
using the symbols o, Ω, ω, and Θ, to describe other kinds of
bounds on asymptotic growth rates.
Best-case, average-case, worst-case
Worst-case running time of an algorithm
• The longest running time for any input of size n
• An upper bound on the running time for any input: a guarantee that the algorithm will never take longer
• Example: sort a set of numbers in increasing order when the data is in decreasing order
• The worst case can occur fairly often, e.g. in searching a database for a particular piece of information
Best-case running time
• Example: sort a set of numbers in increasing order when the data is already in increasing order
Average-case running time
• May be difficult to determine; one must make some assumptions about possible inputs of size n
Average-case
❑Average Case analysis: determines average measure for the
amount of resources the algorithm uses on a random input
❑It studies the complexity of algorithms over inputs drawn
randomly from a particular probabilistic distribution
❑It differs from worst case which considers the maximal
complexity of the algorithm over all possible inputs
Why Average Case:
❑Some problems may be intractable in worst case, the inputs
which elicit this behaviour may rarely occur in practice
❑Here the average case analysis may be more accurate measure of
an algorithm’s performance
❑Most useful in cryptography and randomized algorithms
❑ Comparisons among algorithms based on average case analysis
can be more practical in some cases.
Big Oh Notation
• Big O notation is a mathematical notation that describes
the limiting behavior of a function when the argument tends
towards a particular value or infinity.
• It is a member of a family of notations invented by Paul
Bachmann, Edmund Landau, and others, collectively
called Bachmann–Landau notation or asymptotic notation.
• big O notation is used to classify algorithms according to how their
running time or space requirements grow as the input size grows
• The letter O is used because the growth rate of a function is also
referred to as the order of the function.
• A description of a function in terms of big O notation usually only
provides an upper bound on the growth rate of the function.
• If f(N) = O(g(N)):
• there exist c, n0 > 0 such that f(N) ≤ c·g(N) when N ≥ n0
• f(N) grows no faster than g(N) for "large" N
• g(N) is an upper bound on f(N)
• Ex: if F(n) = 3n² + 2n + 1 then F(n) ≤ 4n² for all n ≥ 3, with c = 4
• Therefore F(n) = O(n²)
BIG-Oh
The idea is to establish a relative order among functions
for large n. Bounds the function from above
Big Oh Notation
A logarithmic algorithm – O(log n)
Runtime grows logarithmically in proportion to n.
A linear algorithm – O(n)
Runtime grows directly in proportion to n.
A superlinear algorithm – O(n log n)
Runtime grows in proportion to n log n.
A polynomial algorithm – O(n^c)
Runtime grows faster than all of the above, as a power of n.
An exponential algorithm – O(c^n)
Runtime grows even faster than any polynomial algorithm, based on n.
A factorial algorithm – O(n!)
Runtime grows the fastest and quickly becomes unusable for even small values of n.
Little o asymptotic notation
• Big-Ο is used as a tight upper-bound on the growth of an
algorithm’s effort (this effort is described by the function f(n)),
even though, as written, it can also be a loose upper-bound.
“Little-ο” (ο()) notation is used to describe an upper-bound that
cannot be tight.
In mathematical relation,
f(n) = o(g(n)) means
lim f(n)/g(n) = 0
n→∞
• Is 7n + 8 ∈ o(n²)?
In order for that to be true, for any c > 0, we have to be able to find an n0 that makes f(n) < c·g(n) asymptotically true.
Let's take an example: if c = 100, we can check that the inequality clearly holds.
• Then check the limit:
lim (7n + 8)/n² = lim 7/(2n) = 0 (by L'Hôpital's rule)
n→∞ n→∞
• Hence 7n + 8 ∈ o(n²).
Omega Notation, Ω
• Expresses the lower bound of an algorithm's running time.
• A measure of best-case time complexity.
For a function f(n): f(n) = Ω(g(n)) if there exist c > 0 and n0 such that c·g(n) ≤ f(n) for all n ≥ n0.
Ex: if F(n) = 3n² + 2n + 1 then F(n) ≥ 3n² for all n ≥ 1, with c = 3 and g(n) = n².
Theta Notation, Θ
▪ Expresses both a lower bound and an upper bound on an algorithm's running time.
▪ Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0}
f(n) is always between c1·g(n) and c2·g(n) for large values of n (n ≥ n0).
Notations
In typical usage, the formal definition of O notation is not used
directly; rather, the O notation for a function f is derived by the
following simplification rules:
❖If f(x) is a sum of several terms, the one with the largest growth
rate is kept, and all others omitted.
❖If f(x) is a product of several factors, any constants (terms in the
product that do not depend on x) are omitted.
Big O is the most commonly used asymptotic notation for comparing functions,
although in many cases Big O may be replaced with Big Theta Θ for
asymptotically tighter bounds.
Insertion Sort Example
InsertionSort(A, n) {
    for i = 2 to n {
        key = A[i]
        j = i - 1;
        while (j > 0) and (A[j] > key) {
            A[j+1] = A[j]
            j = j - 1
        }
        A[j+1] = key
    }
}
How many times will the inner while loop execute?
Insertion Sort
Statement                                    Effort
InsertionSort(A, n) {
  for i = 2 to n {                           c1·n
    key = A[i]                               c2·(n-1)
    j = i - 1;                               c3·(n-1)
    while (j > 0) and (A[j] > key) {         c4·T
      A[j+1] = A[j]                          c5·(T-(n-1))
      j = j - 1                              c6·(T-(n-1))
    }                                        0
    A[j+1] = key                             c7·(n-1)
  }                                          0
}
where T = t2 + t3 + … + tn, and ti is the number of while-expression evaluations for the i-th for-loop iteration
Analyzing Insertion Sort
• What can T be?
– Best case -- inner loop body never executed
• ti = 1 ➔ T(n) is a linear function
– Worst case -- inner loop body executed for all previous
elements
• ti = i ➔ T(n) is a quadratic function
– Average case
• ???
Examples
int count = 0;
for (int i = 0; i < N; i++)
    for (int j = 0; j < i; j++)
        count++;
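Here the inner loop body runs 0 + 1 + 2 + … + (N-1) = N(N-1)/2 times, so count ends at N(N-1)/2 and the running time is O(N²).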
Quiz & Interview
Questions
with Solutions
Consider the following three claims:
1. (n + k)^m = Θ(n^m), where k and m are constants
2. 2^(n+1) = O(2^n)
3. 2^(2n+1) = O(2^n)
Which of these claims are correct?
(A) 1 and 2
(B) 1 and 3
(C) 2 and 3
(D) 1, 2, and 3
• Answer: (A)
Explanation: (n + k)^m and Θ(n^m) are asymptotically the same, as theta notation can always be written by taking the leading-order term in a polynomial expression.
• 2^(n+1) and O(2^n) are also asymptotically the same, as 2^(n+1) can be written as 2 · 2^n, and constant multiplication/addition doesn't matter in asymptotic notation.
• 2^(2n+1) and O(2^n) are not the same, as the constant is in the power: 2^(2n+1) = 2 · 4^n.
Interview Questions
The auxiliary space of insertion sort is O(1), what
does O(1) mean ?
(A) The memory (space) required to process the
data is not constant.
(B) It means the amount of extra memory Insertion
Sort consumes doesn’t depend on the input. The
algorithm should use the same amount of memory
for all inputs.
(C) It takes only 1 KB of memory.
(D) It is the speed at which the elements are
traversed.
• Answer: (B)
Explanation: The term O(1) states that the space required by insertion sort is constant, i.e., the space required doesn't depend on the input.
void fun(int n, int arr[])
{
    int i = 0, j = 0;
    for (; i < n; ++i)
        while (j < n && arr[i] < arr[j])
            j++;
}
What is the time complexity of the above function?
(A) O(n)
(B) O(n^2)
(C) O(nlogn)
(D) O(n(logn)^2)
Answer: (A)
Explanation: At first look, the time complexity seems to be O(n^2) due to the two loops. But note that the variable j is not initialized for each value of variable i, so the inner loop runs at most n times in total and the overall complexity is O(n). Observe the difference between the function given in the question and the function below:
void fun(int n, int arr[])
{
    int i = 0, j = 0;
    for (; i < n; ++i)
    {
        j = 0;
        while (j < n && arr[i] < arr[j])
            j++;
    }
}
What are time complexities of the functions?
int fun1(int n)
{
if (n <= 1) return n;
return 2*fun1(n-1);
}
int fun2(int n)
{
if (n <= 1) return n;
return fun2(n-1) + fun2(n-1);
}
(A) O(2^n) for both fun1() and fun2()
(B) O(n) for fun1() and O(2^n) for fun2()
(C) O(2^n) for fun1() and O(n) for fun2()
(D) O(n) for both fun1() and fun2()
• Answer: (B)
Explanation: Time complexity of fun1() can be written as T(n) = T(n-1) + C, which is O(n).
• Time complexity of fun2() can be written as T(n) = 2T(n-1) + C, which is O(2^n).
int fun(int n)
{
int count = 0;
for (int i = n; i > 0; i /= 2)
for (int j = 0; j < i; j++)
count += 1;
return count;
}
What is time complexity of fun()?
(A) O(n^2)
(B) O(nLogn)
(C) O(n)
(D) O(nLognLogn)
• Answer: (C)
Explanation: For an input integer n, the innermost statement of fun() is executed the following number of times:
• n + n/2 + n/4 + … + 1
• This geometric series sums to less than 2n, so the time complexity T(n) can be written as
• T(n) = O(n + n/2 + n/4 + … + 1) = O(n)
• The value of count is also n + n/2 + n/4 + … + 1
Solved Problems
Which of the following is not O(n^2)?
(A) (15^10) * n + 12099
(B) n^1.98
(C) n^3 / (sqrt(n))
(D) (2^20) * n
Answer: (C)
Explanation: The order of growth of option c is n^2.5 which is higher
than n^2.
Now consider a QuickSort implementation where we first find the median using a worst-case linear-time median-finding algorithm, then use the median as the pivot. What will be the worst-case time complexity of this modified QuickSort?
(A) O(n^2 Logn)
(B) O(n^2)
(C) O(n Logn Logn)
(D) O(nLogn)
Answer: (D)
Explanation: If we use the median as the pivot element, then the recurrence for all cases becomes
T(n) = 2T(n/2) + O(n),
which solves to O(n log n).
Solved Problems
What is the worst case time complexity of insertion sort where
position of the data to be inserted is calculated using binary search?
(A) N
(B) NlogN
(C) N^2
(D) N(logN)^2
Answer: (C)
Explanation: Applying binary search to calculate the position of the data to
be inserted doesn’t reduce the time complexity of insertion sort. This is
because insertion of a data at an appropriate position involves two steps:
1. Calculate the position.
2. Shift the data from the position calculated in step #1 one step right to
create a gap where the data will be inserted.
Using binary search reduces the time complexity in step #1 from O(N) to
O(logN). But, the time complexity in step #2 still remains O(N). So, overall
complexity remains O(N^2).
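A C sketch of this variant (binary search finds the insertion position in O(log N), but the shifting loop still costs O(N) per element, so the overall bound stays O(N^2)):

/* Binary insertion sort: step 1 (find position) is O(log N) per
 * element, step 2 (shift right) remains O(N) per element. */
void binary_insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int lo = 0, hi = i;             /* search within a[0..i-1] */
        while (lo < hi) {               /* step 1: binary search */
            int mid = lo + (hi - lo) / 2;
            if (a[mid] <= key)
                lo = mid + 1;
            else
                hi = mid;
        }
        for (int j = i; j > lo; j--)    /* step 2: shift to open a gap */
            a[j] = a[j - 1];
        a[lo] = key;
    }
}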
Consider the array A[] = {6,4,8,1,3} and apply insertion sort to sort the array. If the cost associated with each pass of the sort is 25 rupees, what is the total cost of the insertion sort when element 1 reaches the first position of the array?
(A) 50
(B) 25
(C) 75
(D) 100
Answer: (A)
Explanation: When element 1 reaches the first position of the array, only two passes are required, hence 25 × 2 = 50 rupees.
Step 1: 4 6 8 1 3
Step 2: 1 4 6 8 3
Tips for Solving Questions on Asymptotic Notation
#Tip1
• One should remember the general order of the following functions:
• O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(n^k) < O(2^n)
#Tip2
• If f(x) = Θ(g(x)), we can say that f(x) is O(g(x)) and f(x) is Ω(g(x)),
• and also that g(x) is O(f(x)) and Ω(f(x)).
#Tip3
The running time of a for/while loop is (number of iterations) × (running time of the statements inside the loop).
int sum = 0;
for (int i = 0; i < n; i++)
{
    sum = sum + i;
}
• The running time is O(n)
#Tip4: For nested loops
The running time of a statement inside a group of nested loops is the product of the sizes of the loops.
for (i = 0; i < n; i++)
    for (j = 0; j < k; j++)
        a++;
The running time is O(n × k)
#Tip5
• Sum of statements: add the running times of all the statements.
for (i = 0; i < n; i++)
    k++;                      → O(n)
for (j = 0; j < n; j++)
    for (t = 0; t < n; t++)
        m++;                  → O(n²)
• The total running time is O(n) + O(n²), which is O(n²)
#Tip6
• Recursive function: derive the recurrence relation and then solve it; this is a better way of getting the running time than for other structures.
• → Suppose T(n) is the running time of function ABC and the variable reduces to half in the next call; then
• T(n) = T(n/2) + (complexity of the rest of the code)
#Tip7
• An algorithm is O(log n) if it takes constant time to cut the problem in half, like binary search or heap operations.
• # T(n) = T(sqrt(n)) + O(1) → O(log log n), where T is the recurrence relation
#Tip8
int doSomething(int n)
{
    if (n <= 2)
        return 1;
    else
        return doSomething(floor(sqrt(n))) + n;
}
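Why this gives O(log log n): substituting n = 2^m turns T(n) = T(sqrt(n)) + O(1) into S(m) = S(m/2) + O(1), which solves to S(m) = O(log m); substituting back, the running time of doSomething is O(log log n).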
Exercise
Exercise 2.5.1 Which grows faster: 3^(4n) or 4^(3n)?
Exercise 2.5.2 Does the notation (θ(1))^n mean the same thing as 2^(θ(n))?
Exercise 2.5.3 Prove that 2^(0.5n) ≤ 2^n/n^100 ≤ 2^n for sufficiently big n.
Exercise 2.5.4 Prove that n² ≤ 3n² log n ≤ n³ for sufficiently big n.
Exercise 2.5.5 Sort the terms in f(n) = 100n^100 + 3^(4n) + log^(1,000) n + 4^(3n) + 2^(0.001n)/n^100.
Exercise 2.5.6 For each of the following functions, sort its terms by growth rate.
1. f(n) = 5n³ − 17n² + 4
2. f(n) = 5n³ log n + 8n³
3. f(n) = 73 log₂ n
4. f(n) = { 1 if n is odd, 2 if n is even }
5. f(n) = 2 · 2^n · n² log₂ n − 7n⁸ + 7 · 3^n/n²
Q: From lowest to highest, what is the correct order of the complexities O(n²), O(3^n), O(2^n), O(n² lg n), O(1), O(n lg n), O(n³), O(n!), O(lg n), O(n)?
A: From lowest to highest, the correct order of these complexities is O(1), O(lg n), O(n), O(n lg n), O(n²), O(n² lg n), O(n³), O(2^n), O(3^n), O(n!).
Q: What are the complexities of T1(n) = 3n lg n + lg n; T2(n) = 2^n + n³ + 25; and T3(n, k) = k + n? From lowest to highest, what is the correct order of the resulting complexities?
A: Using the rules of O-notation, the complexities of T1, T2, and T3 respectively are O(n lg n), O(2^n), and O(n). From lowest to highest, the correct order of these complexities is O(n), O(n lg n), and O(2^n).
Q: Suppose we have written a procedure to add m square matrices of size n × n. If adding two square matrices requires O(n²) running time, what is the complexity of this procedure in terms of m and n?
A: To add m matrices of size n × n, we must perform m − 1 additions, each requiring time O(n²). Therefore, the overall running time of this procedure is:
O(m−1) · O(n²) = O(m) · O(n²) = O(mn²)
Q: Suppose we have two algorithms to solve the same problem. One runs in time T1(n) = 400n, whereas the other runs in time T2(n) = n². What are the complexities of these two algorithms? For what values of n might we consider using the algorithm with the higher complexity?
A: The complexity of T1 is O(n), and the complexity of T2 is O(n²). However, the algorithm described by T1 involves such a large constant coefficient for n that when n < 400, the algorithm described by T2 actually runs faster.
For any queries, contact: pankaj7877@gmail.com
Introduction to ArtificiaI Intelligence in Higher Education
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 

Introduction to Algorithms Complexity Analysis

  • 11. Classification of algorithms, by implementation: Recursion. A recursive algorithm is one that invokes itself repeatedly until a certain condition (the termination condition) is met, an approach common in functional programming. Iterative algorithms instead use repetitive constructs such as loops, sometimes with auxiliary data structures such as stacks, to solve the same problems. A short C sketch of the two styles follows.
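A minimal C sketch (illustrative, not from the slides) contrasting the two styles on the same task, factorial:

    #include <stdio.h>

    /* Recursive form: calls itself until the termination condition n <= 1 */
    unsigned long fact_rec(unsigned int n) {
        if (n <= 1) return 1;          /* termination condition */
        return n * fact_rec(n - 1);    /* self-invocation */
    }

    /* Iterative form: a loop and an accumulator replace the call stack */
    unsigned long fact_iter(unsigned int n) {
        unsigned long result = 1;
        for (unsigned int i = 2; i <= n; i++)
            result *= i;
        return result;
    }

    int main(void) {
        printf("%lu %lu\n", fact_rec(10), fact_iter(10));  /* both print 3628800 */
        return 0;
    }

Both compute the same function; the recursive version keeps its state on the call stack, while the iterative version keeps it in an explicit loop variable.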
  • 12. Classification of algorithms Logical An algorithm may be viewed as controlled logical deduction. This notion may be expressed as: Algorithm = logic + control. The logic component expresses the axioms that may be used in the computation and the control component determines the way in which deduction is applied to the axioms. This is the basis for the logic programming paradigm. In pure logic programming languages the control component is fixed and algorithms are specified by supplying only the logic component. The appeal of this approach is the elegant semantics: a change in the axioms has a well-defined change in the algorithm.
  • 13. Serial, parallel or distributed. Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time; such computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms take advantage of computer architectures where several processors can work on a problem at the same time, whereas distributed algorithms utilize multiple machines connected by a computer network. Parallel and distributed algorithms divide the problem into subproblems and collect the results back together.
  • 14. Deterministic or non-deterministic. Deterministic algorithms solve the problem with an exact decision at every step, whereas non-deterministic algorithms solve problems via guessing, although typical guesses are made more accurate through the use of heuristics. Exact or approximate. While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Approximation can be reached by either a deterministic or a random strategy. Such algorithms have practical value for many hard problems.
  • 15. Algorithm Analysis • Efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation: • A Priori Analysis − a theoretical analysis of the algorithm. • A Posteriori Analysis − an empirical analysis of the algorithm; the selected algorithm is implemented using a programming language.
  • 16. Analysis of Algorithms. Complexity analysis of an algorithm and its need: ❑ Algorithm analysis provides theoretical estimates for the resources (time and memory) needed by an algorithm before actual implementation. ❑ It helps in understanding the behaviour and performance of the algorithm in terms of time (time complexity) and space (space complexity) requirements as the input size of the data grows. ❑ Algorithm analysis helps us predict how feasible and effective the algorithm will be after its actual implementation, which in turn helps us design more efficient algorithms. ❑ We only analyze correct algorithms.
  • 17. • Analysis and study of algorithms is abstracted away from any specific programming language or implementation. • It focuses on the underlying properties of the algorithm, not on the specifics of any particular implementation. • Usually pseudocode is used for analysis, as it is the simplest and most general representation. • Ultimately, algorithms are implemented on particular hardware/software platforms, and their algorithmic efficiency is eventually put to the test using real code. • Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.
  • 18. Exact vs. asymptotic analysis. ❑ In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input; Big-O, Big-Omega and Big-Theta notation are used. ❑ Asymptotic estimates are usually used because different implementations of the same algorithm may differ in efficiency; however, the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant. ❑ Exact (not asymptotic) measures require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g., a Turing machine, and/or by postulating that certain operations are executed in unit time.
  • 19. Run-time analysis • Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or run-time) of an algorithm as its input size (usually denoted as n) increases. Run-time efficiency is a topic of great interest in Computer Science: A program can take seconds, hours or even years to finish executing, depending on which algorithm it implements
  • 20. Shortcomings of empirical metrics • Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), • there are significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms.
  • 21. • Take as an example a program that looks up a specific entry in a sorted list of size n. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a linear search algorithm, and on Computer B, a much slower machine, using a binary search algorithm. • Benchmark testing on the two computers running their respective programs might look something like the following:

    n (list size)   Computer A run-time (ns)   Computer B run-time (ns)
    15              7                          100,000
    65              32                         150,000
    250             125                        200,000
    1,000           500                        250,000

Based on these metrics, it would be easy to jump to the conclusion that Computer A is running an algorithm far superior in efficiency to the one Computer B is running.
  • 22. • However, if the size of the input list is increased to a sufficient number, that conclusion is dramatically demonstrated to be in error:

    n (list size)     Computer A run-time (ns)          Computer B run-time (ns)
    15                7                                 100,000
    65                32                                150,000
    250               125                               200,000
    1,000             500                               250,000
    ...               ...                               ...
    1,000,000         500,000                           500,000
    4,000,000         2,000,000                         550,000
    16,000,000        8,000,000                         600,000
    ...               ...                               ...
    63,072 x 10^12    31,536 x 10^12 ns, or 1 year      1,375,000 ns, or 1.375 ms
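The table's point can also be made by counting steps instead of nanoseconds; a hedged C sketch (worst-case comparison counts, assuming the usual linear and binary search):

    #include <stdio.h>

    /* Worst-case comparisons for linear search over n items */
    long linear_steps(long n) { return n; }

    /* Worst-case probes for binary search: about floor(log2(n)) + 1 */
    long binary_steps(long n) {
        long steps = 0;
        while (n > 0) { steps++; n /= 2; }
        return steps;
    }

    int main(void) {
        long sizes[] = {15, 1000, 1000000};
        for (int i = 0; i < 3; i++)
            printf("n=%ld linear=%ld binary=%ld\n",
                   sizes[i], linear_steps(sizes[i]), binary_steps(sizes[i]));
        return 0;
    }

For n = 1,000,000 this prints linear=1000000 against binary=20, which is exactly the crossover behavior the benchmark exposes.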
  • 23. Exact Model of Computation
  • 24. Cost models. Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the actual execution time, the time required to perform a step must be guaranteed to be bounded above by a constant. Two cost models are generally used: ❑ The uniform cost model (uniform-cost measurement) assigns a constant cost to every machine operation, regardless of the size of the numbers involved. ❑ The logarithmic cost model (logarithmic-cost measurement) assigns a cost to every machine operation proportional to the number of bits involved. The latter is more cumbersome to use, so it is only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography.
  • 25. RAM model of implementation. Before implementation we need a model of the implementation technology. Machine-independent algorithm design depends upon a hypothetical computer called the Random Access Machine, or RAM; the Random Access Model (RAM) is the most popular. Assumptions: ❑ Each "simple" operation (+, *, -, =, if, call) takes exactly one time step. ❑ Instructions are executed serially, each taking a constant amount of time. ❑ Simple data types: integer and float. ❑ The memory hierarchy is not modelled: each memory access takes exactly one time step, and we have as much memory as we need. The RAM model takes no notice of whether an item is in cache or on disk, which simplifies the analysis. Under the RAM model, we measure the run time of an algorithm by counting up the number of steps it takes on a given problem instance. By assuming that our RAM executes a given number of steps per second, the operation count converts easily to an actual run time. The RAM proves an excellent model for understanding how an algorithm will perform on a real computer: it strikes a fine balance by capturing the essential behavior of computers while being simple to work with. We use the RAM model because it is useful in practice.
  • 26. Important tips for analyzing algorithms Questions to be asked while analyzing algorithms • How does one calculate the running time of an algorithm? • How can we compare two different algorithms? • How do we know if an algorithm is `optimal'? 1. Count the number of basic operations performed by the algorithm on the worst-case input A basic operation could be: ▪ An assignment ▪ A comparison between two variables ▪ An arithmetic operation between two variables. The worst-case input is that input assignment for which the most basic operations are performed.
  • 27. Example:

    n := 5;
    loop
        get(m);
        n := n - 1;
    until (m = 0 or n = 0)

Worst-case: 5 iterations. Usually we are not concerned with the number of steps for a single fixed case but wish to estimate the running time in terms of the input size:

    get(n);
    loop
        get(m);
        n := n - 1;
    until (m = 0 or n = 0)

Worst-case: n iterations
  • 28. Important tips for analyzing algorithms. 2) Counting the number of basic operations. a) Sequence: if P and Q are two algorithm sections, Time(P ; Q) = Time(P) + Time(Q). b) Iteration:

    while <condition> loop
        P;
    end loop;

or

    for i in 1..n loop
        P;
    end loop;

Time = Time(P) * (worst-case number of iterations)
  • 29. c) Conditional:

    if <condition> then
        P;
    else
        Q;
    end if;

Time = Time(P) if <condition> is true, Time(Q) if <condition> is false. Example:

    for i in 1..n loop
        for j in 1..n loop
            if i < j then
                swap(a(i,j), a(j,i));  -- basic operation
            end if;
        end loop;
    end loop;

Time <= n * n * 1 = n^2
  • 30. Recursive structures. Consider a recursive function for obtaining the Fibonacci series:

    long fib(int n)
    {
        if (n <= 1)                       // line 1
            return 1;                     // line 2
        else
            return fib(n-1) + fib(n-2);   // line 3
    }
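This function makes two recursive calls per invocation, so its running time satisfies T(n) = T(n-1) + T(n-2) + O(1), which grows exponentially. A memoized variant (a sketch, assuming n <= 90 so the results fit in a long long) brings it down to O(n):

    #include <string.h>

    /* Memoized Fibonacci: each subproblem is computed once, so the
       running time drops from exponential to O(n). Assumes 0 <= n <= 90. */
    long long fib_memo(int n) {
        static long long memo[91];
        static int init = 0;
        if (!init) { memset(memo, -1, sizeof memo); init = 1; }
        if (n <= 1) return 1;                 /* same base case as above */
        if (memo[n] != -1) return memo[n];
        return memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
    }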
  • 31. Expressing an algorithm's cost as a function of input size N:

    Statement                  Cost   Frequency   Total cost
    1) Algorithm SUM(A,N)      0      0           0
    2) {                       0      0           0
    3)   S=0                   1      1           1
    4)   For I=1 to N          1      N+1         N+1
    5)     S=S+A[I];           1      N           N
    6)   Return S;             1      1           1
    7) }                       0      0           0
                                      Total       2N+3

(The for statement is tested N+1 times, once more than its body executes.)
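A C rendering of the same algorithm, with the counts above as comments (a sketch; the slide's pseudocode is 1-based, C is 0-based):

    /* Sum of N array elements: about 2N+3 basic operations in total */
    int sum(const int A[], int N) {
        int S = 0;                   /* executes once            */
        for (int i = 0; i < N; i++)  /* loop test runs N+1 times */
            S = S + A[i];            /* body runs N times        */
        return S;                    /* executes once            */
    }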
  • 32. Input size and basic operation examples:

    Problem                                          Input size measure           Basic operation
    Search for key in a list of n items              Number of items in list, n   Key comparison
    Multiply two matrices of floating-point numbers  Dimensions of matrices       Floating-point multiplication
    Compute a^n                                      n                            Floating-point multiplication
    Graph problem                                    #vertices and/or edges       Visiting a vertex or traversing an edge
  • 35. • Complexity of an algorithm is analyzed in two perspectives: Time and Space. Time Complexity • It’s a function describing the amount of time required to run an algorithm in terms of the size of the input. Space Complexity • amount of memory an algorithm takes in terms of the size of input to the algorithm. • Space complexity is sometimes ignored Time and Space Complexity
  • 36. Space complexity. For any algorithm, memory may be used for the following: 1. Variables (including constant values and temporary values) 2. Program instructions 3. Execution. Space complexity is the amount of memory used by the algorithm (including the input values to the algorithm) to execute and produce the result. Sometimes auxiliary space is confused with space complexity, but auxiliary space is the extra or temporary space used by the algorithm during its execution. Space complexity = auxiliary space + input space.
  • 37. Memory usage during execution. While executing, an algorithm uses memory space for three reasons: 1. Instruction space: the amount of memory used to store the compiled version of the instructions. 2. Environmental stack: sometimes an algorithm (function) may be called inside another algorithm (function). In such a situation, the current variables are pushed onto the system stack, where they wait for further execution, and then the call to the inner algorithm (function) is made. For example, if a function A() calls function B() inside it, then all the variables of function A() are stored on the system stack temporarily while function B() is called and executed inside function A(). 3. Data space: the amount of space used by the variables and constants. While calculating the space complexity of an algorithm, we usually consider only the data space and neglect the instruction space and the environmental stack.
  • 38. Calculating the space complexity. For calculating the space complexity, we need to know the memory used by different types of variables, which generally varies across operating systems, but the method for calculating the space complexity remains the same.

    Type                                                      Size
    bool, char, unsigned char, signed char, __int8            1 byte
    __int16, short, unsigned short, wchar_t, __wchar_t        2 bytes
    float, __int32, int, unsigned int, long, unsigned long    4 bytes
    double, __int64, long double, long long                   8 bytes
  • 39. Calculating the space complexity. Example:

    /* a, b and c are assumed to be int parameters */
    int add3(int a, int b, int c)
    {
        int z = a + b + c;
        return z;
    }

In the above function, variables a, b, c and z are all integer types; assuming 2-byte integers (as these slides do), they take up 2 bytes each, so the total memory requirement is (8 + 2) = 10 bytes, where the additional 2 bytes are for the return value. Because this space requirement is fixed for the above example, it is called constant space complexity. Another example:

    // n is the length of array a[]
    int sum(int a[], int n)
    {
        int x = 0;                   // 2 bytes for x
        for (int i = 0; i < n; i++)  // 2 bytes for i
        {
            x = x + a[i];
        }
        return x;
    }
  • 40. In the above code, 2n bytes of space are required for the array a[] elements, plus 2 bytes each for x, n, i and the return value. Hence the total memory requirement is (2n + 8), which increases linearly with the input value n; this is called linear space complexity. Similarly, we can have quadratic and other more complex space complexities as the complexity of an algorithm increases, but we should always aim to write algorithm code in such a way that we keep the space complexity to a minimum.
  • 41. Complexity Analysis of Algorithms The Time Complexity of an Algorithm Specifies how the running time depends on the size of the input •To estimate how long a program will run. •To estimate the largest input that can reasonably be given to the program. •To compare the efficiency of different algorithms. •To help focus on the parts of code that are executed the largest number of times. •To choose an algorithm for an application.
  • 42. Types of Notations for Time Complexity • Big Oh denotes "fewer than or the same as" <expression> iterations. • Big Omega denotes "more than or the same as" <expression> iterations. • Big Theta denotes "the same as" <expression> iterations. • Little Oh denotes "fewer than" <expression> iterations. • Little Omega denotes "more than" <expression> iterations.
  • 43. Understanding notations. • O(expression) is the set of functions that grow slower than or at the same rate as the expression. It indicates the maximum time required by an algorithm over all input values; it represents the worst case of an algorithm's time complexity. • Omega(expression) is the set of functions that grow faster than or at the same rate as the expression. It indicates the minimum time required by an algorithm over all input values; it represents the best case of an algorithm's time complexity. • Theta(expression) consists of all the functions that lie in both O(expression) and Omega(expression). It indicates a tight bound on an algorithm and is commonly associated with the average case of an algorithm's time complexity.
  • 44. Time complexity is a function: it specifies how the running time depends on the size of the input, mapping the size of the input n to the time T(n) the algorithm takes.
  • 45. Theoretical analysis of time efficiency. Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size. Basic operation: the operation that contributes most towards the running time of the algorithm.

    T(n) ≈ c_op * C(n)

where T(n) is the running time, c_op is the execution time of the basic operation, C(n) is the number of times the basic operation is executed, and n is the input size.
  • 46. Asymptotic analysis of algorithms: ❑ defines mathematical bounds on run-time performance, ❑ determines the best-case, average-case, and worst-case scenarios, ❑ helps understand the growth rate of the functions (algorithms), ❑ allows comparing the efficiency of different algorithms, ❑ helps choose an algorithm for an application, ❑ machine-dependent factors are ignored.
  • 47. ▪ mathematical tools to represent time complexity of algorithms for asymptotic analysis. ▪ doesn’t require algorithms to be implemented ▪ Determines the rate of growth of the function as input tends to infinity Asymptotic notations
  • 48. Asymptotic analysis. • When it comes to analysing the complexity of any algorithm in terms of time and space, we can never provide an exact number to define the time and space required by the algorithm; instead, we express it using some standard notations, also known as asymptotic notations. For example, if some algorithm has a time complexity of T(n) = n^2 + 3n + 4, a quadratic equation, then for large values of n the 3n + 4 part becomes insignificant compared to the n^2 part: for n = 1000, n^2 is 1,000,000 while 3n + 4 is only 3004. Also, when we compare the execution times of two algorithms, the constant coefficients of the higher-order terms are neglected.
  • 49. Asymptotic computational complexity ❑ is commonly associated with the use of big-O notation. ❑ With respect to computational resources, asymptotic time complexity and asymptotic space complexity are commonly estimated. ❑ "Computational complexity" usually refers to the upper bound on the asymptotic computational complexity of an algorithm or a problem, which is usually written in terms of big-O notation. ❑ Other types of (asymptotic) computational complexity estimates are lower bounds ("Big Omega" notation; e.g., Ω(n)) and asymptotically tight estimates, when the asymptotic upper and lower bounds coincide (written using "Big Theta"; e.g., Θ(n log n)).
  • 50. Asymptotic time complexity. Time complexities are classified by the nature of the function T(n). For instance, an algorithm with T(n) = O(n) is called a linear-time algorithm, and an algorithm with T(n) = O(M^n) for some constant M > 1 is said to be an exponential-time algorithm. Common time complexities: ❖ An algorithm is said to be constant time (also written as O(1) time) if the value of T(n) is bounded by a value that does not depend on the size of the input. ❖ An algorithm is said to take logarithmic time if T(n) = O(log n). By the change-of-base rule, log_a n and log_b n differ only by a constant multiplier, which in big-O notation is discarded; thus O(log n) is the standard notation for logarithmic-time algorithms regardless of the base of the logarithm.
  • 51. • Logarithmic time: O(log n). An algorithm is said to run in logarithmic time if its execution time is proportional to the logarithm of the input size. • Example: binary search. • Algorithms taking logarithmic time are commonly found in operations on binary trees or when using binary search. • An O(log n) algorithm is considered highly efficient, as the work required per element decreases as the input grows (a reference implementation follows).
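A standard iterative binary search in C, for reference (a sketch, assuming a sorted array):

    /* Each probe halves the remaining range, so at most about
       log2(n) + 1 probes are needed: O(log n) time. */
    int binary_search(const int a[], int n, int key) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  /* avoids overflow of (lo + hi) / 2 */
            if (a[mid] == key) return mid;
            if (a[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;  /* key not present */
    }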
  • 53. ▪ An algorithm is said to run in polylogarithmic time if T(n) = O((log n)^k) for some constant k. For example, matrix chain ordering can be solved in polylogarithmic time on a parallel random-access machine. ▪ Sub-linear time if T(n) = o(n); this includes algorithms with the time complexities defined above, as well as others such as O(n^(1/2)). Sub-linear time algorithms are typically randomized, and provide only approximate solutions. ▪ Linear time if the time complexity is O(n): for large enough input sizes the running time increases linearly with the size of the input. ▪ Quasilinear time if T(n) = O(n log^k n) for some constant k; linearithmic time is the special case k = 1. Using soft-O notation these algorithms are Õ(n). Quasilinear-time algorithms are also o(n^(1+ε)) for every ε > 0, and thus run faster than any polynomial in n with exponent strictly greater than 1.
  • 54. ▪ Subquadratic time if T(n) = o(n^2). For example, simple comparison-based sorting algorithms are quadratic (e.g. insertion sort), but more advanced algorithms can be found that are subquadratic (e.g. Shell sort). No general-purpose sorts run in linear time, but the change from quadratic to sub-quadratic is of great practical importance. ▪ Polynomial time if the running time is upper-bounded by a polynomial expression in the size of the input, i.e., T(n) = O(n^k) for some constant k. Problems for which a deterministic polynomial-time algorithm exists belong to the complexity class P. ▪ Superpolynomial time if T(n) is not bounded above by any polynomial: it is ω(n^c) time for all constants c, where n is the input parameter, typically the number of bits in the input. For example, an algorithm that runs for 2^n steps on an input of size n requires superpolynomial time. ▪ Quasi-polynomial time: algorithms that run more slowly than polynomial time, yet not so slowly as to be exponential time; the worst-case running time of a quasi-polynomial time algorithm is 2^(O((log n)^c)) for some fixed c. The best-known classical algorithm for integer factorization, the general number field sieve, is an example of an algorithm that runs in super-polynomial but sub-exponential time.
  • 57. O(1): the time complexity of a function (or set of statements) is O(1) if it doesn't contain a loop, recursion, or a call to any other non-constant-time function.

    // set of non-recursive and non-loop statements

For example, the swap() function has O(1) time complexity. A loop or recursion that runs a constant number of times is also considered O(1):

    // Here c is a constant
    for (int i = 1; i <= c; i++) {
        // some O(1) expressions
    }

O(n): the time complexity of a loop is O(n) if the loop variable is incremented/decremented by a constant amount. For example, the following loops have O(n) time complexity:

    // Here c is a positive integer constant
    for (int i = 1; i <= n; i += c) {
        // some O(1) expressions
    }
    for (int i = n; i > 0; i -= c) {
        // some O(1) expressions
    }
  • 58. O(n^c): the time complexity of nested loops is equal to the number of times the innermost statement is executed. For example, the following sample loops have O(n^2) time complexity:

    for (int i = 1; i <= n; i += c) {
        for (int j = 1; j <= n; j += c) {
            // some O(1) expressions
        }
    }

    for (int i = n; i > 0; i -= c) {
        for (int j = i + 1; j <= n; j += c) {
            // some O(1) expressions
        }
    }
  • 59. O(log n): the time complexity of a loop is O(log n) if the loop variable is divided/multiplied by a constant amount:

    for (int i = 1; i <= n; i *= c) {
        // some O(1) expressions
    }
    for (int i = n; i > 0; i /= c) {
        // some O(1) expressions
    }

For example, binary search has O(log n) time complexity. Mathematically: the series of values i takes in the first loop is 1, c, c^2, c^3, ..., c^k. If we put k = log_c(n), we get c^(log_c n) = n, so the loop runs O(log n) times.
  • 60. O(log log n): the time complexity of a loop is O(log log n) if the loop variable is reduced/increased exponentially by a constant amount:

    // Here c is a constant greater than 1
    for (int i = 2; i <= n; i = pow(i, c)) {
        // some O(1) expressions
    }
    // Here fun is sqrt or cuberoot or any other constant root
    for (int i = n; i > 0; i = fun(i)) {
        // some O(1) expressions
    }

In the first loop, i takes the values 2, 2^c, (2^c)^c = 2^(c^2), 2^(c^3), ..., so after k iterations i = 2^(c^k). The loop ends when 2^(c^k) >= n, i.e., when c^k >= log(n), which happens after k = log_c(log(n)) iterations. Each iteration takes a constant amount of time, so the total time complexity is O(log(log n)).
  • 61. • How to combine the time complexities of consecutive loops? When there are consecutive loops, we calculate the time complexity as the sum of the time complexities of the individual loops:

    for (int i = 1; i <= m; i += c) {
        // some O(1) expressions
    }
    for (int i = 1; i <= n; i += c) {
        // some O(1) expressions
    }

The time complexity of the above code is O(m) + O(n), which is O(m + n). If m == n, the time complexity becomes O(2n), which is O(n).
  • 62. Asymptotic notations: Big-O notation. We use big-O notation for asymptotic upper bounds, since it bounds the growth of the running time from above for large enough input sizes. Because big-O notation is asymptotic, it gives an approximate estimate; asymptotic notations are used when no exact estimates can be computed. The letter O is used because the growth rate of a function is also referred to as the order of the function. Associated with big-O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates.
  • 63. Best-case, average-case, worst-case. Worst-case running time of an algorithm: the longest running time for any input of size n; an upper bound on the running time for any input, i.e., a guarantee that the algorithm will never take longer. Example: sorting a set of numbers into increasing order when the data is in decreasing order. The worst case can occur fairly often, e.g. when searching a database for a particular piece of information. Best-case running time: e.g. sorting a set of numbers into increasing order when the data is already in increasing order. Average-case running time: may be difficult to determine; one must make some assumptions about the possible inputs of size n.
  • 64. Average-case. ❑ Average-case analysis determines an average measure of the resources the algorithm uses on a random input. ❑ It studies the complexity of algorithms over inputs drawn randomly from a particular probability distribution. ❑ It differs from the worst case, which considers the maximal complexity of the algorithm over all possible inputs. Why average case? ❑ Some problems may be intractable in the worst case, yet the inputs which elicit this behaviour may rarely occur in practice; here average-case analysis may be a more accurate measure of an algorithm's performance. ❑ It is most useful in cryptography and randomized algorithms. ❑ Comparisons among algorithms based on average-case analysis can be more practical in some cases.
  • 65. Big O h Notation • Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. • It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation. • big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows • The letter O is used because the growth rate of a function is also referred to as the order of the function. • A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
  • 66. Big-O: bounds the function from above. The idea is to establish a relative order among functions for large n. • If f(N) = O(g(N)), then there exist c, n0 > 0 such that f(N) <= c*g(N) when N >= n0. • f(N) grows no faster than g(N) for "large" N. • g(N) is an upper bound on f(N). • Ex: if F(n) = 3n^2 + 2n + 1, then F(n) <= 4n^2 for all n >= 3 (c = 4, n0 = 3). • Therefore F(n) = O(n^2).
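The definition can be checked numerically; a quick sketch verifying the bound above over a range of N:

    #include <stdio.h>

    /* Verify f(N) = 3N^2 + 2N + 1 <= 4N^2 for N >= 3 (c = 4, n0 = 3) */
    int main(void) {
        for (long long N = 3; N <= 100000; N++) {
            long long f = 3*N*N + 2*N + 1;
            if (f > 4*N*N) { printf("bound fails at N=%lld\n", N); return 1; }
        }
        printf("bound holds for 3 <= N <= 100000\n");
        return 0;
    }

Such a check is no proof, but it is a cheap sanity test when choosing c and n0.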
  • 67. Big-O notation. A logarithmic algorithm, O(log n): runtime grows logarithmically in proportion to n. A linear algorithm, O(n): runtime grows directly in proportion to n. A superlinear algorithm, O(n log n): runtime grows in proportion to n times a logarithmic factor, slightly faster than linear. A polynomial algorithm, O(n^c): runtime grows more quickly than all of the previous classes. An exponential algorithm, O(c^n): runtime grows even faster than any polynomial in n. A factorial algorithm, O(n!): runtime grows the fastest and quickly becomes unusable for even small values of n.
  • 71. Little-o asymptotic notation. Big-O is used as a tight upper bound on the growth of an algorithm's effort (this effort is described by the function f(n)), even though, as written, it can also be a loose upper bound. "Little-o" (o()) notation is used to describe an upper bound that cannot be tight. In mathematical terms, f(n) = o(g(n)) means lim (n→∞) f(n)/g(n) = 0.
  • 72. • Is 7n + 8 ∈ o(n^2)? For that to be true, for any c we must be able to find an n0 that makes f(n) < c * g(n) asymptotically true. Let us take an example: if c = 100, the inequality clearly holds. • Then check the limit: lim (n→∞) (7n + 8)/n^2 = lim (n→∞) 7/(2n) = 0 (by L'Hôpital's rule). • Hence 7n + 8 ∈ o(n^2).
  • 73. Omega notation, Ω. • Expresses the lower bound on an algorithm's running time. • A measure of best-case time complexity. For a function f(n): f(n) = Ω(g(n)) if there exist c > 0 and n0 such that c*g(n) <= f(n) for all n > n0. Ex: if F(n) = 3n^2 + 2n + 1, then F(n) >= 3n^2 for all n >= 1, so F(n) = Ω(n^2) with c = 3, n0 = 1.
  • 75. Theta notation (Θ). ▪ Expresses both a lower bound and an upper bound on an algorithm's running time. ▪ Θ(g(n)) = {f(n) : there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}. f(n) always lies between c1*g(n) and c2*g(n) for large values of n (n >= n0).
  • 79. In typical usage, the formal definition of O notation is not used directly; rather, the O notation for a function f is derived by the following simplification rules: ❖If f(x) is a sum of several terms, the one with the largest growth rate is kept, and all others omitted. ❖If f(x) is a product of several factors, any constants (terms in the product that do not depend on x) are omitted. Big O is the most commonly used asymptotic notation for comparing functions, although in many cases Big O may be replaced with Big Theta Θ for asymptotically tighter bounds.
  • 83. Insertion Sort Example

    InsertionSort(A, n) {
        for i = 2 to n {
            key = A[i]
            j = i - 1;
            while (j > 0) and (A[j] > key) {
                A[j+1] = A[j]
                j = j - 1
            }
            A[j+1] = key
        }
    }

How many times will the while loop execute?
  • 84. Insertion Sort

    Statement                               Effort
    InsertionSort(A, n) {
      for i = 2 to n {                      c1*n
        key = A[i]                          c2*(n-1)
        j = i - 1;                          c3*(n-1)
        while (j > 0) and (A[j] > key) {    c4*T
          A[j+1] = A[j]                     c5*(T-(n-1))
          j = j - 1                         c6*(T-(n-1))
        }                                   0
        A[j+1] = key                        c7*(n-1)
      }                                     0
    }

where T = t2 + t3 + ... + tn, and ti is the number of while-expression evaluations for the i-th for-loop iteration.
  • 85. Analyzing insertion sort. What can T be? Best case: the inner loop body is never executed (ti = 1), so T(n) is a linear function. Worst case: the inner loop body is executed for all previous elements (ti = i), so T(n) is a quadratic function. Average case: on random input the inner loop shifts about half of the previous elements (ti ≈ i/2), so T(n) is still quadratic.
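A runnable C version of the pseudocode from slide 83 (0-based indexing; the sample array is the one used in a later exercise):

    #include <stdio.h>

    void insertion_sort(int A[], int n) {
        for (int i = 1; i < n; i++) {
            int key = A[i];
            int j = i - 1;
            while (j >= 0 && A[j] > key) {  /* shift larger elements right */
                A[j + 1] = A[j];
                j = j - 1;
            }
            A[j + 1] = key;
        }
    }

    int main(void) {
        int A[] = {6, 4, 8, 1, 3};
        insertion_sort(A, 5);
        for (int i = 0; i < 5; i++) printf("%d ", A[i]);  /* prints 1 3 4 6 8 */
        return 0;
    }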
  • 93. Examples

    int count = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < i; j++)
            count++;
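Worked answer: the inner loop body runs i times for each value of i, so count ends at 0 + 1 + 2 + ... + (N - 1) = N(N - 1)/2, and the running time is Θ(N^2).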
  • 99. Consider the following three claims:
    1. (n + k)^m = Θ(n^m), where k and m are constants
    2. 2^(n+1) = O(2^n)
    3. 2^(2n+1) = O(2^n)
Which of these claims are correct? (A) 1 and 2 (B) 1 and 3 (C) 2 and 3 (D) 1, 2, and 3
  • 100. • Answer: (A). Explanation: (n + k)^m and Θ(n^m) are asymptotically the same, as Theta notation can always be written by taking the leading-order term of a polynomial expression. • 2^(n+1) and O(2^n) are also asymptotically the same, as 2^(n+1) can be written as 2 * 2^n, and a constant multiplier does not matter in asymptotic notation. • 2^(2n+1) and O(2^n) are not the same, because here the constant sits in the exponent: 2^(2n+1) = 2 * 4^n.
  • 101. Interview Questions The auxiliary space of insertion sort is O(1), what does O(1) mean ? (A) The memory (space) required to process the data is not constant. (B) It means the amount of extra memory Insertion Sort consumes doesn’t depend on the input. The algorithm should use the same amount of memory for all inputs. (C) It takes only 1 kb of memory . (D) It is the speed at which the elements are traversed.
  • 102. • Answer: (B). Explanation: the term O(1) states that the space required by insertion sort is constant, i.e., the space required does not depend on the input.
  • 103.

    void fun(int n, int arr[])
    {
        int i = 0, j = 0;
        for (; i < n; ++i)
            while (j < n && arr[i] < arr[j])
                j++;
    }

What is the time complexity of the above function? (A) O(n) (B) O(n^2) (C) O(n log n) (D) O(n (log n)^2)
  • 104. Answer: (A). At first glance the time complexity seems to be O(n^2) due to the two loops, but note that the variable j is not re-initialized for each value of the variable i, so the inner loop runs at most n times in total. Observe the difference between the function given in the question and the function below:

    void fun(int n, int arr[])
    {
        int i = 0, j = 0;
        for (; i < n; ++i)
        {
            j = 0;
            while (j < n && arr[i] < arr[j])
                j++;
        }
    }
  • 105. What are the time complexities of the following functions?

    int fun1(int n)
    {
        if (n <= 1) return n;
        return 2 * fun1(n - 1);
    }

    int fun2(int n)
    {
        if (n <= 1) return n;
        return fun2(n - 1) + fun2(n - 1);
    }

(A) O(2^n) for both fun1() and fun2() (B) O(n) for fun1() and O(2^n) for fun2() (C) O(2^n) for fun1() and O(n) for fun2() (D) O(n) for both fun1() and fun2()
  • 106. • Answer: (B). Explanation: the time complexity of fun1() can be written as T(n) = T(n-1) + C, which is O(n). • The time complexity of fun2() can be written as T(n) = 2T(n-1) + C, which is O(2^n).
  • 107.

    int fun(int n)
    {
        int count = 0;
        for (int i = n; i > 0; i /= 2)
            for (int j = 0; j < i; j++)
                count += 1;
        return count;
    }

What is the time complexity of fun()? (A) O(n^2) (B) O(n log n) (C) O(n) (D) O(n log n log n)
  • 108. • Answer: (C). Explanation: for an input integer n, the innermost statement of fun() is executed n + n/2 + n/4 + ... + 1 times. • So the time complexity T(n) can be written as T(n) = O(n + n/2 + n/4 + ... + 1) = O(n). • The final value of count is also n + n/2 + n/4 + ... + 1.
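The geometric-series bound can be confirmed by running the function and comparing with 2n (a small test harness, not from the slides):

    #include <stdio.h>

    int fun(int n) {
        int count = 0;
        for (int i = n; i > 0; i /= 2)
            for (int j = 0; j < i; j++)
                count += 1;
        return count;
    }

    int main(void) {
        /* count stays below 2n, since n + n/2 + n/4 + ... < 2n */
        for (int n = 1; n <= 1024; n *= 4)
            printf("n=%d count=%d 2n=%d\n", n, fun(n), 2 * n);
        return 0;
    }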
  • 113. Which of the following is not O(n^2)? (A) (15^10) * n + 12099 (B) n^1.98 (C) n^3 / (sqrt(n)) (D) (2^20) * n Answer: (C) Explanation: The order of growth of option c is n^2.5 which is higher than n^2.
  • 114. Solved problems. Now consider a QuickSort implementation where we first find the median using a worst-case linear-time selection algorithm, then use that median as the pivot. What will be the worst-case time complexity of this modified QuickSort? (A) O(n^2 log n) (B) O(n^2) (C) O(n log n log n) (D) O(n log n). Answer: (D). Explanation: if we use the median as the pivot element, the recurrence for all cases becomes T(n) = 2T(n/2) + O(n), which solves to O(n log n).
  • 115. What is the worst-case time complexity of insertion sort when the position of the data to be inserted is calculated using binary search? (A) N (B) N log N (C) N^2 (D) N (log N)^2. Answer: (C). Explanation: applying binary search to calculate the position of the data to be inserted doesn't reduce the time complexity of insertion sort. This is because inserting data at the appropriate position involves two steps: 1. Calculate the position. 2. Shift the data from the position calculated in step 1 one step to the right, to create a gap where the data will be inserted. Using binary search reduces the time complexity of step 1 from O(N) to O(log N), but the time complexity of step 2 remains O(N), so the overall complexity remains O(N^2). A sketch follows.
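A sketch of the binary-insertion variant described above, making the two steps explicit (hypothetical code, not from the slides):

    /* Step 1 (finding the position) is O(log N) via binary search, but
       step 2 (shifting) is still O(N), so the worst case stays O(N^2). */
    void binary_insertion_sort(int A[], int n) {
        for (int i = 1; i < n; i++) {
            int key = A[i];
            int lo = 0, hi = i;              /* search within A[0..i-1] */
            while (lo < hi) {                /* step 1: O(log N) */
                int mid = lo + (hi - lo) / 2;
                if (A[mid] <= key) lo = mid + 1;
                else hi = mid;
            }
            for (int j = i; j > lo; j--)     /* step 2: O(N) shift */
                A[j] = A[j - 1];
            A[lo] = key;
        }
    }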
  • 116.

    void fun(int n, int arr[])
    {
        int i = 0, j = 0;
        for (; i < n; ++i)
            while (j < n && arr[i] < arr[j])
                j++;
    }

What is the time complexity of the above function? (A) O(n) (B) O(n^2) (C) O(n log n) (D) O(n (log n)^2). Answer: (A). Explanation: at first glance the time complexity seems to be O(n^2) due to the two loops, but the variable j is not re-initialized for each value of i, so the inner loop runs at most n times in total.
  • 117. Consider the array A[] = {6, 4, 8, 1, 3} and apply insertion sort to sort the array. If the cost associated with each comparison is 25 rupees, what is the total cost of the insertion sort by the time element 1 reaches the first position of the array? (A) 50 (B) 25 (C) 75 (D) 100. Answer: (A). Explanation: when element 1 reaches the first position of the array, only two comparisons are required, hence 25 * 2 = 50 rupees. Step 1: 4 6 8 1 3. Step 2: 1 4 6 8 3.
  • 118. Consider the following two functions. What are the time complexities of the functions?

    int fun1(int n)
    {
        if (n <= 1) return n;
        return 2 * fun1(n - 1);
    }

    int fun2(int n)
    {
        if (n <= 1) return n;
        return fun2(n - 1) + fun2(n - 1);
    }

(A) O(2^n) for both fun1() and fun2() (B) O(n) for fun1() and O(2^n) for fun2() (C) O(2^n) for fun1() and O(n) for fun2() (D) O(n) for both fun1() and fun2(). Answer: (B). Explanation: the time complexity of fun1() can be written as T(n) = T(n-1) + C, which is O(n); the time complexity of fun2() can be written as T(n) = 2T(n-1) + C, which is O(2^n).
  • 119. Tips for solving questions on asymptotics. #Tip1: remember the general order of the following functions: O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(n^k) < O(2^n). #Tip2: if f(x) = Θ(g(x)), we can say that f(x) is O(g(x)) and f(x) is Ω(g(x)), and also that g(x) is O(f(x)) and Ω(f(x)). #Tip3: the running time of a for/while loop is (number of iterations) * (running time of the statements inside the loop).

    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum = sum + i;
    }

The running time is O(n). #Tip4: for nested loops, the running time of the statement inside the group of nested loops is the product of the sizes of the loops.

    for (i = 0; i < n; i++)
        for (j = 0; j < k; j++)
            a++;

The running time is O(n * k).
  • 120. #Tip5: sum of statements: add the running times of all the statements.

    for (i = 0; i < n; i++)
        k++;                      // O(n)
    for (j = 0; j < n; j++)
        for (t = 0; t < n; t++)
            ...                   // O(n^2)

The running time is O(n) + O(n^2) = O(n^2). #Tip6: recursive functions: derive the recurrence relation and then solve it; this is usually a better route to the running time than reasoning about the structure directly. Suppose T(n) is the running time of function ABC and the variable reduces to half in the next call; then T(n) = T(n/2) + (complexity of the rest of the code). #Tip7: an algorithm is O(log n) if it takes constant time to cut the problem in half (as in binary search and heaps). Also, T(n) = T(sqrt(n)) + O(1) gives O(log log n). #Tip8:

    int doSomething(int n) {
        if (n <= 2)
            return 1;
        else
            return doSomething(floor(sqrt(n))) + n;
    }
  • 121. Exercise
Exercise 2.5.1 Which grows faster, 3^(4n) or 4^(3n)?
Exercise 2.5.2 Does the notation (θ(1))^n mean the same thing as 2^θ(n)?
Exercise 2.5.3 Prove that 2^(0.5n) <= 2^n/n^100 <= 2^n for sufficiently big n.
Exercise 2.5.4 Prove that n^2 <= 3n^2 log n <= n^3 for sufficiently big n.
Exercise 2.5.5 Sort the terms in f(n) = 100n^100 + 3^(4n) + log^(1,000) n + 4^(3n) + 2^(0.001n)/n^100.
Exercise 2.5.6 For each of the following functions, sort its terms by growth rate.
1. f(n) = 5n^3 - 17n^2 + 4
2. f(n) = 5n^3 log n + 8n^3
3. f(n) = 73 log^2 n
4. f(n) = { 1 if n is odd, 2 if n is even }
5. f(n) = 2 * 2^n * n^2 log^2 n - 7n^8 + 7.3^n/n^2
  • 127. Solved Exercise

Q: From lowest to highest, what is the correct order of the complexities O(n^2), O(3^n), O(2^n), O(n^2 lg n), O(1), O(n lg n), O(n^3), O(n!), O(lg n), O(n)?
A: From lowest to highest, the correct order of these complexities is O(1), O(lg n), O(n), O(n lg n), O(n^2), O(n^2 lg n), O(n^3), O(2^n), O(3^n), O(n!).

Q: What are the complexities of T1(n) = 3n lg n + lg n; T2(n) = 2^n + n^3 + 25; and T3(n, k) = k + n? From lowest to highest, what is the correct order of the resulting complexities?
A: Using the rules of O-notation, the complexities of T1, T2, and T3 respectively are O(n lg n), O(2^n), and O(n). From lowest to highest, the correct order of these complexities is O(n), O(n lg n), and O(2^n).

Q: Suppose we have written a procedure to add m square matrices of size n x n. If adding two square matrices requires O(n^2) running time, what is the complexity of this procedure in terms of m and n?
A: To add m matrices of size n x n, we must perform m - 1 additions, each requiring time O(n^2). Therefore, the overall running time of this procedure is (m - 1) * O(n^2) = O(m) * O(n^2) = O(mn^2).

Q: Suppose we have two algorithms to solve the same problem. One runs in time T1(n) = 400n, whereas the other runs in time T2(n) = n^2. What are the complexities of these two algorithms? For what values of n might we consider using the algorithm with the higher complexity?
A: The complexity of T1 is O(n), and the complexity of T2 is O(n^2). However, the algorithm described by T1 involves such a large constant coefficient for n that when n < 400, the algorithm described by T2 is actually faster, since n^2 < 400n for n < 400.