Advanced Algorithms
Module-1
Data Structures
• What is the "Data Structure" ?
– Ways to represent data
• Why data structure ?
– To design and implement large-scale computer
system
– Have proven correct algorithms
– The art of programming
• How to master in data structure ?
– practice, discuss, and think
Algorithms
• Algorithms are the building blocks of computer programs. They
are as important to programming as recipes are to cooking.
• An algorithm is a well-defined procedure that takes input and
produces output; the main difference from a program is that an
algorithm is mathematical or textual in nature.
• A programming algorithm is a computer procedure, much like a
recipe, that tells your computer precisely what steps to take to
solve a problem or reach a goal.
• The ingredients are called inputs, while the results are called the
outputs.
Applications/Use of algorithms:
• In mathematics and computer science, an algorithm is a step-by-
step procedure for calculations.
• Algorithms are used for calculation, data processing, and
automated reasoning.
Algorithm Specification
• Definition
– An algorithm is a finite set of instructions that, if
followed, accomplishes a particular task. In
addition, all algorithms must satisfy the following
criteria:
(1)Input. There are zero or more quantities that are
externally supplied.
(2)Output. At least one quantity is produced.
(3)Definiteness. Each instruction is clear and
unambiguous.
Algorithm Specification
(4)Finiteness. If we trace out the instructions of an
algorithm, then for all cases, the algorithm
terminates after a finite number of steps.
(5)Effectiveness. Every instruction must be basic
enough to be carried out, in principle, by a person
using only pencil and paper. It is not enough that
each operation be definite as in (3); it also must
be feasible.
Describing Algorithms
• Natural language
– English
• Instructions must be definite and effective
• Graphic representation
– Flowchart
• works well only if the algorithm is small and simple
• Pseudo-language
– Readable
– Instructions must be definite and effective
• Combining English and C++
– In this text
Translating a Problem into an
Algorithm
• Problem
– Devise a program that sorts a set of n ≥ 1 integers
• Step I - Concept
– From those integers that are currently unsorted, find the
smallest and place it next in the sorted list
• Step II - Algorithm
– for (i= 0; i< n; i++){
Examine list[i] to list[n-1] and suppose that the smallest
integer is list[min];
Interchange list[i] and list[min];
}
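As a concrete illustration of Step II, here is a minimal C++ sketch of this selection-based sort; the function name selectionSort and the use of std::vector are assumptions for the example, not part of the slides.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Selection sort: from the currently unsorted elements, find the smallest
// and place it next in the sorted prefix (Steps I and II above).
void selectionSort(std::vector<int>& list) {
    const std::size_t n = list.size();
    for (std::size_t i = 0; i < n; i++) {
        // Examine list[i] to list[n-1] and find the smallest, list[min].
        std::size_t min = i;
        for (std::size_t j = i + 1; j < n; j++) {
            if (list[j] < list[min]) min = j;
        }
        // Interchange list[i] and list[min].
        std::swap(list[i], list[min]);
    }
}
```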
Recursive Algorithms
• Direct recursion
– Functions call themselves
• Indirect recursion
– Functions call other functions that invoke the calling
function again
• When is recursion an appropriate mechanism?
– The problem itself is defined recursively
– Statements: if-else and while can be written
recursively
– Art of programming
• Why recursive algorithms ?
– Powerful; they can express a complex process very clearly
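A small C++ sketch of both kinds of recursion; the factorial and even/odd functions are illustrative assumptions, not examples given on the slides.

```cpp
// Direct recursion: the function calls itself.
unsigned long long factorial(unsigned int n) {
    if (n <= 1) return 1;          // base case keeps the recursion finite
    return n * factorial(n - 1);   // recursive call on a smaller instance
}

// Indirect recursion: isEven calls isOdd, which calls isEven again.
bool isOdd(unsigned int n);

bool isEven(unsigned int n) {
    if (n == 0) return true;
    return isOdd(n - 1);
}

bool isOdd(unsigned int n) {
    if (n == 0) return false;
    return isEven(n - 1);
}
```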
Analysis of algorithms
• Issues:
– correctness
– time efficiency
– space efficiency
– optimality
• Approaches:
– theoretical analysis
– empirical analysis
Theoretical analysis of time
efficiency
Time efficiency is analyzed by determining the
number of repetitions of the basic operation
as a function of input size
• Basic operation: the operation that contributes
most towards the running time of the
algorithm
Theoretical analysis of time
efficiency
T(n) ≈ c_op · C(n)

where T(n) is the running time, c_op is the execution time of the
basic operation, C(n) is the number of times the basic operation is
executed, and n is the input size.
Input size and basic operation
examples
• Searching for a key in a list of n items
– Input size measure: number of the list's items, i.e. n
– Basic operation: key comparison
• Multiplication of two matrices
– Input size measure: matrix dimensions or total number of elements
– Basic operation: multiplication of two numbers
• Checking primality of a given integer n
– Input size measure: size of n = number of digits (in binary representation)
– Basic operation: division
• Typical graph problem
– Input size measure: number of vertices and/or edges
– Basic operation: visiting a vertex or traversing an edge
Empirical analysis of time efficiency
• Select a specific (typical) sample of inputs
• Use a physical unit of time (e.g., milliseconds),
or count the actual number of executions
of the basic operation
• Analyze the empirical data
• We mostly do theoretical analysis (may do
empirical in assignment)
Best-case, average-case, worst-
case
For some algorithms C(n) is independent of the input set. For others, C(n)
depends on which input set (of size n) is used; see the search example on the
next slide. Consider three possibilities:
• Worst case: Cworst(n) – maximum over inputs of size n
• Best case: Cbest(n) – minimum over inputs of size n
• Average case: Cavg(n) – "average" over inputs of size n
– Number of times the basic operation will be executed on
typical input
Best-case, average-case, worst-
case
– NOT the average of the worst and best cases
– The expected number of basic operations, considered as a
random variable under some assumption about the
probability distribution of all possible inputs of size n
– Consider all possible input sets of size n and average C(n)
over all of them
• For some algorithms all three cases are the same (i.e.,
every-case performance)
Example: Find maximum
• Worst case
• Best case
• Average case: depends on assumptions
about the input (e.g., proportion of found vs. not-
found keys)
• All case
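A minimal C++ sketch of the find-maximum example (the function name is an assumption); the basic operation, the comparison list[i] > max, executes exactly n − 1 times for every input of size n, so the best, worst, and average cases coincide.

```cpp
#include <cstddef>
#include <vector>

// Returns the largest element; assumes the list is non-empty.
int findMax(const std::vector<int>& list) {
    int max = list[0];
    for (std::size_t i = 1; i < list.size(); ++i) {
        if (list[i] > max) {   // basic operation: one comparison per element
            max = list[i];
        }
    }
    return max;
}
```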
Order of growth
• Most important: Order of growth within a
constant multiple as n→∞
• Examples:
– How much faster will the algorithm run on a computer
that is twice as fast? What say you?
• Time = …
– How much longer does it take to solve a problem of
double the input size? What say you?
• Time = …
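A hedged worked answer, assuming for illustration a running time of the form T(n) ≈ c·n² (this particular form is not stated on the slide): a computer twice as fast halves c, so the time halves; doubling the input size gives

```latex
T(2n) \approx c\,(2n)^2 = 4\,c\,n^2 \approx 4\,T(n)
```

i.e., roughly four times longer.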
Performance Analysis
• Performance evaluation
– Performance analysis
– Performance measurement
• Performance analysis - a priori
– an important branch of CS: complexity theory
– estimates time and space
– machine independent
• Performance measurement - a posteriori
– the actual time and space requirements
– machine dependent
Time Complexity
Definition
The time complexity, T(P), of a program P is
the sum of its compile time and its run time.
Total time
T(P) = compile time + run (or execution) time
= c + tP(instance characteristics)
Compile time does not depend on the instance
characteristics.
How to evaluate?
Use the system clock
Count the number of steps performed
(machine-independent)
Cont..
Definition of a program step
A program step is a syntactically or semantically
meaningful program segment whose execution
time is independent of the instance characteristics
(10 additions can be one step, 100 multiplications
can also be one step)
Asymptotic Complexity
• Running time of an algorithm as a function of
input size n for large n.
• Expressed using only the highest-order term
in the expression for the exact running time.
– Instead of the exact running time, say Θ(n²).
• Describes behavior of function in the limit.
• Written using Asymptotic Notation.
Asymptotic Notation (O, Ω, Θ)
• Motivation
– Target: compare the time complexity of two programs that
compute the same function, and predict the growth in run time
as the instance characteristics change
– Determining the exact step count is a difficult task
– and is not very useful for comparative purposes
ex: c1n² + c2n ≤ c3n for n ≤ 98 (c1=1, c2=2, c3=100)
c1n² + c2n > c3n for n > 98
– Determining the exact step count is usually not worth the effort
(it cannot give the exact run time anyway)
• Asymptotic notation
– Big "oh": O
• upper bound (the most widely used in practice)
– Omega: Ω
• lower bound
– Theta: Θ
• both upper and lower bound
Asymptotic Notation
• Θ, O, Ω
• Defined for functions over the natural numbers.
– Ex: f(n) = Θ(n²).
– Describes how f(n) grows in comparison to n².
• Each defines a set of functions; in practice they are used to
compare the sizes of two functions.
• The notations describe different rate-of-growth
relations between the defining function and the
defined set of functions.
Θ-notation
For a function g(n), we define Θ(g(n)), big-Theta of g(n), as the set:
Θ(g(n)) = { f(n) : there exist positive constants c1, c2,
and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n)
for all n ≥ n0 }
g(n) is an asymptotically tight bound for f(n).
Intuitively: the set of all functions that
have the same rate of growth as g(n).
O-notation
For a function g(n), we define O(g(n)), big-O of g(n), as the set:
O(g(n)) = { f(n) : there exist positive constants c and n0
such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }
g(n) is an asymptotic upper bound for f(n).
Intuitively: the set of all functions
whose rate of growth is the
same as or lower than that of
g(n).
f(n) = Θ(g(n)) ⇒ f(n) = O(g(n)).
Θ(g(n)) ⊆ O(g(n)).
Ω-notation
For a function g(n), we define Ω(g(n)), big-Omega of g(n), as the set:
Ω(g(n)) = { f(n) : there exist positive constants c and n0
such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }
g(n) is an asymptotic lower bound for f(n).
Intuitively: the set of all functions
whose rate of growth is the
same as or higher than that of
g(n).
f(n) = Θ(g(n)) ⇒ f(n) = Ω(g(n)).
Θ(g(n)) ⊆ Ω(g(n)).
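As a hedged worked example (the particular function is an assumption chosen for illustration): for f(n) = 3n² + 10n, we have f(n) = Θ(n²), witnessed by the constants c1 = 3, c2 = 13, n0 = 1, since

```latex
0 \le 3n^2 \le 3n^2 + 10n \le 13n^2 \qquad \text{for all } n \ge 1 .
```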
Relations Between Θ, O, Ω
Relations Between Θ, Ω, O
• I.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n))
• In practice, asymptotically tight bounds are
obtained from asymptotic upper and lower
bounds.
Theorem: For any two functions g(n) and f(n),
f(n) = Θ(g(n)) iff
f(n) = O(g(n)) and f(n) = Ω(g(n)).
Standard Notation and
Common Functions
• Monotonicity
A function f(n) is monotonically increasing if m ≤ n
implies f(m) ≤ f(n).
A function f(n) is monotonically decreasing if m ≤ n
implies f(m) ≥ f(n).
A function f(n) is strictly increasing
if m < n implies f(m) < f(n).
A function f(n) is strictly decreasing
if m < n implies f(m) > f(n).
Cont..
• Floors and ceilings
For any real number x, the greatest integer less than
or equal to x is denoted by ⌊x⌋.
For any real number x, the least integer greater than
or equal to x is denoted by ⌈x⌉.
For all real numbers x,
x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1.
Both functions are monotonically increasing.
Cont..
• Exponentials
For all n and a ≥ 1, the function a^n is the exponential function
with base a and is monotonically increasing.
• Logarithms
The textbook adopts the following conventions:
lg n = log_2 n (binary logarithm),
ln n = log_e n (natural logarithm),
lg^k n = (lg n)^k (exponentiation),
lg lg n = lg(lg n) (composition),
lg n + k = (lg n) + k (precedence of lg).
Cont..
• Important relationships
For all real constants a and b such that a > 1,
n^b = o(a^n)
that is, any exponential function with a base
strictly greater than unity grows faster than any
polynomial function.
For all real constants a and b such that a > 0,
lg^b n = o(n^a)
that is, any positive polynomial function grows
faster than any polylogarithmic function.
Cont..
• Factorials
For all n, the function n! or "n factorial" is given by
n! = n · (n−1) · (n−2) · (n−3) · … · 2 · 1
It can be established that
n! = o(n^n)
n! = ω(2^n)
lg(n!) = Θ(n lg n)
Cont..
• Functional iteration
The notation f^(i)(n) represents the function f(n) iteratively applied
i times to an initial value of n; recursively,
f^(i)(n) = n if i = 0
f^(i)(n) = f(f^(i−1)(n)) if i > 0
Example:
If f(n) = 2n,
then f^(2)(n) = f(2n) = 2(2n) = 2²n
then f^(3)(n) = f(f^(2)(n)) = 2(2²n) = 2³n
then f^(i)(n) = 2^i n
Cont..
• Iterated logarithm function
The notation lg* n, read "log star of n", is defined as
lg* n = min { i ≥ 0 : lg^(i) n ≤ 1 }
Example:
lg* 2 = 1
lg* 4 = 2
lg* 16 = 3
lg* 65536 = 4
lg* 2^65536 = 5
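A minimal C++ sketch of this definition; the function name lgStar and the use of double are assumptions for illustration.

```cpp
#include <cmath>

// lg* n: how many times lg must be applied before the value drops to <= 1.
int lgStar(double n) {
    int i = 0;
    while (n > 1.0) {
        n = std::log2(n);  // apply lg once more
        ++i;
    }
    return i;
}
// lgStar(2) == 1, lgStar(4) == 2, lgStar(16) == 3, lgStar(65536) == 4
```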
Solving recurrences
Substitution Method
• There are mainly three ways of solving recurrences.
• 1) Substitution Method: We make a guess for the
solution and then use mathematical induction to
prove that the guess is correct.
• For example, consider the recurrence T(n) = 2T(n/2) + n.
We guess the solution as T(n) = O(n log n).
• Now we use induction to prove our guess. We need to
prove that T(n) ≤ cn log n for some constant c > 0.
• We may assume that the bound holds for all values smaller than n.
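A sketch of the omitted inductive step, assuming the bound already holds for n/2, i.e., T(n/2) ≤ c(n/2) lg(n/2):

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + n
      \le 2\,c\,\tfrac{n}{2}\lg\tfrac{n}{2} + n \\
     &= c\,n\lg n - c\,n + n
      \le c\,n\lg n \qquad \text{for } c \ge 1 .
\end{aligned}
```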
Recursion-tree method
• A recursion tree models the costs (time) of a
recursive execution of an algorithm.
• The recursion tree method is good for generating
guesses for the substitution method.
• The recursion-tree method can be unreliable, just
like any method that uses ellipses (…).
• The recursion-tree method promotes intuition,
however.
Example of recursion tree
Solve T(n) = T(n/4) + T(n/2) + n²:
Expand the recurrence level by level. The root costs n²; its two children
cost (n/4)² and (n/2)²; the four grandchildren cost (n/16)², (n/8)², (n/8)²,
and (n/4)²; and so on, down to leaves of cost Θ(1).
The level costs form a geometric series:
n², (5/16)n², (5/16)²n², (5/16)³n², …
Total = n² (1 + 5/16 + (5/16)² + (5/16)³ + …) = Θ(n²).
The master method
Master Theorem-
The Master Theorem is a popular method for solving
recurrence relations.
Master's theorem solves recurrence relations of the form
T(n) = a·T(n/b) + θ(n^k · log^p n),
where a ≥ 1, b > 1, k ≥ 0, and p is a real number.
Cont..
Master Theorem Cases-
To solve recurrence relations using Master's theorem, we
compare a with b^k.
Then, we follow these cases-
Case-01:
If a > b^k, then T(n) = θ(n^(log_b a))
Case-02:
If a = b^k and
If p < -1, then T(n) = θ(n^(log_b a))
Cont..
If p = -1, then T(n) = θ(n^(log_b a) · log log n)
If p > -1, then T(n) = θ(n^(log_b a) · log^(p+1) n)
Case-03:
If a < b^k and
If p < 0, then T(n) = O(n^k)
If p >= 0, then T(n) = θ(n^k · log^p n)
Cont..
PRACTICE PROBLEMS BASED ON MASTER
THEOREM-
Problem-01:
Solve the following recurrence relation using Master's
theorem-
T(n) = 3T(n/2) + n²
Solution-
We compare the given recurrence relation with T(n) =
aT(n/b) + θ(n^k · log^p n).
Then, we have-
a = 3
Cont..
b = 2
k = 2
p = 0
Now, a = 3 and b^k = 2^2 = 4.
Clearly, a < b^k.
So, we follow Case-03.
Since p = 0 ≥ 0, we have-
T(n) = θ(n^k · log^p n)
T(n) = θ(n² · log^0 n)
Thus,
T(n) = θ(n²)
Cont..
Problem-02:
Solve the following recurrence relation using Master's
theorem-
T(n) = 2T(n/2) + n log n
Solution-
We compare the given recurrence relation with T(n) =
aT(n/b) + θ(n^k · log^p n).
Then, we have-
a = 2
b = 2
k = 1
p = 1
Cont..
Now, a = 2 and b^k = 2^1 = 2.
Clearly, a = b^k.
So, we follow Case-02.
Since p = 1 > -1, we have-
T(n) = θ(n^(log_b a) · log^(p+1) n)
T(n) = θ(n^(log_2 2) · log^(1+1) n)
Thus,
T(n) = θ(n · log^2 n)
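A small C++ sketch that mechanically applies the cases above to report the asymptotic bound; the function name masterTheorem and the output format are assumptions for illustration, and the exact '==' comparison of a and b^k is acceptable here only because the inputs are small integers.

```cpp
#include <cmath>
#include <iostream>
#include <sstream>
#include <string>

// Sketch (assumed helper, not from the slides): report the bound for
// T(n) = a*T(n/b) + theta(n^k * log^p n) using the cases listed above.
std::string masterTheorem(double a, double b, double k, double p) {
    const double logba = std::log(a) / std::log(b);  // log_b(a)
    const double bk = std::pow(b, k);                // b^k
    std::ostringstream out;
    if (a > bk) {                                    // Case-01
        out << "theta(n^" << logba << ")";
    } else if (a == bk) {                            // Case-02
        if (p > -1)       out << "theta(n^" << logba << " * log^" << p + 1 << " n)";
        else if (p == -1) out << "theta(n^" << logba << " * log log n)";
        else              out << "theta(n^" << logba << ")";
    } else {                                         // Case-03
        if (p >= 0) out << "theta(n^" << k << " * log^" << p << " n)";
        else        out << "O(n^" << k << ")";
    }
    return out.str();
}

int main() {
    std::cout << masterTheorem(3, 2, 2, 0) << "\n";  // Problem-01: theta(n^2 * log^0 n)
    std::cout << masterTheorem(2, 2, 1, 1) << "\n";  // Problem-02: theta(n^1 * log^2 n)
}
```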
Amortized Analysis
• Amortized Analysis is used for algorithms where an
occasional operation is very slow, but most of the other
operations are faster.
• In Amortized Analysis, we analyze a sequence of
operations and guarantee a worst case average time
which is lower than the worst case time of a particular
expensive operation.
• Examples of data structures whose operations are
analyzed using Amortized Analysis include Hash Tables,
Disjoint Sets and Splay Trees.
Cont..
• Consider not just one operation, but a sequence of operations
on a given data structure.
• Average cost over a sequence of operations.
• Probabilistic analysis:
– Average case running time: average over all possible inputs
for one algorithm (operation).
– If probability is used, it is called the expected running time.
• Amortized analysis:
– No probability is involved.
– Average performance over a sequence of operations, even if
some operations are expensive.
– Guarantees the average performance of each operation in
the sequence in the worst case.
Three Methods of Amortized Analysis
• Aggregate analysis:
– Total cost of n operations divided by n.
• Accounting method:
– Assign each type of operation a (possibly different) amortized cost,
– overcharge some operations,
– store the overcharge as credit on specific objects,
– then use the credit to compensate for some later operations.
• Potential method:
– Same idea as the accounting method,
– but the credit is stored as "potential energy" of the data structure as a whole.
Example for amortized analysis
• Stack operations:
– PUSH(S,x): O(1)
– POP(S): O(1)
– MULTIPOP(S,k): O(min(s,k)), where s is the stack size
• while not STACK-EMPTY(S) and k>0
• do POP(S)
• k = k-1
• Let us consider a sequence of n PUSH, POP, and MULTIPOP operations.
– The worst case cost of a MULTIPOP in the sequence is O(n),
since the stack size is at most n.
– Thus the cost of the sequence is O(n²). Correct, but not tight.
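A minimal C++ sketch of the three stack operations; the struct name and the use of std::vector are assumptions, not from the slides.

```cpp
#include <cstddef>
#include <vector>

// The three stack operations used in the amortized-analysis example.
struct Stack {
    std::vector<int> data;

    void push(int x) { data.push_back(x); }            // PUSH(S,x): O(1)

    void pop() { if (!data.empty()) data.pop_back(); }  // POP(S): O(1)

    // MULTIPOP(S,k): pops min(s,k) elements, where s is the current size.
    void multipop(std::size_t k) {
        while (!data.empty() && k > 0) {
            pop();
            --k;
        }
    }
};
```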
Aggregate Analysis
• In fact, a sequence of n operations on an initially
empty stack costs at most O(n). Why?
Each object can be popped at most once (including via MULTIPOP) for each time
it is pushed, so #POPs is at most #PUSHes, which is at most n.
Thus the average cost of an operation is O(n)/n = O(1).
In aggregate analysis, the amortized cost is defined to be this average cost.
Another example: incrementing a binary counter
• Binary counter of length k, stored as a bit array A[0..k-1].
• INCREMENT(A)
1. i ← 0
2. while i < k and A[i] = 1
3. do A[i] ← 0 (flip, reset)
4. i ← i + 1
5. if i < k
6. then A[i] ← 1 (flip, set)
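A minimal C++ sketch of INCREMENT (names assumed), with A[0] as the least-significant bit:

```cpp
#include <cstddef>
#include <vector>

// INCREMENT on a k-bit binary counter stored as a bit array A[0..k-1].
void increment(std::vector<int>& A) {
    std::size_t i = 0;
    while (i < A.size() && A[i] == 1) {
        A[i] = 0;   // flip, reset
        ++i;
    }
    if (i < A.size()) {
        A[i] = 1;   // flip, set
    }
}
```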
Analysis of INCREMENT(A)
• Cursory analysis:
– A single execution of INCREMENT takes
O(k) in the worst case (when A contains all
1s)
– So a sequence of n executions takes O(nk)
in worst case (suppose initial counter is 0).
– This bound is correct, but not tight.
• The tight bound is O(n) for n executions.
Amortized (Aggregate) Analysis of INCREMENT(A)
Observation: The running time is determined by the number of bit flips,
but not all bits flip each time INCREMENT is called.
A[0] flips every time, n times in total.
A[1] flips every other time, ⌊n/2⌋ times.
A[2] flips every fourth time, ⌊n/4⌋ times.
….
For i = 0, 1, …, k-1, A[i] flips ⌊n/2^i⌋ times.
Thus the total number of flips is Σi=0..k-1 ⌊n/2^i⌋
< n · Σi=0..∞ 1/2^i
= 2n.
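A self-contained C++ sketch (all names assumed) that empirically checks this aggregate bound by counting flips over n INCREMENT operations:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// INCREMENT that also counts how many bits it flips.
std::size_t incrementCountingFlips(std::vector<int>& A) {
    std::size_t flips = 0, i = 0;
    while (i < A.size() && A[i] == 1) { A[i] = 0; ++flips; ++i; }
    if (i < A.size()) { A[i] = 1; ++flips; }
    return flips;
}

int main() {
    const std::size_t k = 16, n = 1000;   // k bits is plenty for n increments
    std::vector<int> A(k, 0);
    std::size_t totalFlips = 0;
    for (std::size_t op = 0; op < n; ++op)
        totalFlips += incrementCountingFlips(A);
    assert(totalFlips < 2 * n);   // aggregate bound from the slide: fewer than 2n flips
    return 0;
}
```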
Amortized Analysis: Accounting Method
• Idea:
– Assign different charges to different operations.
– The amount of the charge is called the amortized cost.
– The amortized cost may be more or less than the actual cost.
– When amortized cost > actual cost, the difference is saved
in specific objects as credits.
– The credits can be used by later operations whose
amortized cost < actual cost.
• By comparison, in aggregate analysis all operations
have the same amortized cost.
Accounting Method (cont.)
• Conditions:
– Suppose the actual cost is ci for the ith operation in the sequence,
and the amortized cost is ci'.
– Σi=1..n ci' ≥ Σi=1..n ci should hold.
• Since we want to show that the average cost per operation is
small using the amortized cost, we need the total amortized cost
to be an upper bound on the total actual cost.
• This must hold for all sequences of operations.
– The total credit is Σi=1..n ci' − Σi=1..n ci, which should be
nonnegative.
• Moreover, Σi=1..t ci' − Σi=1..t ci ≥ 0 for any t > 0.
Accounting Method: Stack Operations
• Actual costs:
– PUSH: 1, POP: 1, MULTIPOP: min(s,k).
• Let us assign the following amortized costs:
– PUSH: 2, POP: 0, MULTIPOP: 0.
• Similar to a stack of plates in a cafeteria.
– Suppose $1 represents a unit cost.
– When pushing a plate, use one dollar to pay the actual
cost of the push and leave one dollar on the plate as
credit.
– Whenever popping a plate, the one dollar on the plate is
used to pay the actual cost of the POP (same for
MULTIPOP).
Cont..
– By charging PUSH a little more, we do not need to charge POP or
MULTIPOP at all.
• The total amortized cost for n PUSH, POP, and MULTIPOP operations is O(n),
thus O(1) average amortized cost per operation.
• The conditions hold: total amortized cost ≥ total actual cost, and the
amount of credit never becomes negative.
Accounting method: binary counter
• Let $1 represent each unit of cost (i.e., the flip of one bit).
• Charge an amortized cost of $2 to set a bit to 1.
• Whenever a bit is set, use $1 to pay the actual cost, and store
another $1 on the bit as credit.
• When a bit is reset, the stored $1 pays the cost.
• At any point, every 1 in the counter stores $1; the number of 1's is
never negative, so neither is the total credit.
• At most one bit is set in each operation, so the amortized cost of
an operation is at most $2.
• Thus, the total amortized cost of n operations is O(n), and the average
is O(1).
The Potential Method
• Same as the accounting method: something prepaid is used later.
• Different from the accounting method:
– The prepaid work is kept not as credit, but as "potential energy",
or "potential".
– The potential is associated with the data structure as a
whole rather than with specific objects within the data
structure.
The Potential Method (cont.)
• Initial data structure D0.
• n operations, resulting in D1, D2, …, Dn, with costs c1, c2, …, cn.
• A potential function Φ: {Di} → R (real numbers);
Φ(Di) is called the potential of Di.
• The amortized cost ci' of the ith operation is:
ci' = ci + Φ(Di) − Φ(Di-1) (actual cost + potential change)
• Σi=1..n ci' = Σi=1..n (ci + Φ(Di) − Φ(Di-1))
= Σi=1..n ci + Φ(Dn) − Φ(D0)
The Potential Method (cont.)
• If Φ(Dn) ≥ Φ(D0), then the total amortized cost is an upper bound on the
total actual cost.
• But we do not know in advance how many operations there will be, so
Φ(Di) ≥ Φ(D0) is required for every i.
• It is convenient to define Φ(D0) = 0, and so Φ(Di) ≥ 0 for all i.
• If the potential change is positive (i.e., Φ(Di) − Φ(Di-1) > 0), then ci'
is an overcharge (so store the increase as potential);
• otherwise it is an undercharge (discharge the potential to pay the
actual cost).
Potential method: stack operation
• The potential of a stack is the number of objects in the stack.
• So Φ(D0) = 0, and Φ(Di) ≥ 0.
• Amortized cost of stack operations:
– PUSH:
• Potential change: Φ(Di) − Φ(Di-1) = (s+1) − s = 1.
• Amortized cost: ci' = ci + Φ(Di) − Φ(Di-1) = 1 + 1 = 2.
– POP:
• Potential change: Φ(Di) − Φ(Di-1) = (s−1) − s = −1.
• Amortized cost: ci' = ci + Φ(Di) − Φ(Di-1) = 1 + (−1) = 0.
– MULTIPOP(S,k): k' = min(s,k)
• Potential change: Φ(Di) − Φ(Di-1) = −k'.
• Amortized cost: ci' = ci + Φ(Di) − Φ(Di-1) = k' + (−k') = 0.
Cont..
• So the amortized cost of each operation is O(1), and the total
amortized cost of n operations is O(n).
• Since the total amortized cost is an upper bound on the actual cost,
the worst case cost of n operations is O(n).
Potential method: binary counter
• Define the potential of the counter after the ith INCREMENT as
Φ(Di) = bi, the number of 1's. Clearly, Φ(Di) ≥ 0.
• Let us compute the amortized cost of an operation:
– Suppose the ith operation resets ti bits.
– The actual cost ci of the operation is at most ti + 1.
– If bi = 0, then the ith operation resets all k bits, so bi-1 = ti = k.
– If bi > 0, then bi = bi-1 − ti + 1.
– In either case, bi ≤ bi-1 − ti + 1.
– So the potential change is Φ(Di) − Φ(Di-1) ≤ (bi-1 − ti + 1) − bi-1 = 1 − ti.
– So the amortized cost is: ci' = ci + Φ(Di) − Φ(Di-1) ≤ (ti + 1) + (1 − ti) = 2.
• The total amortized cost of n operations is O(n).
• Thus the worst case cost is O(n).