Design And Analysis of Algorithm (BCS-503)
Faculty: Devesh Garg
Course: B.Tech, 5th Semester (Sections A & B)
Session: 2024-25
Department of Computer Science & Engineering
Syllabus
Introduction to Algorithm
LECTURE-1
Design and Analysis of Algorithms
Algorithm:
An algorithm is a procedure (a finite set of well-defined
instructions) for accomplishing some task which,
given an initial state, terminates in a defined end-state.
Characteristics of an Algorithm
Input: Zero or more quantities are externally supplied.
Output: At least one quantity is produced.
Definiteness: Each instruction is clear and unambiguous.
Finiteness: If we trace out the instructions of the
algorithm, then for all cases the algorithm terminates after
a finite number of steps.
Effectiveness: Every instruction must be basic enough
that it can be carried out, in principle, by a person using
only paper and pencil.
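A small concrete example helps tie these properties together. Euclid's GCD algorithm (not from the slides; added here purely as an illustration) satisfies all five characteristics:

```python
def gcd(a, b):
    # Input: two non-negative integers, not both zero (externally supplied)
    while b != 0:        # Definiteness: each step is clear and unambiguous
        a, b = b, a % b  # Effectiveness: only basic arithmetic is used
    return a             # Output: exactly one quantity is produced
    # Finiteness: b strictly decreases each pass, so the loop terminates
```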
Algorithm Design Techniques
• Divide & Conquer
• Recursive Algorithm
• Greedy Approach
• Dynamic Programming
• Backtracking
• Branch & Bound
• Randomized Algorithm
• Brute Force Algorithm
Analyzing the Algorithm
• Time complexity
– How many steps are required to execute the
algorithm
• Space complexity
– How much space (memory units) is required to
execute the algorithm
• The amount of memory required by an algorithm to
run to completion.
• Some algorithms may be more efficient if the data is
loaded completely into memory.
Coding example #1
// Sum of n elements; take a constant cost at every line
Algorithm Sum(a, n)      // sum of the n elements of array a
    sum = 0;
    for (i = 0; i < n; i++)
        sum = sum + a[i];
    return sum;
Total number of memory units:
sum = 1 memory unit
i   = 1 memory unit
n   = 1 memory unit
a[] = n memory units  // size of the array
Total memory units = n + 3
Space Complexity
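The pseudocode above can be sketched as runnable Python (the name `array_sum` is chosen here for illustration); the variables mirror the memory-unit tally of one unit each for sum, i, and n, plus n units for the array:

```python
def array_sum(a, n):
    total = 0              # "sum": 1 memory unit
    for i in range(n):     # "i": 1 unit; "n": 1 unit
        total += a[i]      # a[]: n units, so n + 3 units overall
    return total
```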
Recurrence Equation
Coding example #2
Example : Recursive Sum of n elements
The total memory units will be:
Rsum requires at least 3 words (space for the value of n, the
return address, and a pointer to a[]).
The depth of recursion is n + 1,
so the recursion stack space needed is ≥ 3(n + 1).
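The slide describes Rsum but does not show it; a minimal sketch consistent with the description (each call holds n, a return address, and a pointer to a[], and the recursion reaches depth n + 1 counting the base case) might look like:

```python
def rsum(a, n):
    # Base case is reached after n recursive calls,
    # so n + 1 stack frames are live at the deepest point
    if n == 0:
        return 0
    return rsum(a, n - 1) + a[n - 1]
```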
Kinds of analyses
Worst-case: (usually)
• T(n) = maximum time of algorithm on any
input of size n.
Average-case: (sometimes)
• T(n) = expected time of algorithm over all
inputs of size n.
• Need assumption of statistical distribution of
inputs.
Best-case: (NEVER)
• Cheat with a slow algorithm that works fast
on some input.
Time Complexity
• The running time depends on the input: an
already sorted sequence is easier to sort.
• Major Simplifying Convention:
Parameterize the running time by the size of
the input, since short sequences are easier to
sort than long ones.
T_A(n) = time of A on length-n inputs
• Generally, we seek upper bounds on the
running time, to have a guarantee of
performance.
Efficiency Comparison of Two Algorithms
• Suppose we sort n = 10^6 numbers:
– Insertion sort: c1·n^2
– Merge sort: c2·n·lg n
– Best programmer (c1 = 2), machine language, one-billion-instructions/second computer A.
– Bad programmer (c2 = 50), high-level language, ten-million-instructions/second computer B.
– Insertion sort: 2·(10^6)^2 instructions / 10^9 instructions per second = 2000 seconds.
– Merge sort: 50·(10^6 · lg 10^6) instructions / 10^7 instructions per second ≈ 100 seconds.
– Thus, merge sort on B is 20 times faster than insertion sort on A!
– If sorting ten million numbers: 2.3 days vs. 20 minutes.
• Conclusions:
– Algorithms for solving the same problem can differ dramatically in their
efficiency.
– These differences are much more significant than the differences due to
hardware and software.
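The arithmetic behind this comparison can be checked directly; the constants c1 = 2, c2 = 50 and the machine speeds are the ones given above:

```python
from math import log2

n = 10**6
t_insertion = 2 * n**2 / 10**9       # computer A: 10^9 instructions/second
t_merge = 50 * n * log2(n) / 10**7   # computer B: 10^7 instructions/second

print(t_insertion)   # 2000.0 seconds
print(t_merge)       # ~99.7 seconds, i.e. merge sort is ~20x faster
```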
Factors that Determine
Running time of a Program
• problem size: n
• basic algorithm / actual processing
• memory access speed
• CPU/processor speed
• # of processors?
• compiler/linker optimization?
Running time of a program
or
Transaction processing time
• Amount of input: n → at minimum, a linear increase
• Basic algorithm / actual processing →
depends on the algorithm!
• Memory access speed → by a constant factor
• CPU/processor speed → by a constant factor
• # of processors? → yes, if multi-threading
or multiple processes are used
• Compiler/linker optimization? → ~20%
Time Complexity
• measure of algorithm efficiency
• has a big impact on running time.
• Big-O notation is used.
• To deal with n items, time complexity can be
O(1), O(log n), O(n), O(n log n), O(n^2), O(n^3),
O(2^n), even O(n^n).
Coding example #1
// Sum of n elements; take a constant cost at every line
Algorithm Sum(a, n)         // sum of the n elements of array a
    sum = 0;                // C1 * 1
    for (i = 0; i < n; i++) // C2 * (n+1)
        sum = sum + a[i];   // C3 * n
    return sum;             // C4 * 1
Total steps:
= C1*1 + C2*(n+1) + C3*n + C4*1
= n(C2+C3) + (C1+C2+C4)
→ of order n
Recurrence Equation
Coding example #2
Example: Recursive power method
If N = 0, the running time is T(0) = 2.
• T(N) = 2 + T(N-1)
       = 2 + 2 + T(N-2)
       = 2 + 2 + 2 + T(N-3)
       = 2 + 2 + ... + 2 (N times) + T(0)
       = 2N + 2
So T(N) = 2N + 2.
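The recursive power method itself is not shown on the slide; a sketch matching the recurrence (assuming roughly two units of work per call, one test plus one multiplication, giving T(N) = 2 + T(N-1)) could be:

```python
def power(x, n):
    # Each call does ~2 units of work: the test and the multiply,
    # so T(N) = 2 + T(N-1), which unrolls to 2N + 2.
    if n == 0:
        return 1
    return x * power(x, n - 1)
```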
Coding example #3
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        sum[i] += entry[i][j];
Coding example #3
for (i = 0; i < n; i++)        // c1 * (n+1)
    for (j = 0; j < n; j++)    // c2 * n * (n+1)
        sum[i] += entry[i][j]; // c3 * n * n
Total steps
= c1*(n+1) + c2*n*(n+1) + c3*n*n
= n^2 (c2+c3) + n(c1+c2) + c1
= order of n^2
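The nested loop can be run and its step count checked: the inner body executes exactly n·n times, matching the order-n^2 total above (the function name `row_sums` is chosen here for illustration):

```python
def row_sums(entry, n):
    sums = [0] * n
    steps = 0
    for i in range(n):
        for j in range(n):
            sums[i] += entry[i][j]
            steps += 1       # inner body runs n * n times
    return sums, steps
```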
Coding example #4
for (i = 0; i < n; i++)
    for (j = 0; j < i; j++)
        m += j;
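The slides leave this example unanalyzed. Here the inner loop runs i times for each value of i, so the body executes 0 + 1 + ... + (n-1) = n(n-1)/2 times, which is still of order n^2. A sketch that counts the iterations:

```python
def triangular_steps(n):
    count = 0
    for i in range(n):
        for j in range(i):   # inner loop runs i times
            count += 1
    return count             # equals n*(n-1)//2
```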
Compare running time growth rates
[Figure: running time (1–5 ms) versus input, for programs A through G, with worst-case, best-case, and average-case curves]
Suppose the program includes an if-then statement that may
execute or not → variable running time.
Typically, algorithms are measured by their worst case.
Growth Rates
• Growth rates of functions:
– Linear → n
– Quadratic → n^2
– Cubic → n^3
• In a log-log chart, the slope of the line corresponds to
the growth rate of the function.
[Figure: log-log plot of T(n) versus n for linear, quadratic, and cubic growth]
Constant Factors
• The growth rate is not affected by
– constant factors or
– lower-order terms
• Examples
– 10^2·n + 10^5 is a linear function
– 10^5·n^2 + 10^8·n is a quadratic function
[Figure: log-log plot of T(n) versus n showing the linear and quadratic example functions]
Machine-independent time
What is insertion sort's worst-case time?
• Ignore machine-dependent constants; otherwise it is
impossible to verify and compare algorithms.
• Look at the growth of T(n) as n → ∞.
"Asymptotic Analysis"