Introduction
• Algorithm definition and its Characteristics
• Analysis of algorithm
Time complexity
Space complexity
• Recursion
• Rules and format of recursive function
• Recursion vs Iteration
• Examples
• Searching and sorting
Algorithm – Definition and Properties
• Algorithm Definition: An algorithm is a finite set of instructions that, if followed,
accomplishes a particular task.
• Properties of an algorithm:-
1. Input: An algorithm has zero (0) or more inputs.
2. Output: An algorithm must produce one (1) or more outputs.
3. Finiteness: An algorithm must contain a finite number of steps.
4. Definiteness: Each step of an algorithm must be clear and unambiguous.
5. Effectiveness: An algorithm should be effective, i.e., each operation can be performed with the
given inputs in a finite period of time by a person using paper and pencil.
Example- Algorithm specification
1. Algorithm Max(A, n)
2. // A is an array of size n.
3. {
4.     Result := A[1];
5.     for i := 2 to n do
6.         if A[i] > Result then Result := A[i];
7.     return Result;
8. }
Analysis of Algorithm
• The performance of an algorithm can be measured on the scales of time and space.
• The performance of a program is the amount of computer memory and time needed to run the
program. We use two approaches to determine the performance of a program:
1) An analytical approach known as a priori estimates (performance analysis)
2) An experimental study known as a posteriori testing (performance measurement)
In a priori analysis we obtain a function (of some relevant parameters) which bounds the
algorithm's computing time.
In a posteriori testing we collect actual statistics about the algorithm's consumption of time
and space while it is executing. (This is difficult to measure precisely, because the observed
time includes the algorithm's own running time plus other processing time spent by the system.)
A Priori Analysis
• In a priori analysis we obtain a function (of some relevant parameters)
which bounds the algorithm's computing time.
• Uses frequency count or step count method to find the order of magnitude
of the algorithm in a function of input.
Frequency count (or) step count:
It denotes the number of times a statement is executed.
Generally, a count value is assigned to each statement as follows:
• For comments and declarations the frequency count is zero (0).
• For assignments and return statements the frequency count is 1.
• Ignore lower-order terms when higher-order terms are present.
• Ignore constant multipliers.
Example
2n+3 can be written as O(n)
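As a worked illustration of the step-count rules above, here is a small summing function (our own example, not from the slides) with the per-statement counts annotated in comments; it reproduces the 2n+3 total quoted above:

```c
#include <assert.h>

/* Step-count illustration: s/e = steps per execution of the statement. */
int sum(int a[], int n) {        /* s/e = 0 (header/declaration)      */
    int s = 0;                   /* s/e = 1, frequency 1              */
    for (int i = 0; i < n; i++)  /* s/e = 1, frequency n + 1          */
        s = s + a[i];            /* s/e = 1, frequency n              */
    return s;                    /* s/e = 1, frequency 1              */
}                                /* total = 1 + (n+1) + n + 1 = 2n+3  */
```

The total 2n+3 steps is O(n) after dropping the constants, exactly as in the example above.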
Example-2
Step-count table for matrix addition (s/e = steps per execution):

Statement                                      s/e   Frequency   Total steps
Algorithm matrix_add(int a[][], int b[][]) {    0       -            0
    int i, j;                                   0       -            0
    int c[n][n];                                0       -            0
    for (i = 0; i < n; i++) {                   1      n+1          n+1
        for (j = 0; j < n; j++) {               1    n*(n+1)      n*(n+1)
            c[i][j] = a[i][j] + b[i][j];        1      n*n          n*n
        }                                       0       -            0
    }                                           0       -            0
                                                      Total:    2n² + 2n + 1
2n² + 2n + 1 can be written as O(n²)
Calculate the total number of steps using frequency method
Asymptotic Notation
• A convenient notation for representing the results of a priori analysis is
asymptotic notation.
• There are 5 types of Asymptotic notation
Big Oh notation (O)
Big Omega notation (Ω)
Theta notation (θ)
Little Oh notation (o)
Little omega notation (ω)
Asymptotic notations (cont..)
Big-Oh (O) notation
The function f(n) = O(g(n)) (read as "f of n is Big Oh of g of n") iff (if and only if)
there exist positive constants c and n0 such that f(n) <= c*g(n) for all n, n >= n0
Example
Let f(n) = 3n² + 2n and c*g(n) = 4n². Find n0 such that f(n) <= c*g(n):
For n = 2: 3*(4) + 2*2 <= 4*4  =>  16 <= 16
For n = 3: 3*(9) + 2*3 <= 4*9  =>  33 <= 36
i.e., f(n) <= c*g(n) for all n >= 2, with c = 4 and n0 = 2. Therefore 3n² + 2n = O(n²).
• n⁴ + 100n² + 10n + 50 is O(n⁴)
• 10n³ + 2n² is O(n³)
• n³ - n² is O(n³)
• Constants:
  10 is O(1)
  1273 is O(1)
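A quick numeric spot-check (our own sketch, not part of the slides) of the constants chosen in the example above, f(n) = 3n² + 2n, g(n) = n², c = 4, n0 = 2:

```c
#include <assert.h>

/* Returns 1 when f(n) <= c*g(n) holds for f(n) = 3n^2 + 2n, c*g(n) = 4n^2. */
int bound_holds(long n) {
    long f = 3 * n * n + 2 * n;   /* f(n) = 3n^2 + 2n */
    long cg = 4 * n * n;          /* c*g(n) with c = 4 */
    return f <= cg;
}
```

The bound fails at n = 1 (5 > 4) and holds for every n >= 2, matching n0 = 2.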
Big-O Visualization
Asymptotic notations (cont..)
Big-Omega (Ω) notation
The function f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff (if and only if)
there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n, n >= n0
Examples
• 5n² = Ω(n): there exist c, n0 such that 0 <= c*n <= 5n²;
  c*n <= 5n² holds with c = 1 and n0 = 1
• 100n + 5 ≠ Ω(n²): suppose there exist c, n0 such that 0 <= c*n² <= 100n + 5.
  100n + 5 <= 100n + 5n (for n >= 1) = 105n
  c*n² <= 105n  =>  n(c*n - 105) <= 0
  Since n is positive, c*n - 105 <= 0  =>  n <= 105/c,
  which is a contradiction: n cannot be bounded above by a constant.
• n = Ω(2n), n³ = Ω(n²), n = Ω(log n)
O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!) < O(nⁿ)
Different types of time complexities
Constant             O(1)
Logarithmic          O(log n)
Linear               O(n)
Linear logarithmic   O(n log n)
Quadratic            O(n²)
Cubic                O(n³)
Exponential          O(2ⁿ)
Factorial            O(n!)
Exponential          O(nⁿ)
Order of growth of the functions
Examples
• 2n² = O(n³): 2n² <= c*n³  =>  2 <= c*n; holds with c = 1 and n0 = 2
• n² = O(n²): n² <= c*n²  =>  c >= 1; holds with c = 1 and n0 = 1
• 1000n² + 1000n = O(n²): 1000n² + 1000n <= 1000n² + n² = 1001n² (for n >= 1000);
  c = 1001 and n0 = 1000
• n = O(n²): n <= c*n²  =>  c*n >= 1; holds with c = 1 and n0 = 1
Examples
void function(int n) {
int i=1,s=1;
while (s<=n) {
i++;
s=s+i;
printf("*");
}
}
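For the loop above, s takes the triangular-number values 1, 3, 6, 10, ..., so s reaches n after roughly √(2n) steps and the loop runs O(√n) times. An instrumented sketch (our own addition) that counts the iterations:

```c
#include <assert.h>

/* Instrumented version of the loop above: returns how many times the
   body executes. s grows like i^2/2, so the count is O(sqrt(n)). */
int count_iterations(int n) {
    int i = 1, s = 1, count = 0;
    while (s <= n) {
        i++;
        s = s + i;   /* s = 1, 3, 6, 10, ... (triangular numbers) */
        count++;
    }
    return count;
}
```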
What is the time complexity of the following code?
Example 1: void function(int n) {
int i , count=0;
for(i=1;i*i<=n; i++)
count++;
}
Example 2:
void function(int n){
for(int i=1;i<=n;i++) {
for(int j=1;j<=n;j+=i)
printf("*");
}
}
Example 3:
void function(int n) {
int i, j, k, count=0;
for(i=n/2;i<=n; i++)
{
for(j=1;j+n/2 <=n; j++)
{
for(k=1;k<=n; k=k*2) {
count++;
}
}
}
}
Example 4:
void function(int n) {
int i, j, k, count=0;
for(i=n/2;i<=n; i++)
{
for(j=1;j <=n; j=j*2)
{
for(k=1;k<=n; k=k*2) {
count++;
}
}
}
}
Space complexity
• The space complexity of an algorithm is the amount of memory it needs to run to completion.
The space requirement S(P) of any algorithm P is calculated as
S(P) = c + Sp(instance characteristics), where c is a constant.
Fixed part (independent of the characteristics, e.g., number and size, of the inputs and outputs):
1. Instruction space (space for the code)
2. Space for simple variables, fixed-size component variables, constants, and so on
Variable part:
1. Component variables whose size depends on the particular problem instance being solved
2. Referenced variables (depends on instance characteristics)
3. Recursion stack space
Examples
1. Algorithm calculate(a,b,c)
2. {
3. return a+b+b*c+(a+b-c)/(a+b)+4.0;
4. }
One word is enough to store a,b,c and result.
Sp(Instance characteristics )=0
1. Algorithm Sum(a, n)
2. {
3. s:=0.0;
4. for i := 1 to n do
5. s := s+a[i];
6. return s;
7 }
For array a of size n = n words
For n,i,and s we need 3 words
Therefore, Sp(Instance characteristics )= (n+3)
1. Algorithm Rsum(a,n)
2. {
3. if (n <= 0) then return 0.0;
4. else return RSum(a,n-1)+a[n];
5.}
Two words for the formal parameters a and n (one word
for each formal parameter) and one word for the
return address. So, 3 words in total for each call.
The recursion depth is n+1.
Therefore, Sp(instance characteristics) = 3 * (n+1)
Recursion
• A recursive function is a function that is defined in terms of itself. Similarly, an
algorithm is said to be recursive if the same algorithm is invoked in the body.
• An algorithm that calls itself is direct recursive. Algorithm A is said to be
indirect recursive if it calls another algorithm which in turn calls A.
• It is easy to prove the correctness of the recursive algorithm.
Rules for designing recursive algorithms
1. Base cases - At least one base case must be present.
2. Making progress / general case - Every recursive call must make progress
towards a base case; each call must reduce the size of the problem.
3. Design rule - Assume that all the recursive calls work.
Limitations of Recursion
• Do not use recursion if your answer is "NO" to any of the
following:
• Is the algorithm or data structure naturally suited to recursion?
(e.g., binary search and Towers of Hanoi for algorithms, trees for
data structures)
• Is the recursive solution shorter and more understandable?
• Does the recursive solution run within acceptable time and space
limits? (e.g., recursion is most attractive when its depth is logarithmic)
Examples
Factorial(n) = 1                     if n = 0 or n = 1
             = n * factorial(n-1)    if n > 1

Algorithm factorial(n) {
    if n == 0
        return 1;                    // base case
    else if n == 1
        return 1;                    // base case
    else
        return n * factorial(n-1);   // recursive call
}
If a and b are positive integers, then
GCD(a, b) = a                 if b = 0
          = GCD(b, a % b)     otherwise

Algorithm GCD(a, b) {
    if b == 0
        return a;             // base case
    else
        return GCD(b, a % b); // recursive call
}
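The GCD pseudocode above translates directly to C (a sketch of ours, assuming non-negative inputs with at least one of them positive):

```c
#include <assert.h>

/* Euclid's algorithm, as in the recursive pseudocode above. */
int gcd(int a, int b) {
    if (b == 0)
        return a;          /* base case          */
    return gcd(b, a % b);  /* recursive call     */
}
```

For example, gcd(270, 192) recurses through gcd(192, 78), gcd(78, 36), gcd(36, 6), gcd(6, 0).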
Run-time memory of a program
Fixed part:
  Code   - set of instructions (object code)
  Static - static data like global variables and symbolic constants
Variable part:
  Stack  - used for function calls (both recursive and non-recursive)
  Heap   - used for dynamically created objects (memory allocated at run time)
For every function call, an activation record is created and pushed (put) on the stack. On completion of the function call,
the activation record is popped (deleted) off the stack.
An activation record is a data structure that contains all the necessary information, such as actual parameters, return value,
and machine status, to support the function call and return.
Memory layout while main() runs: Code | Static | Main-AR (stack) | Heap
#include <stdio.h>

int fact(int n) {
    if (n <= 1)
        return 1;
    else
        return n * fact(n - 1);
}

int main() {
    int n, result;
    scanf("%d", &n);
    result = fact(n);
    printf("%d", result);
    return 0;
}
Execution sequence of recursive functions:
main() calls fact(5)
fact(5): return 5 * fact(4)
fact(4): return 4 * fact(3)
fact(3): return 3 * fact(2)
fact(2): return 2 * fact(1)
fact(1): return 1
The returns then unwind: fact(1) returns 1, fact(2) returns 2, fact(3) returns 6,
fact(4) returns 24, fact(5) returns 120.

Activation records on the stack (R.V = return value):
fact(5)  n=5, R.V1 = 5 * R.V2 = 120
fact(4)  n=4, R.V2 = 4 * R.V3 = 24
fact(3)  n=3, R.V3 = 3 * R.V4 = 6
fact(2)  n=2, R.V4 = 2 * R.V5 = 2
fact(1)  n=1, R.V5 = 1
#include <stdio.h>

int fact(int n) {
    int i, result = 1;
    for (i = 2; i <= n; i++)
        result = result * i;
    return result;
}

int main() {
    int n, result;
    scanf("%d", &n);
    result = fact(n);
    printf("%d", result);
    return 0;
}
Execution sequence of non-recursive functions:
main() calls fact(5)
fact(5): calculate result iteratively, return 120
Only one activation record is needed on the stack:
fact(5)  result=1, n=5, i
Memory layout: Code | Static | Main-AR (stack) | Heap
Recursion vs iteration

Recursion:
• The programming language must support recursion to implement recursive
  algorithms; some languages did not support recursion (e.g., early FORTRAN).
  At present almost all languages support recursion.
• Needs a context switch when a recursive call is made, i.e., pushing and
  popping of activation records.
• Needs extra memory for every recursive call.

Iteration:
• Every programming language supports looping constructs for iteration.
• No context switch is required.
• No extra memory per iteration is required.
Towers of Hanoi - problem
• Move n disks from pole A to pole B using
auxiliary pole C.
• Rules for moving the disks:
1. Only one disk may be moved at a time.
2. A higher-numbered (larger) disk may never be placed on top of a
lower-numbered (smaller) disk.
Poles are named A, B, and C.
Disks are numbered 1, 2, 3, ... (smallest disk = 1,
and so on).
Initial Configuration
Solution
Idea to solve TowersOfHanoi problem
• Move n-1 disks from source pole A to auxiliary pole C, to make it possible to move the last disk
from source pole A to destination pole B.
• Now move disk n (i.e., 5 in this case) from source pole A to destination pole B. The largest
disk is now in place on the destination pole B.
• Now move the remaining n-1 disks from auxiliary pole C to destination pole B.
(Diagrams: initially disks 1-5 sit on pole A; intermediate states show disks 1-4
moved to pole C, disk 5 moved from A to B, and finally disks 1-4 moved from C to B.)
Let us Assume src=‘A’, dest=‘B’ and aux=‘C’
1. Algorithm TowersOfHanoi(n, src, dest, aux)
2. // Move n disks from src Pole to dest Pole
3. {
4. if (n>=1) then
5. {
6. TowersOfHanoi(n-1,src,aux,dest);
7. write(" move top disk from pole",src,"to top of pole",dest);
8. TowersOfHanoi(n-1,aux,dest,src);
9. }
10. }
Recursive solution for Towers of Hanoi
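A C rendering of the TowersOfHanoi pseudocode above (our sketch; the move counter is an addition, and for n disks it comes out to 2ⁿ - 1 moves):

```c
#include <stdio.h>
#include <assert.h>

/* Moves n disks from src to dest via aux, printing each move;
   returns the total number of moves performed (2^n - 1 for n disks). */
long hanoi(int n, char src, char dest, char aux) {
    if (n < 1)
        return 0;
    long moves = hanoi(n - 1, src, aux, dest);  /* move n-1 disks to aux  */
    printf("move top disk from pole %c to top of pole %c\n", src, dest);
    moves += 1;                                 /* move disk n to dest    */
    moves += hanoi(n - 1, aux, dest, src);      /* move n-1 disks to dest */
    return moves;
}
```

Calling hanoi(3, 'A', 'B', 'C') prints exactly the seven moves traced on the next slides.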
Trace for n = 3 (disks 1, 2, 3 start on pole A; destination B, auxiliary C):
Step 1: Move disk 1 from A to B
Step 2: Move disk 2 from A to C
Step 3: Move disk 1 from B to C
Step 4: Move disk 3 from A to B
Step 5: Move disk 1 from C to A
Step 6: Move disk 2 from C to B
Step 7: Move disk 1 from A to B
Searching
• To search an element in a list of elements. Some of the searching
techniques are
Linear search
Binary search
Linear search
Algorithm LinearSearch (a,n,ele)
// a is an array of size n
// ele is the element to search
{
for i=1 to n do {
if(a[i]==ele)
return i;
}
return -1;
}
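The same algorithm in C (a sketch of ours, using 0-based indexing rather than the pseudocode's 1-based indexing):

```c
#include <assert.h>

/* Linear search over a[0..n-1]; returns the index of ele, or -1 if absent. */
int linear_search(const int a[], int n, int ele) {
    for (int i = 0; i < n; i++)
        if (a[i] == ele)    /* compare each element with ele in turn */
            return i;
    return -1;              /* ele not found anywhere in the array   */
}
```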
Element to search: ele = 15
Index: 1   2   3   4   5   6   7   8   9   10
Value: 8   25  -4  9   10  15  21  19  50  31

Is a[1]==ele?  8==15?   No
Is a[2]==ele?  25==15?  No
Is a[3]==ele?  -4==15?  No
Is a[4]==ele?  9==15?   No
Is a[5]==ele?  10==15?  No
Is a[6]==ele?  15==15?  Yes → return 6
Binary Search
Algorithm BinarySearch(a,low,high,ele)
// a is an array, low is the lower index of the array
// high is the upper index of the array, ele is the element to
// search
{
    if low > high then
        return -1;
    else {
        mid = (low+high)/2;
        if ele == a[mid] then
            return mid;
        else if ele < a[mid] then
            return BinarySearch(a,low,mid-1,ele);
        else
            return BinarySearch(a,mid+1,high,ele);
    }
}
• The input list must be in sorted order.
• Every iteration reduces the list to half its size:
  iteration 1: size of list = n
  iteration 2: size of list = n/2
  iteration 3: size of list = n/4
  ...
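An iterative counterpart of the recursive BinarySearch above (our sketch, 0-based; the array must be sorted in ascending order):

```c
#include <assert.h>

/* Iterative binary search over sorted a[0..n-1]; returns the index of ele,
   or -1 if it is absent. */
int binary_search(const int a[], int n, int ele) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* avoids overflow of low+high */
        if (a[mid] == ele)
            return mid;
        else if (ele < a[mid])
            high = mid - 1;   /* search the left half  */
        else
            low = mid + 1;    /* search the right half */
    }
    return -1;
}
```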
Element to search: ele = 62
Index: 1   2   3   4   5   6   7   8   9   10
Value: 11  18  25  29  38  41  53  62  70  82

low=1, high=10
mid = (low+high)/2 = (1+10)/2 = 5
Is ele==a[5]? 62==38? No
Is ele>a[5]? 62>38? Yes
low = mid+1 = 6, high = 10
mid = (6+10)/2 = 8
Is ele==a[8]? 62==62? Yes → return mid
Element to search: ele = 69 (unsuccessful search)
Index: 1   2   3   4   5   6   7   8   9   10
Value: 11  18  25  29  38  41  53  62  70  82

low=1, high=10; Is low<=high? Yes
mid = (1+10)/2 = 5
Is ele==a[5]? 69==38? No
Is ele>a[5]? 69>38? Yes
low = mid+1 = 6, high = 10; Is low<=high? Yes
mid = (6+10)/2 = 8
Is ele==a[8]? 69==62? No
Is ele>a[8]? 69>62? Yes
low = mid+1 = 9, high = 10; Is low<=high? Yes
mid = (9+10)/2 = 9
Is ele==a[9]? 69==70? No
Is ele<a[9]? 69<70? Yes
low = 9, high = mid-1 = 8; Is low<=high? No → return -1
Element to search: ele = 53
Index: 1   2   3   4   5   6   7   8   9   10
Value: 11  18  25  29  38  41  53  62  70  82

low=1, high=10; Is low<=high? Yes
mid = (1+10)/2 = 5
Is ele==a[5]? 53==38? No
Is ele>a[5]? 53>38? Yes
low = mid+1 = 6, high = 10; Is low<=high? Yes
mid = (6+10)/2 = 8
Is ele==a[8]? 53==62? No
Is ele<a[8]? 53<62? Yes
low = 6, high = mid-1 = 7; Is low<=high? Yes
mid = (6+7)/2 = 6
Is ele==a[6]? 53==41? No
Is ele>a[6]? 53>41? Yes
low = mid+1 = 7, high = 7; Is low<=high? Yes
mid = (7+7)/2 = 7
Is ele==a[7]? 53==53? Yes → return mid
Sorting
• Arranging the data in ascending or descending order. The
output is a permutation or reordering of the input.
• Various sorting algorithms are
 Bubble sort
 Selection sort
 Insertion sort
 Quick Sort
 Merge Sort
 Heap Sort
Classification of Sorting Algorithms
• internal sort(RAM) or external sort(Secondary Storage)
• number of comparisons
• number of swaps
• memory usage
• recursive or non-recursive
• stability (equivalent elements retain their relative
positions even after sorting)
• adaptability (complexity changes based on the presortedness of the input)
Bubble sort
• It is a comparison-based sorting algorithm.
• It needs n-1 passes. (A pass means visiting all elements once.)
• In every pass, it compares adjacent elements and interchanges them if
a[j] > a[j+1] (where j indexes the left element of the pair).
• After every pass, one more element is placed in its correct position.
• Time complexity is O(n²).
Bubble Sort algorithm
Algorithm Bubble_sort(a,n)
// a is an array of size n
{
int i,j;
for i:=1 to n-1 do // number of passes
{
for j:=0 to (n-1)-i do
{
if(a[j]>a[j+1]) // comparing adjacent elements
swap(a[j],a[j+1]); // Interchange a[j] and a[j+1]
}
}
print_array(a,n);
}
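The pseudocode above translates directly to C (our sketch, 0-based array as in the pseudocode's inner loop):

```c
#include <assert.h>

/* Bubble sort: sorts a[0..n-1] in ascending order. */
void bubble_sort(int a[], int n) {
    for (int i = 1; i <= n - 1; i++) {           /* n-1 passes              */
        for (int j = 0; j <= (n - 1) - i; j++) { /* compare adjacent pairs  */
            if (a[j] > a[j + 1]) {               /* out of order: swap      */
                int t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
        }
    }
}
```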
Bubble sort - working process (array indices 0..5)

Pass-1:
Start:             20  -8  29  12   9  11
20 vs -8 → swap:   -8  20  29  12   9  11
20 vs 29 → no swap
29 vs 12 → swap:   -8  20  12  29   9  11
29 vs  9 → swap:   -8  20  12   9  29  11
29 vs 11 → swap:   -8  20  12   9  11  29
No. of comparisons = 5, no. of swaps = 4.
After the first pass, the largest element 29 is in its correct position.

Pass-2 (only the first n-1 = 5 elements are considered):
Start:             -8  20  12   9  11  29
-8 vs 20 → no swap
20 vs 12 → swap:   -8  12  20   9  11  29
20 vs  9 → swap:   -8  12   9  20  11  29
20 vs 11 → swap:   -8  12   9  11  20  29
No. of comparisons = 4, no. of swaps = 3.
After the second pass, the second largest element 20 is in its correct position.

Pass-3 (only the first 4 elements are considered):
Start:             -8  12   9  11  20  29
-8 vs 12 → no swap
12 vs  9 → swap:   -8   9  12  11  20  29
12 vs 11 → swap:   -8   9  11  12  20  29
No. of comparisons = 3, no. of swaps = 2.
After the third pass, the third largest element 12 is in its correct position.

Pass-4 (only the first 3 elements are considered):
Start:             -8   9  11  12  20  29
-8 vs  9 → no swap
 9 vs 11 → no swap
No. of comparisons = 2, no. of swaps = 0.
After the fourth pass, 11 is in its correct position.

Pass-5 (only the first 2 elements are considered):
Start:             -8   9  11  12  20  29
-8 vs  9 → no swap
No. of comparisons = 1, no. of swaps = 0.
After the fifth pass, 9 is in its correct position and the array is sorted.
Selection sort
• It is a comparison-based sorting algorithm.
• It needs n-1 passes.
• In every pass, it selects the lowest element of the unsorted part and places
it in its correct position.
• After every pass, one more element is placed in its correct position.
• Time complexity is O(n²).
Selection Sort algorithm
Algorithm selection_sort(a,n)
// a is an array of size n a[1..n]
{
int i,j;
for i:=1 to n-1 do // number of passes
{
minIndex=i;
minVal=a[i];
for j:=i+1 to n do // find minimum element index of the list
{
if a[j]<minVal then
{
minVal=a[j];
minIndex=j;
}
}
swap(a[i],a[minIndex]); // Interchange a[i] and a[minIndex]
}
print_array(a,n);
}
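The same algorithm in C (our sketch, 0-based; the explicit minVal variable of the pseudocode is folded into a[minIndex]):

```c
#include <assert.h>

/* Selection sort: sorts a[0..n-1] in ascending order. */
void selection_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {        /* n-1 passes                  */
        int minIndex = i;
        for (int j = i + 1; j < n; j++)      /* find the minimum's index    */
            if (a[j] < a[minIndex])
                minIndex = j;
        int t = a[i];                        /* exactly one swap per pass   */
        a[i] = a[minIndex];
        a[minIndex] = t;
    }
}
```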
Selection sort - working process (array indices 0..5)

Pass-1:
Start: 20  -8  29  12   9  11          minIndex=0, minVal=20
Is -8 < 20? Yes → minIndex=1, minVal=-8
Is 29 < -8? No
Is 12 < -8? No
Is  9 < -8? No
Is 11 < -8? No
Swap a[0] and a[minIndex=1]:  -8  20  29  12   9  11
After the first pass, the lowest element -8 is in its correct position.

Pass-2:
Start: -8  20  29  12   9  11          minIndex=1, minVal=20
Is 29 < 20? No
Is 12 < 20? Yes → minIndex=3, minVal=12
Is  9 < 12? Yes → minIndex=4, minVal=9
Is 11 <  9? No
Swap a[1] and a[minIndex=4]:  -8   9  29  12  20  11
After the second pass, the second lowest element 9 is in its correct position.

Pass-3:
Start: -8   9  29  12  20  11          minIndex=2, minVal=29
Is 12 < 29? Yes → minIndex=3, minVal=12
Is 20 < 12? No
Is 11 < 12? Yes → minIndex=5, minVal=11
Swap a[2] and a[minIndex=5]:  -8   9  11  12  20  29
After the third pass, the third lowest element 11 is in its correct position.

Pass-4:
Start: -8   9  11  12  20  29          minIndex=3, minVal=12
Is 20 < 12? No
Is 29 < 12? No
Swap a[3] and a[minIndex=3] (no change):  -8   9  11  12  20  29
After the fourth pass, 12 is in its correct position.

Pass-5:
Start: -8   9  11  12  20  29          minIndex=4, minVal=20
Is 29 < 20? No
Swap a[4] and a[minIndex=4] (no change):  -8   9  11  12  20  29
After the fifth pass, 20 is in its correct position and the array is sorted.
Insertion sort
• It partitions(logically) the list into two lists :
Sorted and Unsorted .
• For every pass, it selects the first element in
unsorted list and places it in correct place in the
sorted list.
• Time complexity
Best case – O(n) ( input list in sorted order)
Worst case - O(n2).
Algorithm Insertion_sort(a,n)
// a is an array of size n
{
for i=1 to n-1 do
{
key=a[i];
j=i-1;
while(j>=0 && a[j]>key)
{
a[j+1]=a[j];
j=j-1;
}
a[j+1]=key;
}
}
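The pseudocode above is already C-like; a self-contained version (our sketch, with the shift-and-insert steps commented):

```c
#include <assert.h>

/* Insertion sort: sorts a[0..n-1] in ascending order. */
void insertion_sort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        int key = a[i];                 /* first element of the unsorted part */
        int j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];            /* shift larger elements right        */
            j--;
        }
        a[j + 1] = key;                 /* insert key at its correct position */
    }
}
```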
Insertion sort - working process (sorted | unsorted)

Pass-1: key = a[1] = -8
20 | -8  29  12   9  11
20 > -8 → shift 20 right, insert -8:
-8  20 | 29  12   9  11

Pass-2: key = a[2] = 29
-8  20 | 29  12   9  11
20 > 29? No → 29 stays where it is; stop the pass:
-8  20  29 | 12   9  11

Pass-3: key = a[3] = 12
-8  20  29 | 12   9  11
29 > 12 → shift; 20 > 12 → shift; -8 > 12? No → insert 12:
-8  12  20  29 |  9  11

Pass-4: key = a[4] = 9
-8  12  20  29 |  9  11
29 > 9 → shift; 20 > 9 → shift; 12 > 9 → shift; -8 > 9? No → insert 9:
-8   9  12  20  29 | 11

Pass-5: key = a[5] = 11
-8   9  12  20  29 | 11
29 > 11 → shift; 20 > 11 → shift; 12 > 11 → shift; 9 > 11? No → insert 11:
-8   9  11  12  20  29   (sorted)
Best case - already sorted input: 20  30  42  58  68  71
Pass-1: key = 30; 20 > 30? No → stop the pass
Pass-2: key = 42; 30 > 42? No → stop the pass
Pass-3: key = 58; 42 > 58? No → stop the pass
Pass-4: key = 68; 58 > 68? No → stop the pass
Pass-5: key = 71; 68 > 71? No → stop the pass
Each pass does a single comparison and no shifts, so the total work is O(n).
Advantage of insertion sort: suitable for sorted / partially sorted lists.
Pass-1 in detail:
Initial configuration:  20 | -8  29  12   9  11   (sorted | unsorted)
• Pick the first element in the unsorted list. Let key = -8.
• Compare the key with the sorted list, right to left, to find its location:
  Is 20 > -8? Yes → copy 20 one position to its right:  20  20  29  12   9  11
  The start of the list is reached, so copy the key into the position where
  the scan stopped:  -8  20 | 29  12   9  11
Repeat this process until a comparison answers No or the sorted list is exhausted.

Pass-2 in detail:
Input to pass-2:  -8  20 | 29  12   9  11
• Pick the first element in the unsorted list. Let key = 29.
• Is 20 > 29? No → copy the key into the position where the scan stopped:
  -8  20  29 | 12   9  11
Algorithm Insertion_sort(a,n)
// a is an array of size n
{
for i=1 to n-1 do
{
key=a[i];
j=i-1;
while(j>=0 && a[j]>key)
{
a[j+1]=a[j];
j=j-1;
}
a[j+1]=key;
}
}
key - the new element to insert into the sorted part.
The while loop moves the elements greater than key one place to the right, to
make room for the new element; a[j+1] = key then inserts the key at its
correct position.
Advantages
• Simple implementation
• Efficient for small data
• Adaptive: If the input list is presorted (perhaps not completely), then
insertion sort takes O(n + d) time, where d is the number of inversions
• Practically more efficient than selection and bubble sorts, even
though all of them have O(n²) worst-case complexity
• Stable: Maintains relative order of input data if the keys are same
• In-place: It requires only a constant amount O(1) of additional
memory space
• Online: Insertion sort can sort the list as it receives it
Comparisons to Other Sorting Algorithms
• Bubble sort takes n²/2 comparisons and n²/2 swaps (inversions) in
both the average case and the worst case.
• Selection sort takes n²/2 comparisons and n swaps.
• Insertion sort takes n²/4 comparisons and n²/8 swaps in the average
case, and about double that in the worst case.
• Insertion sort is almost linear for partially sorted input.
• Selection sort is best suited for elements with big values and small
keys (since it minimizes the number of swaps).
Quick sort
It is also called partition exchange sort.
Quick sort uses divide-and-conquer algorithmic technique.
Divide: The array A[low..high] is partitioned into two non-empty subarrays A[low..q] and
A[q+1..high], such that each element of A[low..q] is less than or equal to each element of
A[q+1..high]. The index q is computed as part of this partitioning procedure.
Conquer: The two subarrays A[low..q] and A[q+1..high] are sorted by recursive calls to
quick sort.
Combine: No work is needed for combining, because the subarrays are sorted in place.
The recursive algorithm consists of four steps:
1) If there are one or no elements in the array to be sorted, return.
2) Pick an element in the array to serve as the “pivot” point. (Usually, the left-most
element in the array is used.)
3) Split the array into two parts – one with elements larger than the pivot and the other
with elements smaller than the pivot.
4) Recursively repeat the algorithm for both halves of the original array.
Algorithm for Partitioning the array
• step1: Select the first element as a pivot in the list
• step2: Start a pointer (the left pointer) at the second item in the list
• step3: Start a pointer (the right pointer) at the last item in the list
• step4: While the value at the left pointer in the list is less than the
pivot value, move the left pointer to the right(add 1). Continue this
process until the value at the left pointer is greater than or equal to
the pivot value.
• step5: While the value at the right pointer in the list is greater than
the pivot value, move the right pointer to the left(subtract 1).
Continue this process until the value at the right pointer is less than
or equal to the pivot value.
• step6: If the left pointer position is less than the right pointer position,
swap the values at these two locations in the list.
• step7: If the left pointer and right pointer haven't crossed, go to step 4;
otherwise swap the pivot with the value at the right pointer.
Algorithm QuickSort(a,p,q)
// sorts the elements a[p],….,a[q] which
// reside in the global array a[1:n]
// into ascending order
{
if ( p < q ) then // If there are more than one element
{
// divide P into two subproblems.
j:= partition(a,p,q);
QuickSort(a,p,j-1);
QuickSort(a,j+1,q);
// There is no need for combining Solutions.
}
}
Quick sort Algorithm
Algorithm Partition(a,low,high)
{
    pivot = a[low];
    i = low;
    j = high;
    while (i < j)
    {
        while (a[i] <= pivot)
            i++;
        while (a[j] > pivot)
            j--;
        if (i < j) then
            swap(a[i], a[j]);
    }
    a[low] = a[j];   // move the boundary element into the pivot's slot
    a[j] = pivot;    // the pivot lands at its final position j
    return j;
}
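A compact C rendering of the two algorithms above (our sketch; it keeps the first-element pivot and adds an explicit bound check on i, which the pseudocode leaves implicit):

```c
#include <assert.h>

static void swap_int(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Partition a[low..high] around pivot a[low]; returns the pivot's final index. */
static int partition(int a[], int low, int high) {
    int pivot = a[low];
    int i = low, j = high;
    while (i < j) {
        while (i <= high && a[i] <= pivot)  /* guard keeps i inside the range */
            i++;
        while (a[j] > pivot)                /* a[low] == pivot stops this scan */
            j--;
        if (i < j)
            swap_int(&a[i], &a[j]);
    }
    a[low] = a[j];   /* move the boundary element into the pivot's slot */
    a[j] = pivot;    /* pivot lands at its final position j             */
    return j;
}

/* Sorts a[p..q] in ascending order. */
void quick_sort(int a[], int p, int q) {
    if (p < q) {
        int j = partition(a, p, q);
        quick_sort(a, p, j - 1);
        quick_sort(a, j + 1, q);
    }
}
```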
Quicksort - example
Input: 10  5  8  21  -7  12  24  9  17   (pivot = 10)
Partition around 10 (<= pivot on the left, > pivot on the right):
    [5  8  -7  9]  10  [21  12  24  17]
Recursively sort each side (pivots 5 and 21):
    [-7]  5  [8  9]        [12  17]  21  [24]
Result: -7  5  8  9  10  12  17  21  24
Partition trace (pivot = 10, indices 0..9):
10   8  -5  20   7  14  31  19  -3  11
i scans right while a[i] <= pivot: Is 8<10? Yes; Is -5<10? Yes; Is 20<10? No → i=3
j scans left while a[j] > pivot:   Is 11>10? Yes; Is -3>10? No → j=8
Is i < j? Yes → swap a[3] and a[8]:
10   8  -5  -3   7  14  31  19  20  11
i continues: Is -3<10? Yes; Is 7<10? Yes; Is 14<10? No → i=5
j continues: Is 20>10? Yes; Is 19>10? Yes; Is 31>10? Yes; Is 14>10? Yes; Is 7>10? No → j=4
Is i < j? No → no swap; stop and swap the pivot with a[j=4]:
 7   8  -5  -3  10  14  31  19  20  11
Second partition trace (pivot = 23, indices 0..9):
23  18  51  20  17  42  31  19  -3  36
i stops at 51 (i=2), j stops at -3 (j=8); i<j → swap a[2] and a[8]:
23  18  -3  20  17  42  31  19  51  36
i stops at 42 (i=5), j stops at 19 (j=7); i<j → swap a[5] and a[7]:
23  18  -3  20  17  19  31  42  51  36
i stops at 31 (i=6), j stops at 19 (j=5); i>j → stop the pass and
swap the pivot with a[j=5]:
19  18  -3  20  17  23  31  42  51  36
Time Complexity
Recurrence Equation for Divide-and-Conquer Algorithms
T(n) = c                    if n = 1, where c is a constant
     = a*T(n/b) + f(n)      if n > 1
f(n) is the time for dividing P and combining the solutions to the subproblems.
Time complexity of Quick sort
Recurrence equation for quick sort (best case):
Let us assume that the partition divides the list into two (approximately equal) halves.
T(n) = 1                          if n <= 1
     = T(n/2) + T(n/2) + c*n      if n > 1
T(n/2) - time for solving the first partition
T(n/2) - time for solving the second partition
c*n    - time for partitioning the list
Recurrence equation for quick sort (worst case):
Assume the list is already sorted, so the partition divides n elements into 0 and n-1.
T(n) = 1                      if n <= 1
     = T(0) + T(n-1) + n      if n > 1
T(0)   - time for solving the first (empty) partition
T(n-1) - time for solving the second partition
n      - time for partitioning the list
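Both recurrences above can be solved by repeated expansion; a sketch (c is the partitioning constant from the best-case equation):

```latex
% Best case: T(n) = 2T(n/2) + cn
T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = \dots
     = 2^k T\!\left(n/2^k\right) + k\,cn
% taking n = 2^k, i.e. k = \log_2 n:
T(n) = n\,T(1) + cn\log_2 n = O(n \log n)

% Worst case: T(n) = T(n-1) + n
T(n) = T(n-1) + n
     = T(n-2) + (n-1) + n
     = \dots
     = T(0) + 1 + 2 + \dots + n
     = \frac{n(n+1)}{2} + O(1) = O(n^2)
```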
Quicksort - worst case example
Input (already sorted): 14  18  28  39  51  59  72  90
Pivot = 14:  14 | 18  28  39  51  59  72  90   (left part empty)
Pivot = 18:  18 | 28  39  51  59  72  90
Pivot = 28:  28 | 39  51  59  72  90
...
Each partition produces an empty left part (<= pivot) and a right part (> pivot)
of size n-1, so the recursion depth is n and the total work is
n + (n-1) + ... + 1 = O(n²).
Benefits:
• It works rapidly and effectively in practice.
• Its average-case time complexity of O(n log n), with small constant factors,
makes it one of the fastest general-purpose comparison sorts.
• Quick sort has a space complexity of O(log n) for the recursion stack, making
it a good choice for situations where space is limited.
Limitations:
• This sorting technique is considered unstable, since it does not
maintain the initial relative order of equal keys.
• When the pivot element is the largest or smallest, or when all of the
elements have the same value, the worst case occurs. The performance of
quicksort is significantly impacted by these worst-case scenarios.
• It is a recursive process, so it is harder to implement where
recursion is unavailable.
Applications:
• Quicksort is a cache-friendly algorithm, as it has good locality of reference[1]
when used on arrays.
• The second recursive call is in tail position, so tail-call optimization can be applied.
• It is an in-place sort that does not require extra storage memory (beyond the
recursion stack).
• It is used in operations research and event-driven simulation.
• In numerical computation and scientific research, many efficiently developed
algorithms rely on priority queues and on quick sort for sorting.
• Variants of quicksort (e.g., quickselect) are used to find the k-th smallest or
largest element.
• It is used in several standard libraries to sort arrays of primitive types.
• If data is sorted, then searching for information becomes easy and efficient.
Note: [1] locality of reference is the tendency of a program to access the same
set of memory locations over a particular period of time.
Performance
Worst Case complexity O(n2)
Best Case complexity O(nlogn)
Average Case complexity O(nlogn)
Worst Case space complexity O(log n) (auxiliary, for the recursion stack)
Merge-sort
Merge sort uses divide-and-conquer algorithmic technique.
1.Divide the array into 2 parts of lengths n/2 and n - n/2 respectively (here if
n is odd, we round off the value of n/2). Let us call these arrays as left half and
right half, respectively.
2.Recursively sort the left half array and the right half array.
3.Merge the left half array and right half-array to get the full array sorted.
Merge-sort Algorithm
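The slide title above announces the algorithm but only the trace follows; a minimal C sketch of ours (the fixed 64-element scratch buffer is a simplification; a real implementation would allocate a buffer of size n):

```c
#include <assert.h>
#include <string.h>

/* Merge the sorted runs a[low..mid] and a[mid+1..high]. */
static void merge(int a[], int low, int mid, int high) {
    int tmp[64];                      /* scratch buffer; assumes <= 64 items */
    int i = low, j = mid + 1, k = 0;
    while (i <= mid && j <= high)     /* take the smaller head each time;    */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  /* <= keeps it stable  */
    while (i <= mid)  tmp[k++] = a[i++];   /* copy any leftover left items   */
    while (j <= high) tmp[k++] = a[j++];   /* copy any leftover right items  */
    memcpy(&a[low], tmp, (size_t)k * sizeof(int));
}

/* Merge sort: sorts a[low..high] in ascending order. */
void merge_sort(int a[], int low, int high) {
    if (low >= high)                  /* 0 or 1 element: already sorted */
        return;
    int mid = (low + high) / 2;
    merge_sort(a, low, mid);          /* sort the left half  */
    merge_sort(a, mid + 1, high);     /* sort the right half */
    merge(a, low, mid, high);         /* merge the two halves */
}
```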
Merge sort trace for: 23  18  51  20  17  42  31  19  -3  36  (indices 0..9)

Split: mid = (0+9)/2 = 4 → left [23 18 51 20 17], right [42 31 19 -3 36]
Left half: mid = (0+4)/2 = 2 → [23 18 51], [20 17]
  [23 18 51]: mid = (0+2)/2 = 1 → [23 18], [51]
    [23 18] → [23], [18] → merge → [18 23]
    merge [18 23] and [51] → [18 23 51]
  [20 17]: mid = (3+4)/2 = 3 → [20], [17] → merge → [17 20]
  merge [18 23 51] and [17 20] → [17 18 20 23 51]
Right half: mid = (5+9)/2 = 7 → [42 31 19], [-3 36]
  [42 31 19]: mid = (5+7)/2 = 6 → [42 31], [19]
    [42 31]: mid = (5+6)/2 = 5 → [42], [31] → merge → [31 42]
    merge [31 42] and [19] → [19 31 42]
  [-3 36]: mid = (8+9)/2 = 8 → [-3], [36] → merge → [-3 36]
  merge [19 31 42] and [-3 36] → [-3 19 31 36 42]
Final merge: [17 18 20 23 51] and [-3 19 31 36 42]
  → -3  17  18  19  20  23  31  36  42  51
Time complexity of Merge sort
Recurrence equation for merge sort (the same in the best and worst case,
since the split is always into two halves):
T(n) = 0                          if n = 1
     = T(n/2) + T(n/2) + c*n      if n > 1
T(n/2) - time for sorting the first half
T(n/2) - time for sorting the second half
c*n    - time for merging the two halves
Advantages of Merge Sort:
• Stability: Merge sort is a stable sorting algorithm, which means it
maintains the relative order of equal elements in the input array.
• Guaranteed worst-case performance: Merge sort has a worst-case
time complexity of O(n log n), which means it performs well even on
large datasets.
• Parallelizable: Merge sort is a naturally parallelizable algorithm,
which means it can be easily parallelized to take advantage of
multiple processors or threads.
Drawbacks of Merge Sort:
• Space complexity: Merge sort requires additional memory to store
the merged sub-arrays during the sorting process.
• Not in-place: Merge sort is not an in-place sorting algorithm, which
means it requires additional memory to store the sorted data. This
can be a disadvantage in applications where memory usage is a
concern.
• Not always optimal for small datasets: For small datasets, merge sort
has higher overhead than some simpler sorting algorithms,
such as insertion sort. This can result in slower performance for very
small inputs.
Applications of Merge Sort:
• Sorting large datasets: Merge sort is particularly well-suited for sorting
large datasets due to its guaranteed worst-case time complexity of O(n log
n).
• External sorting: Merge sort is commonly used in external sorting, where
the data to be sorted is too large to fit into memory.
• Custom sorting: Merge sort can be adapted to handle different input
distributions, such as partially sorted, nearly sorted, or completely
unsorted data.
• Inversion Count Problem: Inversion Count for an array indicates – how far
(or close) the array is from being sorted. If the array is already sorted, then
the inversion count is 0, but if the array is sorted in reverse order, the
inversion count is the maximum.
Sorting - comparison (Bubble sort vs Selection sort vs Insertion sort)

Number of passes/iterations:
• Bubble: n-1   • Selection: n-1   • Insertion: n-1

Number of comparisons:
• Bubble: n-1 in pass 1, n-2 in pass 2, ...; at most (n-1)+(n-2)+(n-3)+...+1 in total
• Selection: n-1 in pass 1, n-2 in pass 2, ...; at most (n-1)+(n-2)+(n-3)+...+1 in total
• Insertion: depends on the elements; at most n-i comparisons in pass i, where 1<=i<=n-1

Number of swaps/interchanges:
• Bubble: at most n-1 in pass 1, at most n-2 in pass 2, ...; at most (n-1)+(n-2)+(n-3)+...+1 in total
• Selection: exactly one swap per pass; total swaps = n-1
• Insertion: depends on the elements; at most n-i moves in pass i, where 1<=i<=n-1

Time complexity:
• Bubble: best case O(n²), worst case O(n²)
• Selection: best case O(n²), worst case O(n²)
• Insertion: best case O(n), worst case O(n²)

Space complexity: O(n) for all three (the input array itself; only O(1) extra space is used).
Sorting - comparison (Quick sort vs Merge sort)

Uses divide-and-conquer strategy: Quick sort - Yes; Merge sort - Yes
Is in-place sorting? Quick sort - Yes; Merge sort - No
Time complexity:
• Quick sort: best case O(n log n), worst case O(n²)
• Merge sort: best case O(n log n), worst case O(n log n)
Is stable? Quick sort - No; Merge sort - Yes
• A sorting algorithm is said to be stable if two objects with equal
keys appear in the sorted output in the same relative order as in the
input data set.
• Selection sort and quick sort are not stable.
• Bubble sort, insertion sort, and merge sort are stable.
  • 6. A Priori Analysis • In a priori analysis we obtain a function (of some relevant parameters) which bounds the algorithm's computing time. • Uses frequency count or step count method to find the order of magnitude of the algorithm in a function of input. Frequency count (or) Step count:- It denotes the number of times a statement to be executed. Generally, count value will be given depending upon corresponding statements. • For comments, declarations the frequency count is zero (0). • For Assignments, return statements the frequency count is 1. • Ignore lower order exponents when higher order exponents are present. • Ignore constant multipliers.
  • 7. Example 2n+3 can be written as O(n)
  • 8. Example -2 Statement s/e Frequency Total steps Algorithm matrix_add(int a[][],int b[][]) { int i , j; int c[n][n]; for( i= 0 ; i < n ; i++) { for ( j =0; j < n ; j++ ) { c[i][j] = a[i][j]+b[i][j]; } }
  • 9. Example -2 Statement s/e Frequency Total steps Algorithm matrix_add(int a[][],int b[][]) { int i , j; int c[n][n]; for( i= 0 ; i < n ; i++) { for ( j =0; j < n ; j++ ) { c[i][j] = a[i][j]+b[i][j]; } } 0 0 0 0 - 0 0 - 0 1 n+1 n+1 1 n*(n+1) n*(n+1) 1 n*n n*n 0 - 0 0 - 0 2n2+2n+1 2n2+2n+1 can be written as O(n2)
  • 10. Calculate the total number of steps using frequency method
  • 11.
  • 12. Asymptotic Notation • A convenient notation to represent the a priori Analysis is Asymptotic notation. • There are 5 types of Asymptotic notation Big Oh notation (O) Big Omega notation (Ω) Theta notation (θ) Little Oh notation (o) Little omega notation (ω)
  • 13. 13 Asymptotic notations (cont..) The function f(n) = 0(g(n)) (read as “f of n is Big oh of g of n” ) iff (if and only if) there exist positive constants c and no Such that f(n) < =c* g(n) for all n, n >= no Big-Oh (O) notation
  • 14. Example Let 3n2+2n <= 4n2 and f(n) = 3n2+2n and c*g(n)= 4*n2 find n0 Let n0 = 2 then 3 * (4) + 2* 2 <= 4*4 => 16 <=16 n0 = 3 then 3 * (9) + 2* 3 <= 4* 9 => 32 <=36 i.e., f(n) <= c*g(n) for all n0 >=2 . Therefore3n2+2n = O(n2) • n4 + 100n2 + 10n + 50 is O(n4) • 10n3 + 2n2 is O(n3) • n3 - n2 is O(n3) • constants  10 is O(1)  1273 is O(1) f(n)= 3n2+2n can be written as O(n2)
  • 16. 16 Asymptotic notations (cont..) The function f(n) = (g(n)) (read as “f of n is omega of g of n” ) iff (if and only if) there exist positive constants c and no Such that 0 < =c* g(n) <=f(n)for all n, n >= no Big-Omega () notation
  • 17. 17 Examples • 5n2 = (n) • 100n + 5 ≠ (n2) • n = (2n), n3 = (n2), n = (logn)  c, n0 such that: 0  cn  5n2  cn  5n2  c = 1 and n0 = 1  c, n0 such that: 0  cn2  100n + 5 100n + 5  100n + 5n ( n  1) = 105n cn2  105n  n(cn – 105)  0 Since n is positive  cn – 105  0  n  105/c  contradiction: n cannot be smaller than a constant
  • 18. O(1) < O(log n) < O(n) < O(n log n) < O(n2) < O(n3) < O(2n) < O(n!) < O(nn) Different types of time complexities Constant O(1) Logarithm O(log n) Linear O(n) Linear Logarithm O(n *Log n) Quadratic O(n2) Cubic O(n3) Exponential O(2n) Factorial O(n!) Exponential O(nn) Order of growth of the functions
  • 19. 19 Examples • 2n2 = O(n3): • n2 = O(n2): • 1000n2+1000n = O(n2): • n = O(n2): 2n2 ≤ cn3  2 ≤ cn  c = 1 and n0= 2 n2 ≤ cn2  c ≥ 1  c = 1 and n0= 1 1000n2+1000n ≤ 1000n2+ n2 =1001n2 c=1001 and n0 = 1000 n ≤ cn2  cn ≥ 1  c = 1 and n0= 1
  • 20. Examples void function(int n) { int i=1,s=1; while (s<=n) { i++; s=s+i; printf("*"); } } What is the time complexity of the following code? Example 1: void function(int n) { int i , count=0; for(i=1;i*i<=n; i++) count++; } Example 2: void function(int n){ for(int i=1;i<=n;i++) { for(int j=1;j<=n;j+=i) printf("*"); } } Example 3:
  • 21. void function(int n) { int i, j, k, count=0; for(i=n/2;i<=n; i++) { for(j=1;j+n/2 <=n; j++) { for(k=1;k<=n; k=k*2) { count++; } } } } Example 4: void function(int n) { int i, j, k, count=0; for(i=n/2;i<=n; i++) { for(j=1;j <=n; j=j*2) { for(k=1;k<=n; k=k*2) { count++; } } } } Example 5:
  • 22. Space complexity • The space complexity of an algorithm is the amount of memory it needs to run to completion. The space requirement S(P) of any algorithm P is calculated as S(P) = c + Sp (instance characteristics) ,where c is constant. Fixed part 1. Instruction space( space for code) 2. Independent of the characteristics (e.g., number, size) of the inputs and outputs. 3. Space for simple variables ,fixed-size component variables ,constants and so on. Variable part 1 .Component variables whose size is dependent on the particular problem instance being solved. 2. Referenced variables ( depends on instance characteristics) 3. Recursion stack space
  • 23. Examples 1. Algorithm calculate(a,b,c) 2. { 3. return a+b+b*c+(a+b-c)/(a+b)+4.0; 4. } One word is enough to store a,b,c and result. Sp(Instance characteristics )=0 1. Algorithm Sum(a, n) 2. { 3. s:=0.0; 4. for i := 1 to n do 5. s := s+a[i]; 6. return s; 7 } For array a of size n = n words For n,i,and s we need 3 words Therefore, Sp(Instance characteristics )= (n+3) 1. Algorithm Rsum(a,n) 2. { 3. if (n <= 0) then return 0.0; 4. else return RSum(a,n-1)+a[n]; 5.} Two words for formal parameter a and n (One word for each of the formal parameters) and one word for return address. So, total 3 words for each call. The recursion depth is n+1 . Therefore, Sp(Instance characteristics )= 3 * (n+1)
  • 24. Recursion • A recursive function is a function that is defined in terms of itself. Similarly, an algorithm is said to be recursive if the same algorithm is invoked in the body. • An algorithm that calls itself is direct recursive. Algorithm A is said to be indirect recursive if it calls another algorithm which in turn calls A. • It is easy to prove the correctness of the recursive algorithm. Rules for designing recursive algorithms 1. Base cases – At least one base case must present. 2. Making progress/General case - for every recursive call , problem should make progress towards the base case. Each call must reduce the size of the problem. 3. Design rule - Assume that all the recursive calls work.
  • 25. Limitations of Recursion • Do not use recursion if your answer is “NO” to any of the following: • Is the algorithm or data structure naturally suited to recursion (Eg: binary search, towers of hanoi for algorithms and Trees for data structures) • Is the recursive solution shorter and more understandable • Does the recursive solution run within acceptable time and space limits(Eg: Use recursion only when their efficiency is logarithmic)
  • 26. Examples Algorithm factorial(n) { if n==0 return 1; else if n==1; return 1; else return n*factorial(n-1); } Factorial (n) = 1 if n=0 or n=1 = n* factorial(n-1) if n>1 If and b are positive integers, then GCD (a,b) = a if b=0 = GCD (b , a % b) otherwise Algorithm GCD(a,b) { if b==0 return a; else return GCD(b , a%b); } Base cases Recursive call Base case Recursive call
  • 27. Code Static Stack Heap Set of instructions(object code) Static data like global variable, symbolic constants Used for functions calls(both recursive and non recursive) Used for dynamically created objects( memory used at run-time) Run time memory of program Fixed part Variable part For every function call, an activation record is created and pushed(put) on the stack. On completion of function call, the activation record is popped(deleted) off from the stack. An activation record is a data structure that contains all the necessary information like Actual parameters, return value, machine status etc., to support function call and return.
  • 28. Code Static Main- AR Heap int main(){ int n; scanf("%d",&n); result=fact(n); printf("%d",result); return 0; } int fact(int n){ if(n<=1) return 1; else return n*fact(n-1); } Calling main( ) { fact(5) } fact(5) { return 5 * fact(4) } fact(4){ return 4 * fact(3) } fact(3){ return 3 * fact(2) } fact(2){ return 2 * fact(1) } fact(1){ return 1; } return 1 return 2 return 6 return 120 Fact(5) n=5,R.V1=5*R.V2= 120 Fact(4) n=4,R.V2=4*R.V3 =24 Fact(3) n=3,R.V3=3*R.V4 = 6 Fact(1) n=1,R.V5=1 Fact(2) n=2,R.V4=2*R.V5 = 2 return 24 Execution sequence of Recursive functions
  • 29. int main(){ int n; scanf("%d",&n); result=fact(n); printf("%d",result); return 0; } int fact(int n){ int result=1; for(i=2;i<=n;i++) result=result*I; return result; } Code Static Main- AR Heap Calling main( ) { fact(5) } fact(5) { calculate result return 120 } Execution sequence of Non-recursive functions return 120 fact(5) result=1, n=5, i
  • 30. Recursion vs iteration Recursion Iteration Programming language must support recursion to implement recursive algorithms because, some languages are not supported recursion(FORTRAN language doesn’t support recursion) • At present almost all the languages are supported recursion. Every programming language supports looping constructs for iteration. Needs a context switch when a recursive call is made. i.e., pushing and popping of the activation records. Not required. Needs an extra memory for every recursive call Not required.
  • 31. Towers of Hanoi - problem • Move n disks from pole A to pole B using auxiliary pole C. • Rules for moving the disks 1. Only one disk will be moved at a time. 2. Higher numbered disk is not placed above the lower number disk. Poles are named as A,B, and C. Disks are numbered as 1,2,3,…(lower size disk =1 and so on). Initial Configuration Solution
  • 32. Idea to solve TowersOfHanoi problem • Move n-1 disks from source pole A to auxiliary pole C , to make a provision for moving the last disk from source pole A to destination pole B. • Now, Move disk numbered n(i.e. 5 in this case) from source pole A to destination Pole B. The last disk is placed in the destination pole B. • Now, Move the remaining n-1 disks from auxiliary pole C to destination pole B. A B C 4 5 1 2 3 A B C 1 2 3 4 5 A B C 1 2 3 4 5 A B C 1 2 3 4 5
  • 33. Let us Assume src=‘A’, dest=‘B’ and aux=‘C’ 1. Algorithm TowersOfHanoi(n, src, dest, aux) 2. // Move n disks from src Pole to dest Pole 3. { 4. if (n>=1) then 5. { 6. TowersOfHanoi(n-1,src,aux,dest); 7. write(" move top disk from pole",src,"to top of pole",dest); 8. TowersOfHanoi(n-1,aux,dest,src); 9. } 10. } Recursive solution for Towers of Hanoi
  • 34. 1 2 3 A 2 3 A B C 1 3 A B C 1 2 3 A B C 1 2 B C 3 A B C 1 2 3 A B C 1 2 3 A B C 1 2 Step 1: Move disk 1 from A to B Step 2: Move disk 2 from A to C 3 A B C 1 2 Step 3: Move disk 1 from B to C Step 4: Move disk 3 from A to B Step 5: Move disk 1 from C to A Step 6: Move disk 2 from C to B Step 7:Move disk 1 from A to B
  • 35. Searching • To search an element in a list of elements. Some of the searching techniques are Linear search Binary search
  • 36. Linear search Algorithm LinearSearch (a,n,ele) // a is an array of size n // ele is the element to search { for i=1 to n do { if(a[i]==ele) return i; } return -1; } 1 2 3 4 5 6 7 8 9 10 8 25 -4 9 10 15 21 19 50 31 Is a[1]==ele ? 8==15 No Is a[2]==ele ? 25==15 No Is a[3]==ele? -4==15 No Is a[4]==ele? 9==15 No Is a[5]==ele? 10==15 No Is a[6]==ele? 15==15 yes Element to search ele=15
  • 37. Binary Search Algorithm BinarySearch(a,low,high,ele) // a is an array, low is lower index of the array // high is upper index of the array, ele is the element to //search { if low > high then return -1; else { mid = (low+high)/2; if ele==a[mid] then return mid; else if ele < a[mid] then return BinarySearch(a,low,mid-1,ele); else return BinarySearch(a,mid+1,high,ele); } • Input list must be sorted order. • For every iteration, it reduces the list to half. • For iteration 1 , size of list =n For iteration 2, size of list =n/2 For iteration 3, size of list = n/4 ………….
  • 38. 1 2 3 4 5 6 7 8 9 10 11 18 25 29 38 41 53 62 70 82 Low=1, high=10 Mid= (low +high)/2= (1+10)/2= 5 Is ele==a[5]? 62==38? No Is ele>a[5]? yes Low=mid+1 = 6, high=10 Mid= (low +high)/2= (6+10)/2= 8 Is ele==a[8]? 62==62? yes return mid 1 2 3 4 5 6 7 8 9 10 11 18 25 29 38 41 53 62 70 82
  • 39. 1 2 3 4 5 6 7 8 9 10 11 18 25 29 38 41 53 62 70 82 Low=1, high=10 Is low<high? Yes 1 2 3 4 5 6 7 8 9 10 11 18 25 29 38 41 53 62 70 82 1 2 3 4 5 6 7 8 9 10 11 18 25 29 38 41 53 62 70 82 Low= 9, high=mid-1=8 Is low<high? No return -1 Mid= (low +high)/2= (1+10)/2= 5 Is ele==a[5]? 69==38? No Is ele>a[5]? 69 > 38? yes Low=mid+1 = 6, high=10 Is low<high? Yes Mid= (low +high)/2= (6+10)/2= 8 Is ele==a[8]? 69==62 No Is ele>a[8]? 69>62? yes Low=mid+1 = 9, high=10 Is low<high? Yes Mid= (low +high)/2= (9+10)/2= 9 Is ele==a[8]? 69==70 No Is ele>a[8]? 69>70? No Is ele<a[8]? 69<70? Yes Element to search ele=69
  • 40. 1 2 3 4 5 6 7 8 9 10 11 18 25 29 38 41 53 62 70 82 Low=1, high=10 Is low<=high? Yes 1 2 3 4 5 6 7 8 9 10 11 18 25 29 38 41 53 62 70 82 1 2 3 4 5 6 7 8 9 10 11 18 25 29 38 41 53 62 70 82 Low= mid+1 =7, high=7 Is low<=high? No Mid= (low +high)/2= (1+10)/2= 5 Is ele==a[5]? 53==38? No Is ele>a[5]? 53 > 38? yes Low=mid+1 = 6, high=10 Is low<=high? Yes Mid= (low +high)/2= (6+10)/2= 8 Is ele==a[8]? 53==62 No Is ele>a[8]? 53>62? No Low=6, high=mid-1=7 Is low<=high? Yes Mid= (low +high)/2= (6+7)/2= 6 Is ele==a[6]? 53==41 No Is ele>a[6]? 53>41? Yes Element to search ele=53 Is ele>a[8]? 53<62? Yes Mid= (low +high)/2= (7+7)/2= 7 Is ele==a[7]? 53==53 Yes 1 2 3 4 5 6 7 8 9 10 11 18 25 29 38 41 53 62 70 82 return mid
  • 41. Sorting • Arranging the data in ascending or descending order. The output is a permutation or reordering of the input. • Various sorting algorithms are  Bubble sort  Selection sort  Insertion sort  Quick Sort  Merge Sort  Heap Sort
  • 42. Classification of Sorting Algorithms • internal sort(RAM) or external sort(Secondary Storage) • number of comparisons • number of swaps • memory usage • recursive or non-recursive • stability (equivalent elements retain their relative positions even after sorting) • adaptability (complexity changes based of pre sortedness)
  • 43. Bubble sorting • It is a comparison-based sorting algorithm. • It needs n-1 passes .( pass means visiting all elements once) • For every pass, it compares the adjacent elements and interchanges if ai >aj ( where i and j are indices of array) • After every pass, one element will be placed in the correct place. • Time complexity is O(n2)
  • 44. Bubble Sort algorithm Algorithm Bubble_sort(a,n) // a is an array of size n { int i,j; for i:=1 to n-1 do // number of passes { for j:=0 to (n-1)-i do { if(a[j]>a[j+1]) // comparing adjacent elements swap(a[j],a[j+1]); // Interchange a[j] and a[j+1] } } print_array(a,n); }
  • 45. -8 20 29 12 9 11 -8 20 29 12 9 11 20 -8 29 12 9 11 -8 20 12 29 9 11 -8 20 12 9 29 11 -8 20 12 9 11 29 0 1 2 3 4 5 Swap? Pass-1 No Yes No . of comparisons = 5 Yes Yes Yes After first pass, the highest element 29 is placed in correct position. Bubble sort –working process No . of swaps= 4
  • 46. Pass-2 considers only n-1 elements, i.e. 5 elements (in this case). Start: -8 20 12 9 11 29. Compare -8,20: swap? No. Compare 20,12: swap? Yes → -8 12 20 9 11 29. Compare 20,9: swap? Yes → -8 12 9 20 11 29. Compare 20,11: swap? Yes → -8 12 9 11 20 29. No. of comparisons = 4, no. of swaps = 3. After the second pass, the second highest element 20 is placed in its correct position.
  • 47. Pass-3 considers only n-2 elements, i.e. 4 elements (in this case). Start: -8 12 9 11 20 29. Compare -8,12: swap? No. Compare 12,9: swap? Yes → -8 9 12 11 20 29. Compare 12,11: swap? Yes → -8 9 11 12 20 29. No. of comparisons = 3, no. of swaps = 2. After the third pass, the third highest element 12 is placed in its correct position.
  • 48. Pass-4 considers only n-3 elements, i.e. 3 elements (in this case). Start: -8 9 11 12 20 29. Compare -8,9: swap? No. Compare 9,11: swap? No. No. of comparisons = 2, no. of swaps = 0. After the fourth pass, the fourth highest element 11 is placed in its correct position. Pass-5 considers only n-4 elements, i.e. 2 elements: compare -8,9: swap? No. No. of comparisons = 1, no. of swaps = 0. After the fifth pass, the fifth highest element 9 is placed in its correct position.
  • 49. Selection sort • It is a comparison-based sorting algorithm. • It needs n-1 passes. • In every pass, it selects the lowest element of the unsorted part and places it in its correct position. • After every pass, one more element is placed in its correct position. • Time complexity is O(n²)
  • 50. Selection Sort algorithm Algorithm selection_sort(a,n) // a is an array of size n a[1..n] { int i,j; for i:=1 to n-1 do // number of passes { minIndex=i; minVal=a[i]; for j:=i+1 to n do // find the index of the minimum element of the unsorted part { if a[j]<minVal then { minVal=a[j]; minIndex=j; } } swap(a[i],a[minIndex]); // Interchange a[i] and a[minIndex] } print_array(a,n); }
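A runnable Python sketch of the pseudocode above (0-based indexing; the tracking of minVal is folded into the index comparison):

```python
def selection_sort(a):
    """Selection sort: each pass moves the minimum of the unsorted part into place."""
    n = len(a)
    for i in range(n - 1):                   # n-1 passes
        min_index = i                        # assume current element is the minimum
        for j in range(i + 1, n):            # scan the unsorted part
            if a[j] < a[min_index]:
                min_index = j                # remember the smaller element's index
        a[i], a[min_index] = a[min_index], a[i]   # interchange a[i] and a[minIndex]
    return a

print(selection_sort([20, -8, 29, 12, 9, 11]))  # prints [-8, 9, 11, 12, 20, 29]
```

Note that exactly one swap happens per pass, which is why selection sort does at most n-1 swaps in total.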
  • 51. Selection sort – working process, Pass-1 (indices 0–5). Start: 20 -8 29 12 9 11; minIndex=0, minVal=20. Is -8<20? Yes → minIndex=1, minVal=-8. Is 29<-8? No. Is 12<-8? No. Is 9<-8? No. Is 11<-8? No. Swap a[0] and a[minIndex]: -8 20 29 12 9 11. After the first pass, the lowest element -8 is placed in its correct position.
  • 52. Pass-2. Start: -8 20 29 12 9 11; minIndex=1, minVal=20. Is 29<20? No. Is 12<20? Yes → minIndex=3, minVal=12. Is 9<12? Yes → minIndex=4, minVal=9. Is 11<9? No. Swap a[1] and a[minIndex]: -8 9 29 12 20 11. After the second pass, the second lowest element 9 is placed in its correct position.
  • 53. Pass-3. Start: -8 9 29 12 20 11; minIndex=2, minVal=29. Is 12<29? Yes → minIndex=3, minVal=12. Is 20<12? No. Is 11<12? Yes → minIndex=5, minVal=11. Swap a[2] and a[minIndex]: -8 9 11 12 20 29. After the third pass, the third lowest element 11 is placed in its correct position.
  • 54. Pass-4. Start: -8 9 11 12 20 29; minIndex=3, minVal=12. Is 20<12? No. Is 29<12? No. Swap a[3] and a[minIndex] (no change). After the fourth pass, the fourth lowest element 12 is placed in its correct position.
  • 55. Pass-5. Start: -8 9 11 12 20 29; minIndex=4, minVal=20. Is 29<20? No. Swap a[4] and a[minIndex] (no change). After the fifth pass, the fifth lowest element 20 is placed in its correct position and the list is sorted.
  • 56. Insertion sort • It partitions (logically) the list into two parts: sorted and unsorted. • In every pass, it takes the first element of the unsorted part and inserts it into its correct place in the sorted part. • Time complexity: Best case – O(n) (input list already in sorted order); Worst case – O(n²)
  • 57. Algorithm Insertion_sort(a,n) // a is an array of size n { for i=1 to n-1 do { key=a[i]; j=i-1; while(j>=0 && a[j]>key) { a[j+1]=a[j]; j=j-1; } a[j+1]=key; } }
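The pseudocode above maps directly to Python; a minimal sketch, where the while loop shifts larger elements right to make room for the key:

```python
def insertion_sort(a):
    """Insertion sort: inserts each element into its place in the sorted prefix."""
    for i in range(1, len(a)):
        key = a[i]                       # first element of the unsorted part
        j = i - 1
        while j >= 0 and a[j] > key:     # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                   # drop the key into the gap
    return a

print(insertion_sort([20, -8, 29, 12, 9, 11]))  # prints [-8, 9, 11, 12, 20, 29]
```

On an already-sorted input the while loop never runs, so only n-1 comparisons are made, giving the O(n) best case.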
  • 58. 20 -8 29 12 9 11 Pass-1 Sorted UnSorted swap -8 20 29 12 9 11 Sorted UnSorted Pass-2 -8 20 29 12 9 11 -8 20 29 12 9 11 Sorted UnSorted 20 -8 29 12 9 11 No swap Stop the process/pass -8 20 29 12 9 11
  • 59. -8 20 29 12 9 11 swap Pass-3 -8 20 12 29 9 11 swap -8 12 20 29 9 11 No swap -8 12 20 29 9 11 Sorted UnSorted -8 20 29 12 9 11 Sorted UnSorted Input to pass-3
  • 60. swap Pass-4 swap No swap -8 12 20 29 9 11 Sorted UnSorted -8 12 20 29 9 11 -8 12 20 9 29 11 -8 12 9 20 29 11 swap -8 9 12 20 29 11 -8 9 12 20 29 11 Sorted UnSorted Input to pass-4
  • 61. swap Pass-5 swap No swap -8 9 12 20 29 11 Sorted UnSorted -8 9 12 20 29 11 -8 9 12 20 11 29 -8 9 12 11 20 29 swap -8 9 11 12 20 29 -8 9 11 12 20 29 Sorted Input to pass-5 Stop the iteration/pass
  • 62. Already-sorted input: 20 30 42 58 68 71. In each of Pass-1 through Pass-5, the first element of the unsorted part is already greater than the last element of the sorted part, so there is no swap and the pass stops after a single comparison. Advantage of insertion sort – suitable for sorted/partially sorted lists.
  • 63. Pass-1 Sorted UnSorted -8 20 29 12 9 11 Sorted UnSorted 20 -8 29 12 9 11 Initial configuration • Pick the first element in unsorted list . Let key =-8 • Compare the key element with the sorted list to find its location 20 -8 29 12 9 11 Is 20 > -8 Yes 20 20 29 12 9 11 Copy 20 to its immediate right Now copy key element at the location stopped -8 20 29 12 9 11 Repeat the above process until comparison says No or completion of the sorted list
  • 64. Pass-2 -8 20 29 12 9 11 Sorted UnSorted Input to pass-2 • Pick the first element in the unsorted list. Let key = 29 • Compare the key element with the sorted list to find its location -8 20 29 12 9 11 Is 20 > 29? No Now copy the key element at the location where the comparison stopped -8 20 29 12 9 11
  • 65. Algorithm Insertion_sort(a,n) // a is an array of size n { for i=1 to n-1 do { key=a[i]; j=i-1; while(j>=0 && a[j]>key) { a[j+1]=a[j]; j=j-1; } a[j+1]=key; } } While loop moves the elements to right which are greater than key , to make a room for new element(known as key) Insert the key element in its correct position Key - new element to insert into sorted list
  • 66. Advantages • Simple implementation • Efficient for small data • Adaptive: If the input list is presorted [may not be completely] then insertion sort takes O(n + d), where d is the number of inversions • Practically more efficient than selection and bubble sorts, even though all of them have O(n²) worst-case complexity • Stable: Maintains the relative order of input data if the keys are the same • In-place: It requires only a constant amount O(1) of additional memory space • Online: Insertion sort can sort the list as it receives it
  • 67. Comparisons to Other Sorting Algorithms • Bubble sort takes about n²/2 comparisons and n²/2 swaps (inversions) in both the average case and the worst case. • Selection sort takes about n²/2 comparisons and n swaps. • Insertion sort takes about n²/4 comparisons and n²/8 swaps in the average case, and double that in the worst case. • Insertion sort is almost linear for partially sorted input. • Selection sort best suits elements with big values and small keys, since it minimizes the number of swaps.
  • 68. Quick sort It is also called partition exchange sort. Quick sort uses the divide-and-conquer algorithmic technique. Divide: The array A[low...high] is partitioned into two non-empty sub-arrays A[low...q] and A[q+1...high], such that each element of A[low...q] is less than or equal to each element of A[q+1...high]. The index q is computed as part of this partitioning procedure. Conquer: The two sub-arrays A[low...q] and A[q+1...high] are sorted by recursive calls to Quick sort. Combine: No work is needed for combining because the sub-arrays are sorted in place. The recursive algorithm consists of four steps: 1) If there are one or no elements in the array to be sorted, return. 2) Pick an element in the array to serve as the “pivot”. (Usually, the left-most element in the array is used.) 3) Split the array into two parts – one with elements smaller than the pivot and the other with elements larger than the pivot. 4) Recursively repeat the algorithm for both parts of the original array.
  • 69. Algorithm for Partitioning the array • step1: Select the first element of the list as the pivot • step2: Start a pointer (the left pointer) at the second item in the list • step3: Start a pointer (the right pointer) at the last item in the list • step4: While the value at the left pointer is less than or equal to the pivot value, move the left pointer to the right (add 1); stop when the value at the left pointer is greater than the pivot value. • step5: While the value at the right pointer is greater than the pivot value, move the right pointer to the left (subtract 1); stop when the value at the right pointer is less than or equal to the pivot value. • step6: If the left pointer position is still to the left of the right pointer position, swap the values at these two locations. • step7: If the left pointer and right pointer haven't crossed, repeat from step 4; otherwise swap the pivot with the value at the right pointer, which is the pivot's final position.
  • 70. Algorithm QuickSort(a,p,q) // sorts the elements a[p],…,a[q] which // reside in the global array a[1:n] // into ascending order { if ( p < q ) then // If there is more than one element { // divide P into two subproblems. j:= partition(a,p,q); QuickSort(a,p,j-1); QuickSort(a,j+1,q); // There is no need for combining solutions. } } Quick sort Algorithm Algorithm Partition(a,low,high) { pivot:= a[low]; i:=low; j:=high; while(i<j) { while (a[i]<=pivot) i++; while(a[j]>pivot) j--; if(i<j) then swap(a[i],a[j]); } a[low]:=a[j]; a[j]:=pivot; return j; }
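A runnable Python sketch of the first-element-pivot scheme above. It adds one bounds check on i (stopping at high) that the pseudocode leaves implicit; otherwise the scans would run past the end when the pivot is the maximum:

```python
def partition(a, low, high):
    """Partition a[low..high] around pivot a[low]; return the pivot's final index."""
    pivot = a[low]
    i, j = low + 1, high
    while True:
        while i <= high and a[i] <= pivot:   # scan right past elements <= pivot
            i += 1
        while a[j] > pivot:                  # scan left past elements > pivot
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]          # out-of-place pair: swap and continue
        else:
            break                            # pointers crossed
    a[low], a[j] = a[j], a[low]              # place pivot in its final position
    return j

def quick_sort(a, p, q):
    """Sort a[p..q] in place."""
    if p < q:                                # more than one element
        j = partition(a, p, q)
        quick_sort(a, p, j - 1)
        quick_sort(a, j + 1, q)

a = [10, 5, 8, 21, -7, 12, 24, 9, 17]
quick_sort(a, 0, len(a) - 1)
print(a)  # prints [-7, 5, 8, 9, 10, 12, 17, 21, 24]
```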
  • 71. Quicksort - example 10 5 8 21 -7 12 24 9 17 Pivot 5 8 -7 9 10 21 12 24 17 5 -7 8 9 10 21 12 17 24 8 9 -7 10 12 17 21 24 Pivot Pivot 5 Pivot Pivot -7 5 8 9 10 12 21 24 17 <= >
  • 72. Partition – working process. List: 10 8 -5 20 7 14 31 19 -3 11 (indices 0–9), pivot = a[0] = 10; i scans right, j scans left.
i: Is 8<=10? Yes. Is -5<=10? Yes. Is 20<=10? No → i stops at 3. j: Is 11>10? Yes. Is -3>10? No → j stops at 8.
Is i<j? Yes (3<8) → swap a[3] & a[8]: 10 8 -5 -3 7 14 31 19 20 11.
i: Is -3<=10? Yes. Is 7<=10? Yes. Is 14<=10? No → i stops at 5. j: Is 20>10? Yes. Is 19>10? Yes. Is 31>10? Yes. Is 14>10? Yes → j stops at 4.
Is i<j? No (5>4) → swap the pivot with a[j]=a[4]: 7 8 -5 -3 10 14 31 19 20 11.
  • 73. Partition – working process. List: 23 18 51 20 17 42 31 19 -3 36 (indices 0–9), pivot = a[0] = 23.
i: Is 18<=23? Yes. Is 51<=23? No → i stops at 2. j: Is 36>23? Yes. Is -3>23? No → j stops at 8.
Is i<j? Yes → swap a[2] & a[8]: 23 18 -3 20 17 42 31 19 51 36.
i: -3, 20, 17 are all <=23; Is 42<=23? No → i stops at 5. j: Is 51>23? Yes. Is 19>23? No → j stops at 7.
Is i<j? Yes → swap a[5] & a[7]: 23 18 -3 20 17 19 31 42 51 36.
i: Is 19<=23? Yes. Is 31<=23? No → i stops at 6. j: Is 31>23? Yes. Is 19>23? No → j stops at 5.
Is i<j? No → stop the scan (i>j) and swap the pivot with a[j]=a[5]: 19 18 -3 20 17 23 31 42 51 36.
  • 74. Time Complexity Recurrence Equation for Divide-and-Conquer Algorithms T(n) = c if n = 1, where c is a constant T(n) = a·T(n/b) + f(n) if n > 1, where f(n) is the time for dividing the problem P and combining the solutions to the subproblems
  • 75. Time complexity of Quick sort Recurrence Equation for Quick sort (best case) Let us assume that the partition divides the list into two halves (approx.) T(n) = 1 if n<=1 T(n) = T(n/2) + T(n/2) + c·n if n>1 T(n/2) – time for solving the first partition list T(n/2) – time for solving the second partition list c·n – time for partitioning the list. This solves to O(n log n). Recurrence Equation for Quick sort (worst case) Let us assume the list is already in sorted order, so the partition divides the n elements into 0 and n-1 T(n) = 1 if n<=1 T(n) = T(0) + T(n-1) + c·n if n>1 T(0) – time for solving the first (empty) partition list T(n-1) – time for solving the second partition list c·n – time for partitioning the list. This solves to O(n²).
  • 76. Quicksort – Worst case example 14 18 28 39 51 59 90 72 Pivot 14 18 28 39 51 59 90 72 Pivot 14 18 28 39 51 59 90 72 Pivot 39 51 59 90 72 28 14 18 Pivot 51 59 90 72 39 28 14 18 39 28 14 18 Pivot 51 59 90 72 Pivot 39 28 14 18 51 59 90 72 39 28 14 18 51 59 90 72 Pivot <= >
  • 77. Benefits: • It works rapidly and effectively. • On average it is one of the fastest comparison-based sorting algorithms in practice. • Quick sort has an average space complexity of O(log n) for the recursion stack, making it a good choice for situations where space is limited.
  • 78. Limitations: • This sorting technique is considered unstable since it does not maintain the original order of equal keys. • When the pivot element is the largest or smallest, or when all of the elements are equal, quicksort hits its worst case; its performance is significantly impacted by these scenarios. • It is harder to implement since it is a recursive process, especially if recursion isn't available.
  • 79. Applications: • Quicksort is a cache-friendly algorithm, as it has good locality of reference[1] when used for arrays. • The second recursive call is a tail call, so tail-call optimization can be applied. • It is an in-place sort that does not require extra storage memory for the data. • It is used in operational research and event-driven simulation. • In numerical computations and scientific research, many efficiently developed algorithms use priority queues and quick sort for sorting, for accuracy in calculations. • Variants of quicksort (such as quickselect) are used to find the Kth smallest or largest element. • It is used to implement sorting of primitive types in some language runtimes. • If data is sorted, then searching for information becomes easy and efficient. Note: [1] the tendency of a computer program to access the same set of memory locations over a particular time period
  • 80. Performance Worst-case time complexity O(n²) Best-case time complexity O(n log n) Average-case time complexity O(n log n) Space complexity O(log n) on average for the recursion stack, O(n) in the worst case
  • 81. Merge sort Merge sort uses the divide-and-conquer algorithmic technique. 1. Divide the array into 2 parts of lengths n/2 and n - n/2 respectively (if n is odd, we round off the value of n/2). Let us call these arrays the left half and the right half, respectively. 2. Recursively sort the left half array and the right half array. 3. Merge the left half array and the right half array to get the full array sorted.
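The three steps above can be sketched in Python; this version returns a new sorted list rather than sorting in place, which keeps the merge step simple:

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal keys in order (stable)
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])                 # one of these two is already empty
    out.extend(right[j:])
    return out

def merge_sort(a):
    """Divide, recursively sort both halves, then merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print(merge_sort([23, 18, 51, 20, 17, 42, 31, 19, -3, 36]))
# prints [-3, 17, 18, 19, 20, 23, 31, 36, 42, 51]
```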
  • 84. Merge sort – working process on 23 18 51 20 17 42 31 19 -3 36 (indices 0–9).
Divide: mid=(low+high)/2=(0+9)/2=4 → left half [23 18 51 20 17] and right half [42 31 19 -3 36].
Left half: mid=(0+4)/2=2 → [23 18 51] and [20 17]. [23 18 51]: mid=(0+2)/2=1 → [23 18] and [51]; merge 23,18 → 18 23; merge with 51 → 18 23 51. [20 17]: mid=(3+4)/2=3 → merge 20,17 → 17 20. Merge 18 23 51 with 17 20 → 17 18 20 23 51.
Right half: mid=(5+9)/2=7 → [42 31 19] and [-3 36]. [42 31 19]: mid=(5+7)/2=6 → [42 31] and [19]; mid=(5+6)/2=5 → merge 42,31 → 31 42; merge with 19 → 19 31 42. [-3 36]: mid=(8+9)/2=8 → merge → -3 36. Merge 19 31 42 with -3 36 → -3 19 31 36 42.
Final merge: 17 18 20 23 51 with -3 19 31 36 42 → -3 17 18 19 20 23 31 36 42 51.
  • 85. Time complexity of Merge sort Recurrence Equation for Merge sort Let us assume that the division splits the list into two halves (approx.) T(n) = 0 if n=1 T(n) = T(n/2) + T(n/2) + c·n if n>1 T(n/2) – time for sorting the first half T(n/2) – time for sorting the second half c·n – time for merging the two halves. This solves to O(n log n) in the best, average, and worst cases alike.
  • 86. [Diagram] The complete divide-and-merge tree for 23 18 51 20 17 42 31 19 -3 36, ending with the fully merged list -3 17 18 19 20 23 31 36 42 51.
  • 87. Advantages of Merge Sort: • Stability: Merge sort is a stable sorting algorithm, which means it maintains the relative order of equal elements in the input array. • Guaranteed worst-case performance: Merge sort has a worst-case time complexity of O(N logN), which means it performs well even on large datasets. • Parallelizable: Merge sort is a naturally parallelizable algorithm, which means it can be easily parallelized to take advantage of multiple processors or threads.
  • 88. Drawbacks of Merge Sort: • Space complexity: Merge sort requires additional memory to store the merged sub-arrays during the sorting process. • Not in-place: Merge sort is not an in-place sorting algorithm; it requires O(n) additional memory to store the sorted data. This can be a disadvantage in applications where memory usage is a concern. • Not always optimal for small datasets: For small datasets, merge sort has higher overhead than some simpler algorithms, such as insertion sort. This can result in slower performance for very small datasets.
  • 89. Applications of Merge Sort: • Sorting large datasets: Merge sort is particularly well-suited for sorting large datasets due to its guaranteed worst-case time complexity of O(n log n). • External sorting: Merge sort is commonly used in external sorting, where the data to be sorted is too large to fit into memory. • Custom sorting: Merge sort can be adapted to handle different input distributions, such as partially sorted, nearly sorted, or completely unsorted data. • Inversion Count Problem: Inversion Count for an array indicates – how far (or close) the array is from being sorted. If the array is already sorted, then the inversion count is 0, but if the array is sorted in reverse order, the inversion count is the maximum.
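The inversion-count application mentioned above falls out of the merge step almost for free: whenever an element is taken from the right half ahead of remaining left-half elements, each of those left-hand elements forms one inversion. A sketch in Python:

```python
def count_inversions(a):
    """Return (sorted copy of a, number of pairs i<j with a[i]>a[j])."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, inv_left = count_inversions(a[:mid])
    right, inv_right = count_inversions(a[mid:])
    merged, i, j = [], 0, 0
    inv = inv_left + inv_right
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            inv += len(left) - i        # every remaining left element beats right[j]
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inv

print(count_inversions([3, 2, 1])[1])   # prints 3 (reverse order: maximum inversions)
print(count_inversions([1, 2, 3])[1])   # prints 0 (already sorted)
```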
  • 90. Sorting – comparison (Bubble sort vs Selection sort vs Insertion sort)
Number of passes/iterations: n-1 for all three.
Number of comparisons: Bubble – n-1 in pass 1, n-2 in pass 2, …, at most (n-1)+(n-2)+…+1 in total; Selection – the same, at most (n-1)+(n-2)+…+1; Insertion – depends on the elements, at most n-i comparisons in pass i, where 1<=i<=n-1.
Number of swaps/interchanges: Bubble – at most n-1 in pass 1, n-2 in pass 2, …, at most (n-1)+(n-2)+…+1 in total; Selection – exactly one swap per pass, n-1 in total; Insertion – depends on the elements, at most n-i shifts in pass i, where 1<=i<=n-1.
Time complexity: Bubble – best O(n²), worst O(n²); Selection – best O(n²), worst O(n²); Insertion – best O(n), worst O(n²).
Space complexity: O(1) auxiliary for all three (in-place).
  • 91. Sorting – comparison (Quick sort vs Merge sort)
Uses divide-and-conquer strategy: Quick sort – Yes; Merge sort – Yes.
Is in-place sorting? Quick sort – Yes; Merge sort – No.
Time complexity: Quick sort – best O(n log n), worst O(n²); Merge sort – best O(n log n), worst O(n log n).
Is stable? Quick sort – No; Merge sort – Yes.
  • 93. • A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in the sorted output as they appear in the input data set • Selection sort and quick sort are not stable • Bubble, insertion, and merge sort are stable
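The stability definition above can be seen directly with records that share a key. Python's built-in sorted() is stable, so records with equal keys keep their input order (the records and keys here are just illustrative):

```python
# Each record is (name, key); 'pear' and 'fig' share key 1, 'apple' and 'plum' share key 2.
records = [("apple", 2), ("pear", 1), ("plum", 2), ("fig", 1)]

# Stable sort by the numeric key: within each key, input order is preserved.
by_key = sorted(records, key=lambda r: r[1])
print(by_key)
# prints [('pear', 1), ('fig', 1), ('apple', 2), ('plum', 2)]
```

An unstable sort would be free to emit ('fig', 1) before ('pear', 1), since they compare equal on the key.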