3. 1. Bubble Sort
Bubble Sort is a simple algorithm based on one idea: scan the list, and wherever
two consecutive elements are out of order, swap them.
To do this:
• Repeatedly traverse the unsorted part of the array,
• comparing consecutive elements, and
• interchange them when they are out of order.
The name of the algorithm refers to the way the largest elements "sink" to the
bottom of the array while the smaller elements "bubble" to the top.
6. Algorithm
int temp;
for (int i = 0; i < a.length; i++)
{
    int flag = 0;
    for (int j = 0; j < a.length - 1 - i; j++)
    {
        if (a[j] > a[j+1])      // consecutive elements out of order?
        {
            temp = a[j];        // swap them
            a[j] = a[j+1];
            a[j+1] = temp;
            flag = 1;
        }
    }
    if (flag == 0)              // no swaps in this pass: array is sorted
    {
        break;
    }
}
7. 2. Selection Sort
The Selection Sort algorithm is based on the idea of finding the minimum (or maximum)
element in the unsorted part of an array and putting it in its correct position in the
sorted part.
Assume that the array A = [7, 5, 4, 2] needs to be sorted in ascending order.
The minimum element in the array, i.e. 2, is searched for and then swapped with the
element currently located at the first position, i.e. 7. Then the minimum element in
the remaining unsorted array is searched for and put in the second position, and so on.
9. PSEUDOCODE
void sort(int arr[])
{
    int n = arr.length;
    for (int i = 0; i < n-1; i++)
    {
        // Find the minimum element in unsorted array
        int min_idx = i;
        for (int j = i+1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;
        // Swap the found minimum element with the first element
        int temp = arr[min_idx];
        arr[min_idx] = arr[i];
        arr[i] = temp;
    }
}
11. 3. Insertion Sort
1. We start by making the second element of the given array, i.e. the element at
index 1, the key.
2. We compare the key element with the element(s) before it, in this case the
element at index 0:
• If the key element is less than the first element, we insert the key element
before the first element.
• If the key element is greater than the first element, we insert it after
the first element.
3. Then we make the third element of the array the key, compare it with the
elements to its left, and insert it at the right position.
4. We go on repeating this until the array is sorted.
13. EXAMPLE: 11, 12, 13, 5, 6
Let us loop for i = 1 (second element of the array) to 4 (last element of the array).
• i = 1. 12 remains at its position, since 11 is smaller than 12.
11, 12, 13, 5, 6
• i = 2. 13 remains at its position, since all elements in A[0..i-1] are smaller than 13.
11, 12, 13, 5, 6
• i = 3. 5 moves to the beginning, and all other elements from 11 to 13 move one position ahead of their current
position.
5, 11, 12, 13, 6
• i = 4. 6 moves to the position after 5, and the elements from 11 to 13 move one position ahead of their current position.
5, 6, 11, 12, 13
14. Algorithm- Insertion Sort
void sort(int arr[])
{
int n = arr.length;
for (int i = 1; i < n; ++i) {
int key = arr[i];
int j = i - 1;
/* Move elements of arr[0..i-1], that are
greater than key, to one position ahead
of their current position */
while (j >= 0 && arr[j] > key) {
arr[j + 1] = arr[j];
j = j - 1;
}
arr[j + 1] = key;
}
}
15. 4. Merge Sort
• Merge Sort uses the divide-and-conquer paradigm for sorting.
• It divides the problem into subproblems and solves them individually.
• It then combines the results of the subproblems to get the solution to the
original problem.
The Merge Sort algorithm works in the following steps:
• It divides the given unsorted array into two halves: left and right sub-arrays.
• The sub-arrays are divided recursively.
• This division continues until the size of each sub-array becomes 1.
• Once each sub-array contains only a single element, the merge procedure is
called.
• The merge procedure combines these trivially sorted arrays to produce the final
sorted array.
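As a sketch, the steps above might be rendered in Java like this (the class and method names are our own, not from the slides):

```java
// Illustrative merge sort; names are our own, not from the slides.
class MergeSortDemo {
    // Sorts arr[left..right] in place, in ascending order.
    static void mergeSort(int[] arr, int left, int right) {
        if (left >= right) return;          // sub-array of size <= 1 is sorted
        int mid = (left + right) / 2;
        mergeSort(arr, left, mid);          // divide: sort left half
        mergeSort(arr, mid + 1, right);     // divide: sort right half
        merge(arr, left, mid, right);       // combine the two sorted halves
    }

    // Merges the sorted runs arr[left..mid] and arr[mid+1..right].
    static void merge(int[] arr, int left, int mid, int right) {
        int[] tmp = new int[right - left + 1];
        int i = left, j = mid + 1, k = 0;
        while (i <= mid && j <= right)
            tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
        while (i <= mid) tmp[k++] = arr[i++];   // drain leftover left run
        while (j <= right) tmp[k++] = arr[j++]; // drain leftover right run
        System.arraycopy(tmp, 0, arr, left, tmp.length);
    }
}
```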
19. Quicksort Algorithm
Given an array of n elements (e.g., integers):
• If the array contains only one element, return
• Else
• pick one element to use as the pivot.
• Partition the elements into two sub-arrays:
• Elements less than or equal to the pivot
• Elements greater than the pivot
• Quicksort the two sub-arrays
• Return the results
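A minimal Java sketch of this outline, using a Lomuto-style partition with the first element as the pivot (the later slides step through a different two-index partition scheme; all names here are our own):

```java
// Illustrative quicksort; a Lomuto-style partition with the first
// element as pivot, not the slides' exact two-index scheme.
class QuickSortDemo {
    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;           // one element (or none): already sorted
        int p = partition(a, lo, hi);   // pivot lands at index p
        quickSort(a, lo, p - 1);        // elements <= pivot
        quickSort(a, p + 1, hi);        // elements > pivot
    }

    // Partitions a[lo..hi] around pivot a[lo]; returns the pivot's final index.
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[lo];
        int i = lo;                     // i marks the end of the "<= pivot" region
        for (int j = lo + 1; j <= hi; j++) {
            if (a[j] <= pivot) {        // grow the "<= pivot" region
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
        }
        int t = a[lo]; a[lo] = a[i]; a[i] = t; // put pivot between the regions
        return i;
    }
}
```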
21. Pick Pivot Element
There are a number of ways to pick the pivot element. In this
example, we will use the first element in the array:
40 20 10 80 60 50 7 30 100
22. Partitioning the Array
Given a pivot, partition the elements of the array
such that the resulting array consists of:
1. One sub-array that contains elements <= pivot
2. Another sub-array that contains elements > pivot
The sub-arrays are stored in the original data array.
Partitioning loops through, swapping elements
below/above the pivot.
51. Quicksort Analysis: Best Case
• Assume that keys are random, uniformly distributed.
• Recursion:
1. Partition splits the array into two sub-arrays of size n/2
2. Quicksort each sub-array
• Depth of recursion tree: O(log2 n)
• Number of accesses in partition: O(n)
• Best case running time: O(n log2 n)
• Worst case running time?
58. Quicksort: Worst Case
• Assume the first element is chosen as the pivot.
• Assume we get an array that is already in order:
2 4 10 12 13 50 57 63 100
pivot_index = 0
[0] [1] [2] [3] [4] [5] [6] [7] [8]
(too_big_index scans from the left end; too_small_index scans from the right end.)
59. 1. While data[too_big_index] <= data[pivot]
++too_big_index
2. While data[too_small_index] > data[pivot]
--too_small_index
3. If too_big_index < too_small_index
swap data[too_big_index] and data[too_small_index]
4. While too_small_index > too_big_index, go to 1.
5. Swap data[too_small_index] and data[pivot_index]
2 4 10 12 13 50 57 63 100
pivot_index = 0
[0] [1] [2] [3] [4] [5] [6] [7] [8]
(Slides 59-65 stepped the two indices across this already-sorted array:
too_big_index stops immediately, too_small_index walks all the way back to the
pivot, the indices cross with nothing to swap, and the pivot swaps with itself.
The final partition leaves one side empty: every element except the pivot is
> data[pivot], and nothing is <= data[pivot].)
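The five partition steps above can be sketched in Java roughly as follows (a Hoare-style scheme; the bounds checks and the starting positions of the two indices are our assumptions, since the slides do not state them):

```java
// Sketch of the slides' two-index partition. Assumes the pivot is
// data[lo] and that too_big_index/too_small_index start just inside
// the range; these details are our assumptions.
class PartitionDemo {
    // Partitions data[lo..hi] around data[lo]; returns the pivot's final index.
    static int partition(int[] data, int lo, int hi) {
        int pivot = data[lo];
        int tooBig = lo + 1;      // too_big_index: scans rightward
        int tooSmall = hi;        // too_small_index: scans leftward
        while (true) {
            // Step 1: advance past elements <= pivot
            while (tooBig <= hi && data[tooBig] <= pivot) tooBig++;
            // Step 2: retreat past elements > pivot (stops at data[lo] == pivot)
            while (data[tooSmall] > pivot) tooSmall--;
            // Step 3: swap a misplaced pair; Step 4: loop until indices cross
            if (tooBig < tooSmall) {
                int t = data[tooBig];
                data[tooBig] = data[tooSmall];
                data[tooSmall] = t;
            } else break;
        }
        // Step 5: put the pivot at its final position
        int t = data[lo]; data[lo] = data[tooSmall]; data[tooSmall] = t;
        return tooSmall;
    }
}
```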
66. Quicksort Analysis: Worst Case
• Assume that keys are random, uniformly distributed.
• Recursion:
1. Partition splits the array into two sub-arrays:
• one sub-array of size 0
• the other sub-array of size n-1
2. Quicksort each sub-array
• Depth of recursion tree: O(n)
• Number of accesses per partition: O(n)
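The best-case and worst-case behaviour summarized on these slides corresponds to two standard recurrences (our notation, not from the slides):

```latex
% Best case: each partition splits the array into two halves of size n/2
T(n) = 2\,T(n/2) + O(n) \quad\Longrightarrow\quad T(n) = O(n \log_2 n)

% Worst case: each partition splits off sub-arrays of size 0 and n-1
T(n) = T(n-1) + O(n) \quad\Longrightarrow\quad T(n) = O(n^2)
```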
70. Quicksort Analysis: Summary
• Assume that keys are random, uniformly distributed.
• Best case running time: O(n log2 n)
• Worst case running time: O(n²)!!!
• What can we do to avoid the worst case?
72. Improved Pivot Selection
Pick the median value of three elements from the data array:
data[0], data[n/2], and data[n-1].
Use this median value as the pivot.
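A small Java sketch of this selection rule (the class and method names are ours, not from the slides):

```java
// Illustrative median-of-three pivot selection; names are our own.
class PivotDemo {
    // Returns the index of the median of data[0], data[n/2], and data[n-1].
    static int medianOfThree(int[] data) {
        int n = data.length;
        int a = 0, b = n / 2, c = n - 1;
        int x = data[a], y = data[b], z = data[c];
        // y is the median if it sits between x and z (in either order)
        if ((x <= y && y <= z) || (z <= y && y <= x)) return b;
        // otherwise x is the median if it sits between y and z
        if ((y <= x && x <= z) || (z <= x && x <= y)) return a;
        return c;                       // else z is the median
    }
}
```

Using this index as the pivot makes the already-sorted worst case above behave like the balanced best case, since the middle element of a sorted array is its median.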
73. Improving Performance of Quicksort
• Improved selection of pivot.
• For sub-arrays of size 3 or less, sort by brute force:
• Sub-array of size 1: trivial
• Sub-array of size 2:
• if(data[first] > data[second]) swap them
• Sub-array of size 3: left as an exercise.