- 1. Module 2: Divide and Conquer Approach CSC402.2: Describe, apply and analyze the complexity of divide and conquer strategy. Approx Weightage: 20-25 marks
- 2. Contents ● General method ● Analysis of Binary search. ● Analysis of Merge sort ● Analysis of Quick sort ● Analysis of Finding minimum and maximum algorithms
- 3. Divide and Conquer If we can break a single big problem into smaller sub-problems, solve the smaller subproblems and combine their solutions to find the solution for the original big problem, it becomes easier to solve the whole problem. The concept of Divide and Conquer involves three steps: 1. Divide the problem into multiple small problems. 2. Conquer the subproblems by solving them. The idea is to break down the problem into atomic subproblems, where they are actually solved. 3. Combine the solutions of the subproblems to find the solution of the actual problem.
- 7. Divide and Conquer Divide/Break This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a part of the original problem. This step generally takes a recursive approach, dividing the problem until no sub-problem is further divisible. At this stage, sub-problems become atomic in size but still represent some part of the actual problem. Conquer/Solve This step receives many smaller sub-problems to be solved. Generally, at this level, the sub-problems are small enough to be considered 'solved' on their own. Merge/Combine When the smaller sub-problems are solved, this stage recursively combines their solutions until they form the solution of the original problem. This algorithmic approach works recursively; the conquer and merge steps work so closely together that they often appear as one.
- 8. Divide And Conquer algorithm
DAC(a, i, j)
{
    if (small(a, i, j))
        return Solution(a, i, j)
    else
        mid = divide(a, i, j)     // f1(n)
        b = DAC(a, i, mid)        // T(n/2)
        c = DAC(a, mid+1, j)      // T(n/2)
        d = combine(b, c)         // f2(n)
        return d
}
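The template above can be made concrete with a short runnable sketch. Here it computes the sum of a[i..j]; the problem choice and the name dac_sum are illustrative, not from the slides, and the small/divide/combine roles from the pseudocode appear as comments:

```python
def dac_sum(a, i, j):
    """Divide-and-conquer sum of a[i..j], mirroring the DAC template."""
    if i == j:                      # small(a, i, j): one element left
        return a[i]                 # Solution(a, i, j)
    mid = (i + j) // 2              # divide(a, i, j)
    b = dac_sum(a, i, mid)          # first recursive call: T(n/2)
    c = dac_sum(a, mid + 1, j)      # second recursive call: T(n/2)
    return b + c                    # combine(b, c)
```

Because the two recursive calls are independent, they could also be evaluated in parallel, which is the parallelism advantage discussed later.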
- 9. Recurrence Relation for DAC algorithm The recurrence relation for the above program is:
T(n) = O(1), if n is small
T(n) = f1(n) + 2T(n/2) + f2(n), otherwise
- 10. Time Complexity of Divide and Conquer Algorithm T(n) = aT(n/b) + f(n), where, n = size of input a = number of subproblems in the recursion n/b = size of each subproblem. All subproblems are assumed to have the same size. f(n) = cost of the work done outside the recursive call, which includes the cost of dividing the problem and cost of merging the solutions
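For instance (a sketch, assuming the standard merge sort parameters), merge sort has a = 2 subproblems of size n/2 (b = 2) and f(n) = O(n) work for dividing and merging, so:

```latex
T(n) = 2\,T(n/2) + cn = 4\,T(n/4) + 2cn = \dots = 2^k\,T(n/2^k) + k\,cn,
\quad\text{so for } n = 2^k:\ T(n) = O(n \log_2 n).
```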
- 11. Pros and cons of Divide and Conquer Approach ● The divide and conquer approach supports parallelism, as the sub-problems are independent. ● Hence, an algorithm designed using this technique can run on a multiprocessor system or on different machines simultaneously. ● However, most algorithms in this approach are designed using recursion, so memory usage is high. ● Recursive functions use the call stack, where each function call's state needs to be stored.
- 13. Binary Search Binary search is an algorithm, used extensively in computer science and mathematics, that locates a specific element in a sorted dataset. It works by repeatedly dividing the dataset in half and comparing the target value with the middle element until the target value is found or determined to be absent.
- 17. Binary Search
- 18. Binary_Search(a, lower_bound, upper_bound, val) // 'a' is the given array, 'lower_bound' is the index of the first array element, 'upper_bound' is the index of the last array element, 'val' is the value to search
Step 1: set beg = lower_bound, end = upper_bound, pos = -1
Step 2: repeat steps 3 and 4 while beg <= end
Step 3: set mid = (beg + end)/2
Step 4: if a[mid] = val
            set pos = mid
            print pos
            go to step 6
        else if a[mid] > val
            set end = mid - 1
        else
            set beg = mid + 1
        [end of if]
        [end of loop]
Step 5: if pos = -1
            print "value is not present in the array"
        [end of if]
Step 6: exit
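A direct, runnable Python translation of these steps (iterative, returning the index instead of printing; the function name binary_search is an illustrative choice):

```python
def binary_search(a, val):
    """Search sorted list a for val; return its index, or -1 if absent."""
    beg, end = 0, len(a) - 1            # Step 1: search the whole list
    while beg <= end:                   # Step 2: loop while the range is non-empty
        mid = (beg + end) // 2          # Step 3: middle index
        if a[mid] == val:               # Step 4: found
            return mid
        elif a[mid] > val:              # target is in the left half
            end = mid - 1
        else:                           # target is in the right half
            beg = mid + 1
    return -1                           # Step 5: value not present
```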
- 23. Complexity Analysis of Binary Search ● Since the search starts with the middle element, in the best case the middle element of the array itself is the target element. ● Otherwise, the length of the array under consideration is halved. ● The equation T(n) = T(n/2) + 1 is known as the recurrence relation for binary search; it solves to O(log n).
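Unrolling the recurrence (a sketch, assuming n is a power of 2) shows where the logarithm comes from:

```latex
T(n) = T(n/2) + 1 = T(n/4) + 2 = \dots = T(n/2^k) + k.
\text{The halving stops when } n/2^k = 1,\ \text{i.e. } k = \log_2 n,
\text{ so } T(n) = O(\log_2 n).
```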
- 25. Merge Sort The merge sort working rule involves the following steps: 1. Divide the unsorted array into subarrays, each containing a single element. 2. Take adjacent pairs of single-element subarrays and merge them to form sorted arrays of two elements. 3. Repeat the process till a single sorted array is obtained.
- 26. Merge Sort ● An array of size ‘N’ is divided into two parts of size ‘N/2’ each. ● The arrays are then further divided till we reach a single element. ● The base case here is reaching one single element. ● When the base case is hit, we start merging the left part and the right part, and we get a sorted array at the end. ● In short, merge sort repeatedly breaks down an array into subarrays until each subarray consists of a single element, and then merges those subarrays in a manner that results in a sorted array.
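A minimal runnable sketch of this process in Python (the name merge_sort and the list-returning style are illustrative choices, not from the slides):

```python
def merge_sort(a):
    """Recursively split the list, then merge the two sorted halves."""
    if len(a) <= 1:                     # base case: single element (or empty)
        return a
    mid = len(a) // 2                   # divide into two nearly equal parts
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # merge: repeatedly take the smaller front element of the two halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])             # append any leftovers of the left half
    merged.extend(right[j:])            # append any leftovers of the right half
    return merged
```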
- 30. Complexity Analysis of Merge Sort Merge sort repeatedly divides the array into two equally sized parts. Thus merge sort time complexity depends on the number of division stages. The number of division stages is log2 n. On each merge stage, a total of n elements are merged: ● Stage 1 - n×1 ● Stage 2 - n/2×2 ● Stage 3 - n/4×4 Merge sort time complexity is calculated from the time per stage: each merge stage has linear time complexity, and there are log2 n stages, so sorting n elements takes n × log2 n operations in total. Hence, regardless of the arrangement, the time complexity of merge sort is O(n log n).
- 31. Analysis of Merge Sort Let T (n) be the total time taken by the Merge Sort algorithm.
- 36. Analysis of Merge Sort Time Complexity Best Case Time Complexity of Merge Sort The best case scenario occurs when the elements are already sorted in ascending order. If two sorted arrays of size n need to be merged, the minimum number of comparisons will be n. This happens when all elements of the first array are less than all elements of the second array. The best case time complexity of merge sort is O(n log n). Average Case Time Complexity of Merge Sort The average case scenario occurs when the elements are jumbled (neither in ascending nor descending order). This depends on the number of comparisons. The average case time complexity of merge sort is O(n log n). Worst Case Time Complexity of Merge Sort The worst-case scenario occurs when the arrangement of elements forces the maximum number of comparisons at each merge. In this case, for two sorted arrays of size n, the maximum number of comparisons will be 2n − 1. The worst-case time complexity of merge sort is still O(n log n).
- 37. • Quick sort algorithm: • Is one of the most efficient sorting algorithms • Is based on the divide and conquer approach • Successively divides the problem into smaller parts until the problems become so small that they can be directly solved Sorting Data by Using Quick Sort
- 38. • In the quick sort algorithm, you: • Select an element from the list, called the pivot. • Partition the list into two parts such that: • All the elements towards the left end of the list are smaller than the pivot. • All the elements towards the right end of the list are greater than the pivot. • Store the pivot at its correct position between the two parts of the list. • You repeat this process for each of the two sublists created after partitioning. • This process continues until one element is left in each sublist. Implementing Quick Sort Algorithm
- 39. • To understand the implementation of the quick sort algorithm, consider an unsorted list of numbers stored in an array. Implementing Quick Sort Algorithm (Contd.) arr: [28, 55, 46, 38, 16, 89, 83, 30] (indices 0-7)
- 40. • Let us sort this unsorted list. arr: [28, 55, 46, 38, 16, 89, 83, 30] Implementing Quick Sort Algorithm (Contd.)
- 41. • Select a pivot: arr[0] = 28. arr: [28, 55, 46, 38, 16, 89, 83, 30], pivot = 28 Implementing Quick Sort Algorithm (Contd.)
- 42. • Start from the left end of the list (at index 1). • Move in the left to right direction. • Search for the first element that is greater than the pivot value. arr: [28, 55, 46, 38, 16, 89, 83, 30], pivot = 28; greater value = 55 (index 1) Implementing Quick Sort Algorithm (Contd.)
- 43. • Start from the right end of the list. • Move in the right to left direction. • Search for the first element that is smaller than or equal to the pivot value. arr: [28, 55, 46, 38, 16, 89, 83, 30], pivot = 28; greater value = 55 (index 1), smaller value = 16 (index 4) Implementing Quick Sort Algorithm (Contd.)
- 44. • Interchange the greater value with the smaller value: swap 55 and 16. arr: [28, 55, 46, 38, 16, 89, 83, 30] → [28, 16, 46, 38, 55, 89, 83, 30] Implementing Quick Sort Algorithm (Contd.)
- 45. • Continue the search for an element greater than the pivot. • Start from arr[2] and move in the left to right direction. • Search for the first element that is greater than the pivot value. arr: [28, 16, 46, 38, 55, 89, 83, 30], pivot = 28; greater value = 46 (index 2) Implementing Quick Sort Algorithm (Contd.)
- 46. • Continue the search for an element smaller than the pivot. • Start from arr[3] and move in the right to left direction. • Search for the first element that is smaller than or equal to the pivot value. arr: [28, 16, 46, 38, 55, 89, 83, 30], pivot = 28; smaller value = 16 (index 1) Implementing Quick Sort Algorithm (Contd.)
- 47. • The smaller value is now on the left-hand side of the greater value, so no swap is performed; the values remain the same. Implementing Quick Sort Algorithm (Contd.) arr: [28, 16, 46, 38, 55, 89, 83, 30], pivot = 28; smaller value = 16 (index 1), greater value = 46 (index 2)
- 48. • The list is now partitioned into two sublists. • List 1 contains all values less than or equal to the pivot: [28, 16] (indices 0-1, including the pivot). • List 2 contains all the values greater than the pivot: [46, 38, 55, 89, 83, 30] (indices 2-7). Implementing Quick Sort Algorithm (Contd.) arr: [28, 16, 46, 38, 55, 89, 83, 30], pivot = 28
- 49. • Replace the pivot value with the last element of List 1: swap 28 and 16. • The pivot value, 28, is now placed at its correct position (index 1) in the list. Implementing Quick Sort Algorithm (Contd.) arr: [28, 16, 46, 38, 55, 89, 83, 30] → [16, 28, 46, 38, 55, 89, 83, 30]
- 50. • Truncate the last element, that is, the pivot, from List 1: List 1 = [16]. Implementing Quick Sort Algorithm (Contd.) arr: [16, 28, 46, 38, 55, 89, 83, 30]
- 51. • List 1 has only one element ([16]). • Therefore, no sorting is required. Implementing Quick Sort Algorithm (Contd.) List 2: [46, 38, 55, 89, 83, 30]
- 52. • Sort the second list, List 2 = [46, 38, 55, 89, 83, 30] (indices 2-7). Implementing Quick Sort Algorithm (Contd.) arr: [16, 28, 46, 38, 55, 89, 83, 30]
- 53. • Select a pivot. • The pivot in this case will be arr[2], that is, 46. Implementing Quick Sort Algorithm (Contd.) arr: [16, 28, 46, 38, 55, 89, 83, 30], pivot = 46
- 54. Implementing Quick Sort Algorithm (Contd.) • Start from the left end of the list (at index 3). • Move in the left to right direction. • Search for the first element that is greater than the pivot value. arr: [16, 28, 46, 38, 55, 89, 83, 30], pivot = 46; greater value = 55 (index 4)
- 55. Implementing Quick Sort Algorithm (Contd.) • Start from the right end of the list (at index 7). • Move in the right to left direction. • Search for the first element that is smaller than or equal to the pivot value. arr: [16, 28, 46, 38, 55, 89, 83, 30], pivot = 46; greater value = 55 (index 4), smaller value = 30 (index 7)
- 56. Implementing Quick Sort Algorithm (Contd.) • Interchange the greater value with the smaller value: swap 55 and 30. arr: [16, 28, 46, 38, 55, 89, 83, 30] → [16, 28, 46, 38, 30, 89, 83, 55]
- 57. Implementing Quick Sort Algorithm (Contd.) • Continue the search for an element greater than the pivot. • Start from arr[5] and move in the left to right direction. • Search for the first element that is greater than the pivot value. arr: [16, 28, 46, 38, 30, 89, 83, 55], pivot = 46; greater value = 89 (index 5)
- 58. Implementing Quick Sort Algorithm (Contd.) • Continue the search for an element smaller than the pivot. • Start from arr[6] and move in the right to left direction. • Search for the first element that is smaller than the pivot value. arr: [16, 28, 46, 38, 30, 89, 83, 55], pivot = 46; smaller value = 30 (index 4)
- 59. Implementing Quick Sort Algorithm (Contd.) • The smaller value is on the left-hand side of the greater value, so no swap is performed; the values remain the same. arr: [16, 28, 46, 38, 30, 89, 83, 55], pivot = 46; smaller value = 30 (index 4), greater value = 89 (index 5)
- 60. Implementing Quick Sort Algorithm (Contd.) • Divide the list into two sublists. • Sublist 1 contains all values less than or equal to the pivot: [46, 38, 30] (indices 2-4, including the pivot). • Sublist 2 contains all the values greater than the pivot: [89, 83, 55] (indices 5-7). arr: [16, 28, 46, 38, 30, 89, 83, 55]
- 61. Implementing Quick Sort Algorithm (Contd.) • Replace the pivot value with the last element of Sublist 1: swap 46 and 30. • The pivot value, 46, is now placed at its correct position (index 4) in the list. • This process is repeated until all elements reach their correct positions. arr: [16, 28, 46, 38, 30, 89, 83, 55] → [16, 28, 30, 38, 46, 89, 83, 55]
- 62. QuickSort(low, high)
1. If (low > high): a. Return
2. Set pivot = arr[low]
3. Set i = low + 1
4. Set j = high
5. Repeat step 6 until i > high or arr[i] > pivot // Search for an element greater than pivot
6. Increment i by 1
7. Repeat step 8 until j < low or arr[j] <= pivot // Search for an element smaller than or equal to pivot
8. Decrement j by 1
9. If i < j: // If greater element is on the left of smaller element
   a. Swap arr[i] with arr[j]
Implementing Quick Sort Algorithm (Contd.)
- 63. 10. If i <= j: a. Go to step 5 // Continue the search
11. If low < j: a. Swap arr[low] with arr[j] // Swap pivot with the last element in the first part of the list
12. QuickSort(low, j - 1) // Apply quick sort on the list left of the pivot
13. QuickSort(j + 1, high) // Apply quick sort on the list right of the pivot
Implementing Quick Sort Algorithm (Contd.)
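One possible Python translation of this pseudocode (a sketch: the in-place, first-element-pivot scheme follows the walkthrough above, with the two scanning loops folded into a single partition loop):

```python
def quick_sort(arr, low, high):
    """Sort arr[low..high] in place, using the first element as the pivot."""
    if low >= high:                            # zero or one element: done
        return
    pivot = arr[low]
    i, j = low + 1, high
    while True:
        while i <= high and arr[i] <= pivot:   # search for an element > pivot
            i += 1
        while arr[j] > pivot:                  # search for an element <= pivot
            j -= 1
        if i < j:                              # greater element left of smaller one
            arr[i], arr[j] = arr[j], arr[i]    # swap them and continue the search
        else:
            break                              # pointers crossed: partition done
    arr[low], arr[j] = arr[j], arr[low]        # place pivot at its final position
    quick_sort(arr, low, j - 1)                # sort the sublist left of the pivot
    quick_sort(arr, j + 1, high)               # sort the sublist right of the pivot
```

On the slide example [28, 55, 46, 38, 16, 89, 83, 30] this performs the same swaps as the walkthrough and yields [16, 28, 30, 38, 46, 55, 83, 89].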
- 64. • The total time taken by this sorting algorithm depends on the position of the pivot value. • The worst case occurs when the list is already sorted. • If the first element is chosen as the pivot, it leads to a worst case efficiency of O(n2). • If you select the median of all values as the pivot, the efficiency would be O(n log n). Determining the Efficiency of Quick Sort Algorithm
- 65. • Merge sort algorithm: • Is based on the divide and conquer approach • Divides the list into two sublists of sizes as nearly equal as possible • Sorts the two sublists separately by using merge sort • Merges the sorted sublists into one single list Sorting Data by Using Merge Sort
- 66. • To understand the implementation of the merge sort algorithm, consider an unsorted list of numbers stored in an array. Implementing Merge Sort Algorithm arr: [53, 10, 30, 76, 3, 57, 24] (indices 0-6)
- 67. • Let us sort this unsorted list. Implementing Merge Sort Algorithm (Contd.) arr: [53, 10, 30, 76, 3, 57, 24]
- 68. • The first step to sort data by using merge sort is to split the list into two parts. Implementing Merge Sort Algorithm (Contd.) arr: [53, 10, 30, 76, 3, 57, 24]
- 69. • The list is split into two parts: [53, 10, 30, 76] and [3, 57, 24]. Implementing Merge Sort Algorithm (Contd.)
- 70. • The list has an odd number of elements; therefore, the left sublist ([53, 10, 30, 76]) is longer than the right sublist ([3, 57, 24]) by one entry. Implementing Merge Sort Algorithm (Contd.)
- 71. • Further divide the two sublists into nearly equal parts. Implementing Merge Sort Algorithm (Contd.) sublists: [53, 10], [30, 76], [3, 57], [24]
- 72. • Further divide the sublists until single elements remain. Implementing Merge Sort Algorithm (Contd.) sublists: [53], [10], [30], [76], [3], [57], [24]
- 73. • There is a single element left in each sublist. • Sublists with one element require no sorting. Implementing Merge Sort Algorithm (Contd.)
- 74. • Start merging the sublists to obtain a sorted list. Implementing Merge Sort Algorithm (Contd.) sublists: [10, 53], [30, 76], [3, 57], [24]
- 75. • Further merge the sublists. Implementing Merge Sort Algorithm (Contd.) sublists: [10, 30, 53, 76], [3, 24, 57]
- 76. • Again, merge the sublists. Implementing Merge Sort Algorithm (Contd.) arr: [3, 10, 24, 30, 53, 57, 76]
- 77. • The list is now sorted. Implementing Merge Sort Algorithm (Contd.) arr: [3, 10, 24, 30, 53, 57, 76]
- 78. • Write an algorithm to implement merge sort: MergeSort(low, high)
1. If (low >= high): a. Return
2. Set mid = (low + high)/2
3. Divide the list into two sublists of nearly equal lengths, and sort each sublist by using merge sort. The steps to do this are as follows: a. MergeSort(low, mid) b. MergeSort(mid + 1, high)
4. Merge the two sorted sublists: a. Set i = low b. Set j = mid + 1 Implementing Merge Sort Algorithm (Contd.)
- 79. c. Set k = low
d. Repeat until i > mid or j > high:
i. If (arr[i] <= arr[j]) Store arr[i] at index k in array B Increment i by 1 Else Store arr[j] at index k in array B Increment j by 1
ii. Increment k by 1
e. Repeat until i > mid: // If there are still some elements in the first sublist, append them to the new list
i. Store arr[i] at index k in array B ii. Increment i by 1 iii. Increment k by 1
f. Repeat until j > high: // If there are still some elements in the second sublist, append them to the new list
i. Store arr[j] at index k in array B ii. Increment j by 1 iii. Increment k by 1 Implementing Merge Sort Algorithm (Contd.)
- 80. • To sort the list by using merge sort algorithm, you need to recursively divide the list into two nearly equal sublists until each sublist contains only one element. • To divide the list into sublists of size one requires log n passes. • In each pass, a maximum of n comparisons are performed. • Therefore, the total number of comparisons will be a maximum of n × log n. • The efficiency of merge sort is equal to O(n log n) • There is no distinction between best, average, and worst case efficiencies of merge sort because all of them require the same amount of time. Determining the Efficiency of Merge Sort Algorithm
- 81. • Which algorithm uses the following procedure to sort a given list of elements? 1. Select an element from the list called a pivot. 2. Partition the list into two parts such that one part contains elements lesser than the pivot, and the other part contains elements greater than the pivot. 3. Place the pivot at its correct position between the two lists. 4. Sort the two parts of the list using the same algorithm. Just a minute • Answer: • Quick sort
- 82. • On which algorithm design technique are quick sort and merge sort based? Just a minute • Answer: • Quick sort and merge sort are based on the divide and conquer technique.
- 83. • Linear Search: • Is the simplest searching method • Is also referred to as sequential search • Involves comparing the items sequentially with the elements in the list Performing Linear Search
- 84. • The linear search would begin by comparing the required element with the first element in the list. • If the values do not match: • The required element is compared with the second element in the list. • If the values still do not match: • The required element is compared with the third element in the list. • This process continues, until: • The required element is found or the end of the list is reached. Implementing Linear Search
- 85. • Write an algorithm to search for a given employee ID in a list of n employee records by using the linear search algorithm:
1. Read the employee ID to be searched
2. Set i = 0
3. Repeat step 4 until i >= n or arr[i] = employee ID
4. Increment i by 1
5. If i >= n: Display “Not Found” Else Display “Found”
Implementing Linear Search (Contd.)
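A runnable Python sketch of these steps (0-indexed; the returned comparison count is an illustrative extra, anticipating the efficiency discussion that follows):

```python
def linear_search(arr, target):
    """Scan arr from the front; return (index of first match, comparisons),
    or (-1, comparisons) when target is absent."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1            # one comparison per inspected element
        if value == target:
            return i, comparisons   # found: first occurrence
    return -1, comparisons          # reached the end: not found
```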
- 86. • The efficiency of a searching algorithm is determined by the running time of the algorithm. • In the best case scenario: • The element is found at the first position in the list. • The number of comparisons in this case is 1. • The best case efficiency of linear search is therefore O(1). • In the worst case scenario: • The element is found at the last position of the list or does not exist in the list. • The number of comparisons in this case is equal to the number of elements. • The worst case efficiency of linear search is therefore O(n). Determining the Efficiency of Linear Search
- 87. • In the average case scenario: • The number of comparisons for linear search can be determined by finding the average of the number of comparisons in the best and worst cases. • The average case number of comparisons for linear search is (n + 1)/2, which is O(n). Determining the Efficiency of Linear Search (Contd.)
- 88. • You have to apply linear search to search for an element in an array containing 5,000 elements. If, at the end of the search, you find that the element is not present in the array, how many comparisons you would have made to search the required element in the given list? Just a minute • Answer: • 5,000
- 89. • Problem Statement: • Write a program to search a given number in an array that contains a maximum of 20 numbers by using the linear search algorithm. If there are more than one occurrences of the element to be searched, then the program should display the position of the first occurrence. The program should also display the total number of comparisons made. Activity: Performing Linear Search
- 90. • Binary search algorithm: • Is used for searching large lists • Searches the element in very few comparisons • Can be used only if the list to be searched is sorted Performing Binary Search
- 91. • Consider an example. You have to search for the name Steve in a telephone directory that is sorted alphabetically. • To search the name Steve by using the binary search algorithm: • You open the telephone directory at the middle to determine which half contains the name. • Open that half at the middle to determine which quarter of the directory contains the name. • Repeat this process until the name Steve is found. • Binary search reduces the number of pages to be searched by half each time. Implementing Binary Search
- 92. • Consider a list of 9 elements in a sorted array. Implementing Binary Search (Contd.) arr: [9, 13, 17, 19, 25, 29, 39, 40, 47] (indices 0-8)
- 93. • You have to search for the element 13 in the given list. Implementing Binary Search (Contd.) arr: [9, 13, 17, 19, 25, 29, 39, 40, 47]
- 94. • Determine the index of the middlemost element in the list: Mid = (Lower bound + Upper bound)/2 = (0 + 8)/2 = 4. The middle element is arr[4] = 25. Implementing Binary Search (Contd.) arr: [9, 13, 17, 19, 25, 29, 39, 40, 47] (lower bound = 0, upper bound = 8)
- 95. • 13 is not equal to the middle element (25); since 13 < 25, the upper bound becomes 3 and the list is again divided into two halves: Mid = (Lower bound + Upper bound)/2 = (0 + 3)/2 = 1. The middle element is now arr[1] = 13. Implementing Binary Search (Contd.) arr: [9, 13, 17, 19, 25, 29, 39, 40, 47] (lower bound = 0, upper bound = 3)
- 96. • 13 is equal to the middle element. • Element found at index 1. Implementing Binary Search (Contd.) arr: [9, 13, 17, 19, 25, 29, 39, 40, 47]
- 97. • Write an algorithm to implement binary search algorithm. 1. Accept the element to be searched 2. Set lowerbound = 0 3. Set upperbound = n – 1 4. Set mid = (lowerbound + upperbound)/2 5. If arr[mid] = desired element: a. Display “Found” b. Go to step 10 6. If desired element < arr[mid]: a. Set upperbound = mid – 1 Implementing Binary Search (Contd.)
- 98. 7. If desired element > arr[mid]: a. Set lowerbound = mid + 1 8. If lowerbound <= upperbound: a. Go to step 4 9. Display “Not Found” 10.Exit Implementing Binary Search (Contd.)
- 99. • In binary search, with every step, the search area is reduced to half. • In the best case scenario, the element to be searched is found at the middlemost position of the list: • The number of comparisons in this case is 1. • In the worst case scenario, the element is not found in the list: • After the first bisection, the search space is reduced to n/2 elements, where n is the number of elements in the original list. • After the second bisection, the search space is reduced to n/4, that is, n/2^2 elements. • After i bisections, the search space is reduced to n/2^i elements; the search ends when n/2^i = 1, that is, after log2 n bisections. The worst case efficiency of binary search is therefore O(log n). Determining the Efficiency of Binary Search
- 100. • In ___________ search algorithm, you begin at one end of the list and scan the list until the desired item is found or the end of the list is reached. Just a minute • Answer: • linear
- 101. • To implement __________ search algorithm, the list should be sorted. Just a minute • Answer: • binary
- 102. • Problem Statement: • Write a program to search a number in an array that contains a maximum of 20 elements by using binary search. Assume that the array elements are entered in ascending order. If the number to be searched is present at more than one location in the array, the search should stop when one match is found. The program should also display the total number of comparisons made. Activity: Performing Binary Search
- 103. • In this session, you learned that: • Quick sort and merge sort algorithms are based on the divide and conquer technique. • To sort a list of items by using the quick sort algorithm, you need to: • Select a pivot value. • Partition the list into two sublists such that one sublist contains all items less than the pivot, and the second sublist contains all items greater than the pivot. • Place the pivot at its correct position between the two sublists. • Sort the two sublists by using quick sort. Summary
- 104. • The total time taken by the quick sort algorithm depends on the position of the pivot value and the initial ordering of elements. • The worst case efficiency of the quick sort algorithm is O(n2). • The best case efficiency of the quick sort algorithm is O(n log n). • To sort a list of items by using merge sort, you need to: • Divide the list into two sublists. • Sort each sublist by using merge sort. • Merge the two sorted sublists. • The merge sort algorithm has an efficiency of O(n log n). Summary (Contd.)
- 105. • The best case efficiency of linear search is O(1) and the worst case efficiency of linear search is O(n). • To apply binary search algorithm, you should ensure that the list to be searched is sorted. • The best case efficiency of binary search is O(1) and the worst case efficiency of binary search is O(log n). Summary (Contd.)
- 114. Min Max Algorithm
- 116. 1. Given the array A = [1, 2, 5, 3, 8, 10], divide it into two subarrays A1 = [1, 2, 5] and A2 = [3, 8, 10]. 2. The min and max of each subarray will be ● A1: min = 1, max = 5 ● A2: min = 3, max = 10 3. The min of A will be the min of [1, 3] (the mins of A1 and A2). 4. The max of A will be the max of [5, 10] (the maxes of A1 and A2).
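The steps above can be sketched as a recursive Python function (the name min_max and the index-pair interface are illustrative choices):

```python
def min_max(a, i, j):
    """Return (minimum, maximum) of a[i..j] by divide and conquer."""
    if i == j:                          # one element: it is both min and max
        return a[i], a[i]
    if j == i + 1:                      # two elements: a single comparison
        return (a[i], a[j]) if a[i] < a[j] else (a[j], a[i])
    mid = (i + j) // 2                  # divide into two halves
    min1, max1 = min_max(a, i, mid)     # conquer the left half
    min2, max2 = min_max(a, mid + 1, j) # conquer the right half
    # combine: one comparison for the mins, one for the maxes
    return min(min1, min2), max(max1, max2)
```

For example, min_max([1, 2, 5, 3, 8, 10], 0, 5) returns (1, 10), matching the trace above.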
- 122. Substitution Method