Divide and Conquer Algorithm
Presentation Transcript

  • Divide and Conquer
  • Divide and Conquer
    – Divide the problem into a number of subproblems
    – Conquer the subproblems (solve them)
    – Combine the subproblem solutions to get the solution to the original problem
    – Note: often the “conquer” step is done recursively
  • Divide-and-ConquerA general methodology for using recursion to design efficient algorithmsIt solves a problem by: – Diving the data into parts – Finding sub solutions for each of the parts – Constructing the final answer from the sub solutions
  • Divide and Conquer
    – Based on dividing the problem into subproblems
    – Approach:
      1. Divide the problem into smaller subproblems. Subproblems must be of the same type; subproblems do not need to overlap.
      2. Solve each subproblem recursively.
      3. Combine the solutions to solve the original problem.
    – Usually contains two or more recursive calls
  • Divide-and-conquer technique (diagram): a problem of size n is split into subproblem 1 and subproblem 2, each of size n/2; the solutions to the two subproblems are combined into a solution to the original problem.
  • Divide and Conquer Algorithms
    Based on dividing the problem into subproblems:
    – Divide the problem into sub-problems. Subproblems must be of the same type; subproblems do not need to overlap.
    – Conquer by solving the sub-problems recursively. If the sub-problems are small enough, solve them in brute-force fashion.
    – Combine the solutions of the sub-problems into a solution of the original problem (the tricky part).
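As an illustration of the three steps above, here is a minimal generic sketch in Python (my own illustration, not part of the slides; the parameter names is_small, solve_directly, divide and combine are placeholders):

    def divide_and_conquer(problem, is_small, solve_directly, divide, combine):
        # Base case: conquer small instances directly (brute force).
        if is_small(problem):
            return solve_directly(problem)
        # Divide into subproblems of the same type ...
        subproblems = divide(problem)
        # ... solve each one recursively ...
        subsolutions = [divide_and_conquer(sub, is_small, solve_directly, divide, combine)
                        for sub in subproblems]
        # ... and combine the sub-solutions into the final answer.
        return combine(subsolutions)

    # Trivial usage example: summing a list by splitting it in half.
    total = divide_and_conquer(
        [3, 1, 4, 1, 5, 9],
        is_small=lambda xs: len(xs) <= 1,
        solve_directly=lambda xs: xs[0] if xs else 0,
        divide=lambda xs: [xs[:len(xs) // 2], xs[len(xs) // 2:]],
        combine=sum,
    )
    print(total)  # 23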
  • D-A-C
    For divide-and-conquer algorithms the running time is mainly affected by 3 criteria:
    – The number of sub-instances into which a problem is split.
    – The ratio of initial problem size to sub-problem size.
    – The number of steps required to divide the initial instance and to combine sub-solutions.
  • Algorithm for General Divide and Conquer Sorting
    Begin Algorithm StartSort(L)
      If L has length greater than 1 then
        Begin
          Partition the list into two lists, high and low
          StartSort(high)
          StartSort(low)
          Combine high and low
        End
    End Algorithm
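A possible Python rendering of this StartSort pseudocode (a sketch only: the slide does not say how to "partition the list into high and low", so splitting around the first element as a pivot is my own assumption):

    def start_sort(L):
        # A list of length 0 or 1 is already sorted.
        if len(L) <= 1:
            return L
        # Partition the list into two lists, low and high.
        # (Assumption: split around the first element as a pivot.)
        pivot, rest = L[0], L[1:]
        low = [x for x in rest if x <= pivot]
        high = [x for x in rest if x > pivot]
        # Sort the parts recursively, then combine them.
        return start_sort(low) + [pivot] + start_sort(high)

    print(start_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]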
  • Analyzing Divide-and-Conquer Algorithms
    When an algorithm contains a recursive call to itself, its running time can often be described by a recurrence equation, which describes the overall running time on a problem of size n in terms of the running time on smaller inputs. For divide-and-conquer algorithms, we get recurrences that look like:
    T(n) = Θ(1)                      if n < c
    T(n) = aT(n/b) + D(n) + C(n)     otherwise
  • Analyzing Divide-and-Conquer Algorithms (cont.)
    where
    – a = the number of subproblems we break the problem into
    – n/b = the size of the subproblems (in terms of n)
    – D(n) is the time to divide the problem of size n into the subproblems
    – C(n) is the time to combine the subproblem solutions to get the answer for the problem of size n
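For instance (anticipating the merge sort slides later in this deck), merge sort splits a problem into a = 2 subproblems of size n/b = n/2, divides in D(n) = Θ(1) and combines (merges) in C(n) = Θ(n), so its recurrence is T(n) = 2T(n/2) + Θ(n), which solves to T(n) = Θ(n lg n).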
  • The Algorithm
    Let's assume the following array: 2 6 7 3 5 6 9 2 4 1
    – We divide the values into pairs: (2 6) (7 3) (5 6) (9 2) (4 1)
    – We sort each pair: (2 6) (3 7) (5 6) (2 9) (1 4)
    – Get the first pair (both lowest values!)
  • The Algorithm (2)
    – We compare these values (2 and 6) with the values of the next pair (3 and 7): (2 6) (3 7) (5 6) (2 9) (1 4). Lowest: 2, 3
    – The next one (5 and 6). Lowest: 2, 3
    – The next one (2 and 9). Lowest: 2, 2
    – The next one (1 and 4). Lowest: 1, 2
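One way to read this example in code (a sketch under my own interpretation that the scan keeps track of the two lowest values seen so far; the function name and structure are not from the slides):

    def two_smallest_by_pairs(values):
        # Divide the values into consecutive pairs and sort each pair.
        pairs = [sorted(values[i:i + 2]) for i in range(0, len(values), 2)]
        # Scan the sorted pairs, keeping the two lowest values seen so far.
        low1 = low2 = float("inf")
        for pair in pairs:
            for v in pair:
                if v < low1:
                    low1, low2 = v, low1
                elif v < low2:
                    low2 = v
        return low1, low2

    print(two_smallest_by_pairs([2, 6, 7, 3, 5, 6, 9, 2, 4, 1]))  # (1, 2)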
  • Examples: Divide and Conquer
    – Binary Search
    – Heap Construction
    – Tower of Hanoi
    – Exponentiation (Fibonacci Sequence)
    – Quick Sort
    – Merge Sort
    – Multiplying Large Integers
    – Matrix Multiplication
    – Closest Pair
  • Quicksort
  • Design Follows the divide-and-conquer paradigm. Divide: Partition (separate) the array A[p..r] into two (possibly nonempty) subarrays A[p..q–1] and A[q+1..r].  Each element in A[p..q–1] ≤ A[q].  A[q] ≤ each element in A[q+1..r].  Index q is computed as part of the partitioning procedure. Conquer: Sort the two subarrays A[p..q–1] & A[q+1..r] by recursive calls to quicksort. Combine: Since the subarrays are sorted in place – no work is needed to combine them. How do the divide and combine steps of quicksort compare with those of merge sort?
  • Pseudocode
    Quicksort(A, p, r)
      if p < r then
        q := Partition(A, p, r);
        Quicksort(A, p, q – 1);
        Quicksort(A, q + 1, r)
      fi

    Partition(A, p, r)
      x := A[r];
      i := p – 1;
      for j := p to r – 1 do
        if A[j] ≤ x then
          i := i + 1;
          A[i] ↔ A[j]
        fi
      od;
      A[i + 1] ↔ A[r];
      return i + 1

    (Diagram: Partition splits A[p..r] around the pivot 5 into A[p..q – 1] with elements ≤ 5 and A[q+1..r] with elements ≥ 5.)
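A direct Python translation of the pseudocode above (0-based indices instead of the 1-based pseudocode; a sketch, not part of the slides):

    def partition(A, p, r):
        # Partition A[p..r] (inclusive) around the pivot x = A[r].
        x = A[r]
        i = p - 1
        for j in range(p, r):            # j = p, ..., r - 1
            if A[j] <= x:
                i += 1
                A[i], A[j] = A[j], A[i]  # A[i] <-> A[j]
        A[i + 1], A[r] = A[r], A[i + 1]  # move the pivot between the two regions
        return i + 1

    def quicksort(A, p, r):
        # Sort A[p..r] in place.
        if p < r:
            q = partition(A, p, r)
            quicksort(A, p, q - 1)
            quicksort(A, q + 1, r)

    A = [2, 5, 8, 3, 9, 4, 1, 7, 10, 6]  # the array used in the example slides
    quicksort(A, 0, len(A) - 1)
    print(A)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]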
  • Example
    Partitioning the subarray 2 5 8 3 9 4 1 7 10 6 with pivot x = A[r] = 6 (i and j advance as in the Partition pseudocode):
    initially:      2 5 8 3 9 4 1 7 10 6
    next iteration: 2 5 8 3 9 4 1 7 10 6
    next iteration: 2 5 8 3 9 4 1 7 10 6
    next iteration: 2 5 8 3 9 4 1 7 10 6
    next iteration: 2 5 3 8 9 4 1 7 10 6
  • Example (Continued)
    next iteration:   2 5 3 8 9 4 1 7 10 6
    next iteration:   2 5 3 8 9 4 1 7 10 6
    next iteration:   2 5 3 4 9 8 1 7 10 6
    next iteration:   2 5 3 4 1 8 9 7 10 6
    next iteration:   2 5 3 4 1 8 9 7 10 6
    next iteration:   2 5 3 4 1 8 9 7 10 6
    after final swap: 2 5 3 4 1 6 9 7 10 8
  • Partitioning Select the last element A[r] in the subarray A[p..r] as the pivot – the element around which to partition. As the procedure executes, the array is partitioned into four (possibly empty) regions. 1. A[p..i] — All entries in this region are ≤ pivot. 2. A[i+1..j – 1] — All entries in this region are > pivot. 3. A[r] = pivot. 4. A[j..r – 1] — Not known how they compare to pivot. The above hold before each iteration of the for loop, and constitute a loop invariant. (4 is not part of the LI.)
  • Correctness of PartitionUse loop invariant.Initialization: – Before first iteration • A[p..i] and A[i+1..j – 1] are empty – Conds. 1 and 2 are satisfied (trivially). Partition(A, p, r) Partition(A, p, r) x, i := A[r], p – 1; • r is the index of the pivot – Cond. 3 is forij := p to r p – 1; x, := A[r], satisfied. – 1 do for j := p to r – 1 doMaintenance: if A[j] ≤ xxthen if A[j] ≤ then ii:= ii+ 1; := + 1; – Case 1: A[j] > x A[i] ↔ A[j] A[i] ↔ A[j] • Increment j only. fi fi od; od; • LI is maintained. A[i + 1] ↔ A[r]; A[i + 1] ↔ A[r]; return ii+ 11 return +
  • Correctness of Partition (Case 1, diagram): A[j] > x, so only j advances; the ≤ x region A[p..i] and the > x region A[i+1..j – 1] are unchanged, and the pivot x stays at A[r].
  • Correctness of Partition
    Case 2: A[j] ≤ x
    – Increment i, then swap A[i] and A[j]: condition 1 is maintained.
    – Increment j: condition 2 is maintained.
    – A[r] is unaltered: condition 3 is maintained.
    (Diagram: the element ≤ x at position j is swapped into the ≤ x region, then both i and j advance.)
  • Correctness of Partition Termination: – When the loop terminates, j = r, so all elements in A are partitioned into one of the three cases: • A[p..i] ≤ pivot • A[i+1..j – 1] > pivot • A[r] = pivot The last two lines swap A[i+1] and A[r]. – Pivot moves from the end of the array to between the two subarrays. – Thus, procedure partition correctly performs the divide step.
  • Complexity of Partition
    PartitionTime(n) is given by the number of iterations in the for loop: Θ(n), where n = r – p + 1.
  • Algorithm Performance
    The running time of quicksort depends on whether the partitioning is balanced or not.
    Worst-case partitioning (unbalanced partitions):
    – Occurs when every call to Partition results in the most unbalanced partition.
    – Partition is most unbalanced when subproblem 1 is of size n – 1 and subproblem 2 is of size 0, or vice versa; that is, when the pivot ≥ every element in A[p..r – 1] or the pivot < every element in A[p..r – 1].
    – Every call to Partition is most unbalanced when the array A[1..n] is sorted or reverse sorted!
  • Worst-case Partition Analysis
    The recursion tree for worst-case partitions has levels of size n, n – 1, n – 2, ..., 2, 1. Summing the running time at each recursive level:
    T(n) = T(n – 1) + T(0) + PartitionTime(n)
         = T(n – 1) + Θ(n)
         = Σ k=1 to n Θ(k)
         = Θ(Σ k=1 to n k)
         = Θ(n²)
  • Best-case Partitioning
    – Size of each subproblem ≤ n/2: one of the subproblems is of size n/2, the other of size n/2 − 1.
    – Recurrence for the running time: T(n) ≤ 2T(n/2) + PartitionTime(n) = 2T(n/2) + Θ(n)
    – Therefore T(n) = Θ(n lg n)
  • Recursion Tree for Best-case Partition (diagram): the root costs cn, the next level has two nodes of cost cn/2, then four of cost cn/4, and so on down to the leaves of cost c; every level sums to cn, and with lg n levels the total is O(n lg n).
  • Conclusion
    – Divide and conquer is just one of several powerful techniques for algorithm design.
    – Divide-and-conquer algorithms can be analyzed using recurrences and the master method (so practice this math).
    – It can lead to more efficient algorithms.
  • Divide and Conquer (Merge Sort)
  • Divide and Conquer
    Recursive in structure:
    – Divide the problem into sub-problems that are similar to the original but smaller in size.
    – Conquer the sub-problems by solving them recursively. If they are small enough, just solve them in a straightforward manner.
    – Combine the solutions to create a solution to the original problem.
  • An Example: Merge Sort
    Sorting problem: sort a sequence of n elements into non-decreasing order.
    – Divide: divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
    – Conquer: sort the two subsequences recursively using merge sort.
    – Combine: merge the two sorted subsequences to produce the sorted answer.
  • Merge Sort Example (diagram): the original sequence 18 26 32 6 43 15 9 1 is repeatedly halved down to single elements, and the halves are merged back up (6 18 26 32 and 1 9 15 43, then the full sorted sequence 1 6 9 15 18 26 32 43).
  • Merge-Sort (A, p, r)
    INPUT: a sequence of n numbers stored in array A
    OUTPUT: an ordered sequence of n numbers

    MergeSort(A, p, r)   // sort A[p..r] by divide & conquer
    1  if p < r
    2    then q ← (p+r)/2
    3         MergeSort(A, p, q)
    4         MergeSort(A, q+1, r)
    5         Merge(A, p, q, r)   // merges A[p..q] with A[q+1..r]

    Initial call: MergeSort(A, 1, n)
  • Procedure Merge
    Merge(A, p, q, r)
    Input: array A containing sorted subarrays A[p..q] and A[q+1..r].
    Output: merged sorted subarray in A[p..r].

    1   n1 ← q – p + 1
    2   n2 ← r – q
    3   for i ← 1 to n1
    4     do L[i] ← A[p + i – 1]
    5   for j ← 1 to n2
    6     do R[j] ← A[q + j]
    7   L[n1 + 1] ← ∞        // sentinels, to avoid having to check if either
    8   R[n2 + 1] ← ∞        // subarray is fully copied at each step
    9   i ← 1
    10  j ← 1
    11  for k ← p to r
    12    do if L[i] ≤ R[j]
    13         then A[k] ← L[i]
    14              i ← i + 1
    15         else A[k] ← R[j]
    16              j ← j + 1
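A Python sketch of the two procedures (0-based indices, with math.inf playing the role of the ∞ sentinels; my own translation, not part of the slides):

    import math

    def merge(A, p, q, r):
        # Merge the sorted subarrays A[p..q] and A[q+1..r] (inclusive) in place.
        L = A[p:q + 1] + [math.inf]      # left run plus sentinel
        R = A[q + 1:r + 1] + [math.inf]  # right run plus sentinel
        i = j = 0
        for k in range(p, r + 1):
            # Thanks to the sentinels there is no need to test whether
            # either run has been fully copied.
            if L[i] <= R[j]:
                A[k] = L[i]
                i += 1
            else:
                A[k] = R[j]
                j += 1

    def merge_sort(A, p, r):
        # Sort A[p..r] in place by divide and conquer.
        if p < r:
            q = (p + r) // 2
            merge_sort(A, p, q)
            merge_sort(A, q + 1, r)
            merge(A, p, q, r)

    A = [18, 26, 32, 6, 43, 15, 9, 1]  # the sequence from the example slide
    merge_sort(A, 0, len(A) - 1)
    print(A)  # [1, 6, 9, 15, 18, 26, 32, 43]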
  • Merge Example (diagram): L = 6 8 26 32 ∞ and R = 1 9 42 43 ∞ are merged back into A[p..r]; k scans A while i and j track the smallest not-yet-copied elements of L and R.
  • Correctness of Merge
    Loop invariant for the for loop. At the start of each iteration:
    – Subarray A[p..k – 1] contains the k – p smallest elements of L and R in sorted order.
    – L[i] and R[j] are the smallest elements of L and R that have not been copied back into A.
    Initialization: before the first iteration,
    – A[p..k – 1] is empty.
    – i = j = 1.
    – L[1] and R[1] are the smallest elements of L and R not copied to A.
  • Correctness of Merge
    Maintenance:
    Case 1: L[i] ≤ R[j]
    – By the LI, A contains the k – p smallest elements of L and R in sorted order.
    – By the LI, L[i] and R[j] are the smallest elements of L and R not yet copied into A.
    – Line 13 results in A containing the k – p + 1 smallest elements (again in sorted order). Incrementing i and k reestablishes the LI for the next iteration.
    Similarly for L[i] > R[j].
    Termination:
    – On termination, k = r + 1.
    – By the LI, A contains the r – p + 1 smallest elements of L and R in sorted order.
    – L and R together contain r – p + 3 elements. All but the two sentinels have been copied back into A.
  • Analysis of Merge Sort
    Running time T(n) of merge sort:
    – Divide: computing the middle takes Θ(1)
    – Conquer: solving 2 subproblems takes 2T(n/2)
    – Combine: merging n elements takes Θ(n)
    Total:
    T(n) = Θ(1)              if n = 1
    T(n) = 2T(n/2) + Θ(n)    if n > 1
    ⇒ T(n) = Θ(n lg n)   (CLRS, Chapter 4)