
# Introduction to Algorithms

Algorithms complexity. Searching and sorting. Data structures.



1. Petar Petrov, 10.12.2015
2. Algorithm Analysis • Sorting • Searching • Data Structures
3. An algorithm is a set of instructions to be followed to solve a problem.
4. Correctness • Finiteness • Definiteness • Input • Output • Effectiveness
5. There are two aspects of algorithmic performance: time and space.
6. First, we count the number of basic operations in a particular solution to assess its efficiency. Then we express the efficiency of algorithms using growth functions.
7. We measure an algorithm's time requirement as a function of the problem size. The most important thing to learn is how quickly the algorithm's time requirement grows as a function of the problem size. An algorithm's proportional time requirement is known as its growth rate. We can compare the efficiency of two algorithms by comparing their growth rates.
8. Each operation in an algorithm (or a program) has a cost; each operation takes a constant amount of time. A sequence of operations such as `count = count + 1;` (cost c1) followed by `sum = sum + count;` (cost c2) has total cost c1 + c2.
9. Example: Simple If-Statement.

    ```java
    if (n < 0)        // cost c1, executed 1 time
        absval = -n;  // cost c2, 1 time
    else
        absval = n;   // cost c3, 1 time
    ```

    Total cost <= c1 + max(c2, c3).
10. Example: Simple Loop.

    ```java
    i = 1;             // cost c1, executed 1 time
    sum = 0;           // cost c2, 1 time
    while (i <= n) {   // cost c3, n+1 times
        i = i + 1;     // cost c4, n times
        sum = sum + i; // cost c5, n times
    }
    ```

    Total cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5. The time required for this algorithm is proportional to n.
11. Example: Nested Loop.

    ```java
    i = 1;                 // cost c1, executed 1 time
    sum = 0;               // cost c2, 1 time
    while (i <= n) {       // cost c3, n+1 times
        j = 1;             // cost c4, n times
        while (j <= n) {   // cost c5, n*(n+1) times
            sum = sum + i; // cost c6, n*n times
            j = j + 1;     // cost c7, n*n times
        }
        i = i + 1;         // cost c8, n times
    }
    ```

    Total cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n²*c6 + n²*c7 + n*c8. The time required for this algorithm is proportional to n².
12. Consecutive Statements • If/Else • Loops • Nested Loops
13. Informal definitions, given a complexity function f(n):
    ◦ O(f(n)) is the set of complexity functions that are upper bounds on f(n)
    ◦ Ω(f(n)) is the set of complexity functions that are lower bounds on f(n)
    ◦ Θ(f(n)) is the set of complexity functions that, given the correct constants, correctly describe f(n)
    Example: if f(n) = 17n³ + 4n - 12, then O(f(n)) contains n³, n⁴, n⁵, 2ⁿ, etc.; Ω(f(n)) contains 1, n, n², n³, log n, n log n, etc.; Θ(f(n)) contains n³.
14. Example: Simple If-Statement (slide 9): total cost <= c1 + max(c2, c3), i.e. O(1).
15. Example: Simple Loop (slide 10): total cost proportional to n, i.e. O(n).
16. Example: Nested Loop (slide 11): total cost proportional to n², i.e. O(n²).
17. Common growth rates:

    | Function | Name |
    | --- | --- |
    | c | Constant |
    | log N | Logarithmic |
    | log² N | Log-squared |
    | N | Linear |
    | N log N | Linearithmic |
    | N² | Quadratic |
    | N³ | Cubic |
    | 2^N | Exponential |
18. The sorting problem. Input: a sequence of n numbers a1, a2, ..., an. Output: a permutation (reordering) a1', a2', ..., an' of the input sequence such that a1' <= a2' <= ... <= an'.
19. In-place sort: the amount of extra space required to sort the data is constant, independent of the input size.
20. Stable sort: preserves the relative order of records with equal keys. Example: sort a file on a first key, then sort it on a second key; with an unstable sort, records with equal second keys (e.g. key value 3) may no longer be in order on the first key.
21. Idea: like sorting a hand of playing cards. Start with an empty left hand and the cards face down on the table. Remove one card at a time from the table and insert it into the correct position in the left hand. The cards held in the left hand are always sorted.
22. To insert 12, we need to make room for it by moving first 36 and then 24.
23. Insertion sort:

    ```java
    insertionsort(a) {
        for (i = 1; i < a.length; ++i) {
            key = a[i]
            pos = i
            while (pos > 0 && a[pos-1] > key) {
                a[pos] = a[pos-1]
                pos--
            }
            a[pos] = key
        }
    }
    ```
24. O(n²), stable, in-place • O(1) extra space • great for small numbers of elements
25. Selection sort algorithm: find the minimum value, swap it with the value in the 1st position, then repeat from the 2nd position down. O(n²), in-place; note that it is not stable in its usual array form, because the long-range swap can reorder equal keys.
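The selection-sort steps above can be sketched in Java as follows (a minimal sketch; the class and method names are mine, not from the slides):

```java
public class SelectionSort {
    // Repeatedly find the minimum of the unsorted suffix
    // and swap it into the next position.
    public static void sort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[min]) min = j;   // remember the smallest so far
            }
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp;  // swap into position i
        }
    }
}
```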
26. Bubble sort algorithm: traverse the collection and "bubble" the largest value to the end using pairwise comparisons and swapping. O(n²), stable, in-place. Totally useless?
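One answer to "totally useless?": with an early-exit flag, bubble sort finishes in O(n) on already-sorted input. A minimal sketch (names are mine, not from the slides):

```java
public class BubbleSort {
    public static void sort(int[] a) {
        for (int n = a.length; n > 1; n--) {
            boolean swapped = false;
            for (int i = 1; i < n; i++) {
                if (a[i - 1] > a[i]) {  // pairwise compare and swap
                    int t = a[i - 1]; a[i - 1] = a[i]; a[i] = t;
                    swapped = true;
                }
            }
            if (!swapped) break;  // no swaps: already sorted, stop early
        }
    }
}
```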
27. 1. Divide: split the array into two halves. 2. Conquer: sort both subarrays recursively. 3. Combine: merge the two sorted subarrays into one sorted array.
28. Merge sort:

    ```java
    mergesort(a, left, right) {
        if (left < right) {
            mid = (left + right) / 2
            mergesort(a, left, mid)
            mergesort(a, mid+1, right)
            merge(a, left, mid+1, right)
        }
    }
    ```
29. The key to merge sort is merging two sorted lists into one: given X = (x1 x2 ... xm) and Y = (y1 y2 ... yn), the result is Z = (z1 z2 ... zm+n). Example: L1 = {3 8 9}, L2 = {1 5 7}, merge(L1, L2) = {1 3 5 7 8 9}.
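The merge step can be sketched as follows (a minimal version merging two arrays into a new one; the slides' `merge` works in place on subranges, so treat this as illustrative):

```java
public class Merge {
    // Merge two sorted arrays into one sorted array:
    // repeatedly take the smaller head element, then copy leftovers.
    public static int[] merge(int[] x, int[] y) {
        int[] z = new int[x.length + y.length];
        int i = 0, j = 0, k = 0;
        while (i < x.length && j < y.length)
            z[k++] = (x[i] <= y[j]) ? x[i++] : y[j++];  // smaller head wins
        while (i < x.length) z[k++] = x[i++];  // leftovers from X
        while (j < y.length) z[k++] = y[j++];  // leftovers from Y
        return z;
    }
}
```

Using `<=` when the heads are equal is what makes merge sort stable.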
30. Merging X = {3, 10, 23, 54} and Y = {1, 5, 25, 75} one element at a time, the Result grows as {1}, {1 3}, {1 3 5}, {1 3 5 10}, {1 3 5 10 23}, {1 3 5 10 23 25}, {1 3 5 10 23 25 54}, and finally {1 3 5 10 23 25 54 75} (slides 30-38).
39. Merge sort on {99, 6, 86, 15, 58, 35, 86, 4, 0}: the array is split recursively into halves down to single elements, then merged back up in sorted order: {0 4}; then {6 99}, {15 86}, {35 58}, {0 4 86}; then {6 15 86 99}, {0 4 35 58 86}; and finally {0 4 6 15 35 58 86 86 99} (slides 39-48).
49. Merge sort runs in O(N log N) in all cases because of its divide-and-conquer approach: T(N) = 2T(N/2) + N = O(N log N).
50. 1. Select: pick an element x (the pivot). 2. Divide: rearrange the elements so that x goes to its final position, with L (elements less than x) on one side and G (elements greater than or equal to x) on the other. 3. Conquer: sort L and G recursively.
51. Quicksort:

    ```java
    quicksort(a, left, right) {
        if (left < right) {
            pivot = partition(a, left, right)
            quicksort(a, left, pivot-1)
            quicksort(a, pivot+1, right)
        }
    }
    ```
52. How do we pick a pivot? How do we partition?
53. Use the first element as pivot: fine if the input is random; if the input is presorted, shuffle in advance. Choose the pivot randomly: generally safe, but random number generation can be expensive.
54. Use the median of the array: partitioning then always cuts the array in half, giving an optimal quicksort (O(n log n)), but finding the exact median is hard (a chicken-and-egg problem), so we approximate it. Median of three: compare just three elements (the leftmost, the rightmost, and the center) and use the middle one of the three as the pivot.
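Median-of-three can be sketched like this (the helper name `medianIndex` is mine, not from the slides):

```java
public class MedianOfThree {
    // Return the index of the median of a[left], a[mid], a[right],
    // where mid is the center of the range.
    public static int medianIndex(int[] a, int left, int right) {
        int mid = (left + right) / 2;
        if (a[left] > a[mid]) {
            if (a[mid] > a[right]) return mid;           // right < mid < left
            return (a[left] > a[right]) ? right : left;  // mid is the smallest
        } else {
            if (a[left] > a[right]) return left;         // right < left <= mid
            return (a[mid] > a[right]) ? right : mid;    // left is the smallest
        }
    }
}
```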
55. Given a pivot, partition the elements of the array so that the resulting array consists of one subarray with elements < pivot and one subarray with elements >= pivot. The subarrays are stored in the original array.
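A sketch of this partition scheme in Java, following the slides' two-index scan with the pivot at the left end (class and variable names adapted; treat this as one possible implementation, not the canonical one):

```java
public class Partition {
    // Partition a[left..right] around pivot a[left];
    // returns the pivot's final index.
    public static int partition(int[] a, int left, int right) {
        int pivot = a[left];
        int tooBig = left + 1, tooSmall = right;
        while (true) {
            while (tooBig <= right && a[tooBig] <= pivot) tooBig++; // scan right
            while (a[tooSmall] > pivot) tooSmall--;                 // scan left
            if (tooBig >= tooSmall) break;                          // indices crossed
            int t = a[tooBig]; a[tooBig] = a[tooSmall]; a[tooSmall] = t;
        }
        a[left] = a[tooSmall];   // final swap places the pivot
        a[tooSmall] = pivot;
        return tooSmall;
    }
}
```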
56. Partition walkthrough on {40, 20, 10, 80, 60, 50, 7, 30, 100} with pivot_index = 0, i.e. pivot 40 (slides 56-80). The steps:

    1. while a[too_big_index] <= a[pivot_index], ++too_big_index
    2. while a[too_small_index] > a[pivot_index], --too_small_index
    3. if too_big_index < too_small_index, swap a[too_big_index] and a[too_small_index]
    4. while too_small_index > too_big_index, go to 1
    5. swap a[too_small_index] and a[pivot_index]

    The first pass swaps 80 and 30, giving {40 20 10 30 60 50 7 80 100}; the second swaps 60 and 7, giving {40 20 10 30 7 50 60 80 100}. The indices then cross, and the final swap places the pivot: {7 20 10 30 40 50 60 80 100}, with pivot_index = 4.
81. Running time: pivot selection is constant time, i.e. O(1); partitioning is linear time, i.e. O(N); plus the two recursive calls: T(N) = T(i) + T(N-i-1) + cN, where c is a constant and i is the number of elements in L.
82. What is the worst case? The pivot is always the smallest element, so the partition is always unbalanced.
83. What is the best case? The partition is perfectly balanced: the pivot is always in the middle (the median of the array).
84. The Java API provides a class Arrays with several overloaded sort methods for different array types. Class Collections provides similar sorting methods.
85. `Arrays` methods:

    ```java
    public static void sort(int[] a)
    public static void sort(Object[] a)  // requires Comparable
    public static <T> void sort(T[] a, Comparator<? super T> comp)  // uses given Comparator
    ```
86. `Collections` methods:

    ```java
    public static <T extends Comparable<T>> void sort(List<T> list)
    public static <T> void sort(List<T> l, Comparator<? super T> comp)
    ```
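A quick usage sketch of both overloads (the sample names are mine):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(List.of("Petar", "Ana", "Ivan"));
        Collections.sort(names);            // natural (Comparable) order
        System.out.println(names);          // [Ana, Ivan, Petar]
        // Sort with a given Comparator instead: by string length.
        names.sort(Comparator.comparing(String::length));
        System.out.println(names);
    }
}
```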
87. Given a collection and an element to find, determine whether the "target" element occurs in the collection: print a message, or return a value (an index or pointer, etc.). Don't modify the collection during the search!
88. A search traverses the collection until the desired element is found or the collection is exhausted.
89. Linear search:

    ```java
    linearsearch(a, key) {
        for (i = 0; i < a.length; i++) {
            if (a[i] == key) return i
        }
        return -1
    }
    ```
90. Linear search in {40, 20, 10, 30, 7} (slides 90-99). Searching for 20: 40 != 20, then 20 = 20, return index 1. Searching for 5: 40, 20, 10, 30, 7 are all != 5, return -1.
100. O(n): in the worst case it examines every item.
101. Binary search locates a target value in a sorted array/list by eliminating half of the remaining range at each step.
102. Binary search (note: `>>> 1` is an unsigned shift, which avoids overflow in the midpoint computation):

    ```java
    binarysearch(a, low, high, key) {
        while (low <= high) {
            mid = (low + high) >>> 1
            midVal = a[mid]
            if (midVal < key)
                low = mid + 1
            else if (midVal > key)
                high = mid - 1
            else
                return mid        // key found
        }
        return -(low + 1)         // not found: encodes the insertion point
    }
    ```
103. Binary search in the sorted array {1, 3, 4, 6, 7, 8, 10, 13, 14} (slides 103-118). Searching for 4: 4 < 7 (the middle element), so go left; 4 > 3, so go right; 4 = 4, found, return its index, 2. Searching for 9: 9 > 7, go right; 9 < 10, go left; 9 > 8; now right crosses left, so the key is absent and the search returns -7, i.e. -(insertion point + 1).
119. Requires a sorted array/list • O(log n) • divide and conquer
120. The Java collections hierarchy: interfaces Collection, List, Set (with SortedSet), Map (with SortedMap); classes LinkedList and ArrayList implement List, HashSet and TreeSet implement Set, HashMap and TreeMap implement Map.
121. Set: the familiar set abstraction; no duplicates; may or may not be ordered. List: an ordered collection, also known as a sequence; duplicates permitted; allows positional access. Map: a mapping from keys to values; each key can map to at most one value (a function).
123. Ordered: elements are stored and accessed in a specific order. Sorted: elements are stored and accessed in sorted order. Indexed: elements can be accessed using an index. Unique: the collection does not allow duplicates.
124. A linked list is a series of connected nodes. Each node contains at least a piece of data (any type) and a pointer to the next node in the list. Head is a pointer to the first node; the last node points to NULL.
125. (Diagrams, slides 125-127: inserting a new node D into the list A -> B -> C, and deleting a node by updating the Head/next pointers to bypass it.)
128. A doubly linked list node stores data plus next and previous pointers; the list keeps both a Head and a Tail pointer.
129. Linked list operation costs:

    | Operation | Complexity |
    | --- | --- |
    | insert at beginning | O(1) |
    | insert at end | O(1) |
    | insert at index | O(n) |
    | delete at beginning | O(1) |
    | delete at end | O(1) |
    | delete at index | O(n) |
    | find element | O(n) |
    | access element by index | O(n) |
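A minimal singly linked list sketch illustrating the O(1) head insert and the O(n) traversal (class and method names are mine, not from the slides):

```java
public class SinglyLinkedList {
    static class Node {
        int data; Node next;
        Node(int d, Node n) { data = d; next = n; }
    }
    private Node head;

    public void addFirst(int d) {       // O(1): new node becomes the head
        head = new Node(d, head);
    }

    public boolean contains(int d) {    // O(n): follow next pointers to NULL
        for (Node cur = head; cur != null; cur = cur.next)
            if (cur.data == d) return true;
        return false;
    }
}
```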
130. A resizable-array implementation of the List interface; note the distinction between capacity and size.
131. (Diagrams, slides 131-132: appending D while capacity > size fills a spare slot; appending E when capacity = size forces a reallocation to a larger array.)
133. ArrayList operation costs:

    | Operation | Complexity |
    | --- | --- |
    | insert at beginning | O(n) |
    | insert at end | O(1) amortized |
    | insert at index | O(n) |
    | delete at beginning | O(n) |
    | delete at end | O(1) |
    | delete at index | O(n) |
    | find element | O(n) |
    | access element by index | O(1) |
134. Some collections are constrained so clients can only use optimized operations. A stack retrieves elements in the reverse of the order they were added (push, pop, peek at the top); a queue retrieves elements in the same order they were added (add at the back; remove, peek at the front).
135. Stack: a collection based on the principle of adding elements and retrieving them in the opposite order. Basic stack operations: push (add an element to the top), pop (remove the top element), peek (examine the top element).
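In Java these operations map directly onto `Deque` used as a stack (a usage sketch; `ArrayDeque` is the usual choice over the legacy `Stack` class):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        stack.push(3);                     // 3 is now on top
        System.out.println(stack.peek());  // 3 (examine the top)
        System.out.println(stack.pop());   // 3 (remove the top)
        System.out.println(stack.pop());   // 2
    }
}
```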
136. Uses: programming languages and compilers (the method call stack); matching up related pairs of things (checking the correctness of brackets (){}[]); undo stacks.
137. Queue: retrieves elements in the order they were added. Basic queue operations: add/enqueue (add an element to the back), remove/dequeue (remove the front element), peek (examine the front element).
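The same operations on Java's `Queue` interface (a usage sketch; `ArrayDeque` again serves as the implementation):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueDemo {
    public static void main(String[] args) {
        Queue<Integer> q = new ArrayDeque<>();
        q.add(1);                       // enqueue at the back
        q.add(2);
        q.add(3);
        System.out.println(q.peek());   // 1 (examine the front)
        System.out.println(q.remove()); // 1 (dequeue the front)
        System.out.println(q.remove()); // 2
    }
}
```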
138. Uses: operating systems (the queue of print jobs to send to the printer); programming (modeling a line of customers or clients); real-world examples (people on an escalator or waiting in a line, cars at a gas station).
139. A map is a data structure optimized for a very specific kind of search/access: we ask "give me the value associated with this key" (e.g. "A" -> 65). Important tuning parameters: capacity and load factor.
140. (Diagram: a hash function maps record fields such as a name, phone numbers, an e-mail address, and a birth date to a small integer index, e.g. 12.)
141. HashMap implements Map; fast put and get operations; relies on hashCode() and equals().
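A usage sketch of put/get (the sample keys and values are mine; the "BG" -> "359" pair echoes the next slide):

```java
import java.util.HashMap;
import java.util.Map;

public class MapDemo {
    public static void main(String[] args) {
        Map<String, String> codes = new HashMap<>();
        codes.put("BG", "359");  // hashCode() picks the bucket,
        codes.put("DE", "49");   // equals() resolves the exact key
        System.out.println(codes.get("BG"));         // 359
        System.out.println(codes.containsKey("FR")); // false
    }
}
```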
142. Example: putting ("BG", "359") into a table with 6 buckets: "BG".hashCode() gives 2117, and 2117 % 6 = 5, so the entry goes into bucket 5.
143. What do we do when inserting an element and the slot is already occupied (a collision)?
144. Open addressing: search forward or backwards for an open space. Linear probing: move forward 1 spot; still occupied? 2 spots, 3 spots, and so on. Quadratic probing: the offsets grow quadratically: 1 spot, 4 spots, 9 spots, 16 spots. Resize when the load factor reaches some limit.
145. Separate chaining: each element of the hash table is another data structure, e.g. a LinkedList or a balanced binary tree. Resize at a given load factor or when any chain reaches some limit.
146. TreeMap implements Map; sorted; easy access to the biggest key; logarithmic put and get; requires Comparable keys or a Comparator.
147. A binary tree has 0, 1, or 2 children per node. Binary search tree invariant: node.left < node.value and node.right >= node.value.
148. A priority queue stores a collection of entries. Main methods of the Priority Queue ADT: insert(k, x) inserts an entry with key k and value x; removeMin() removes and returns the entry with the smallest key.
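Java's `java.util.PriorityQueue` provides these operations (`add` plays the role of insert, `poll` the role of removeMin; a usage sketch with sample numbers):

```java
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>(); // min-heap by default
        pq.add(15);
        pq.add(4);
        pq.add(9);
        System.out.println(pq.poll()); // 4: the smallest key comes out first
        System.out.println(pq.poll()); // 9
    }
}
```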
149. A heap can be seen as a complete binary tree: 16 at the root; 14 and 10 below it; then 8, 7, 9, 3; then 2, 4, 1 (slides 149-150).
151. In practice, heaps are usually implemented as arrays: the tree above becomes [16, 14, 10, 8, 7, 9, 3, 2, 4, 1].
152. To represent a complete binary tree as an array (1-based): the root node is A[1]; node i is A[i]; the parent of node i is A[i/2] (note: integer division); the left child of node i is A[2i]; the right child of node i is A[2i + 1].
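Java arrays are 0-based, so in practice the formulas shift by one: parent(i) = (i - 1) / 2, left(i) = 2i + 1, right(i) = 2i + 2. A small sketch of the index arithmetic (names are mine):

```java
public class HeapIndex {
    // 0-based variants of the slides' 1-based formulas.
    static int parent(int i) { return (i - 1) / 2; }
    static int left(int i)   { return 2 * i + 1; }
    static int right(int i)  { return 2 * i + 2; }
}
```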
153. (Animation, slides 153-161: starting from [16, 4, 10, 14, 7, 9, 3, 2, 8, 1], the out-of-place 4 sifts down, swapping first with 14 and then with 8, restoring the valid heap [16, 14, 10, 8, 7, 9, 3, 2, 4, 1].)
162. `java.util.Collections` operations (`java.util.Arrays` exports similar basic operations for an array):

    | Method | Description |
    | --- | --- |
    | binarySearch(list, key) | Finds key in a sorted list using binary search. |
    | sort(list) | Sorts a list into ascending order. |
    | min(list) | Returns the smallest value in a list. |
    | max(list) | Returns the largest value in a list. |
    | reverse(list) | Reverses the order of elements in a list. |
    | shuffle(list) | Randomly rearranges the elements in a list. |
    | swap(list, p1, p2) | Exchanges the elements at index positions p1 and p2. |
    | replaceAll(list, x1, x2) | Replaces all elements matching x1 with x2. |
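A few of these in action (a usage sketch with sample data):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CollectionsDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(List.of(5, 1, 4, 2));
        Collections.sort(list);                                // [1, 2, 4, 5]
        System.out.println(Collections.binarySearch(list, 4)); // 2
        System.out.println(Collections.min(list));             // 1
        System.out.println(Collections.max(list));             // 5
        Collections.reverse(list);
        System.out.println(list);                              // [5, 4, 2, 1]
    }
}
```

Note that `binarySearch` assumes the list is already sorted; on an unsorted list its result is undefined.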