The document discusses tournament trees and their applications. Tournament trees are binary trees used to model single-elimination tournaments. They have n external nodes representing players and n-1 internal nodes representing matches. Winner trees store the winner at each internal node, while loser trees store the loser. Tournament trees can be used to sort values in O(n log n) time by repeatedly extracting the winner and replacing it.
2. Winner Trees
Complete binary tree with n external nodes and n - 1 internal nodes.
External nodes represent tournament players.
Each internal node represents a match played between its two children; the winner of the match is stored at the internal node.
Root has overall winner.
26. Time To Sort
• Initialize winner tree. O(n) time
• Remove winner and replay. O(log n) time
• Remove winner and replay n times. O(n log n) time
• Total sort time is O(n log n).
• Actually Theta(n log n).
27. Winner Tree Operations
• Initialize: O(n) time
• Get winner: O(1) time
• Remove/replace winner and replay: O(log n) time, more precisely Theta(log n)
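The three operations can be sketched in an array-based form (class and method names are mine, not from the slides; padding with +inf handles player counts that are not a power of two):

```python
import math

class WinnerTree:
    """Array-based min winner tree. tree[1] is the root; the players
    (external nodes) occupy tree[m .. m+n-1], padded with +inf."""
    def __init__(self, players):
        n = len(players)
        m = 1
        while m < n:
            m *= 2
        self.m = m
        self.tree = [math.inf] * (2 * m)
        for i, p in enumerate(players):
            self.tree[m + i] = p
        # Initialize: one match at each internal node, bottom up -- O(n).
        for i in range(m - 1, 0, -1):
            self.tree[i] = min(self.tree[2 * i], self.tree[2 * i + 1])

    def winner(self):
        return self.tree[1]        # O(1)

    def replace_winner(self, value):
        # Follow the winner's path down to its leaf, replace the value,
        # then replay the matches back up to the root -- O(log n).
        i = 1
        while i < self.m:
            i = 2 * i if self.tree[2 * i] == self.tree[i] else 2 * i + 1
        self.tree[i] = value
        while i > 1:
            i //= 2
            self.tree[i] = min(self.tree[2 * i], self.tree[2 * i + 1])

def tournament_sort(values):
    """O(n log n) sort: extract the winner n times, retiring each with +inf."""
    t = WinnerTree(values)
    out = []
    for _ in range(len(values)):
        out.append(t.winner())
        t.replace_winner(math.inf)
    return out
```

Running `tournament_sort` on the 16 player values used in the slides reproduces them in sorted order.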
31. Replace Winner And Replay
[Figure: min winner tree, shown level by level from the root down to the players]
1
1 2
3 1 2 2
3 6 1 3 2 4 2 5
4 3 6 8 6 5 7 3 2 6 9 4 5 2 5 8
Opponent is player who lost last match played at this node.
32. Loser Tree
Each match node stores the match loser rather than the match winner.
33. Min Loser Tree For 16 Players
[Figure: min loser tree being initialized; the match nodes computed so far store losers 4, 3, 8; players are 4 3 6 8 1 5 7 3 2 6 9 4 5 2 5 8]
34. Min Loser Tree For 16 Players
[Figure: min loser tree after further matches; match nodes store losers 4, 6, 8, 3, 5, 1, 7; players are 4 3 6 8 1 5 7 3 2 6 9 4 5 2 5 8]
41. Complexity Of Loser Tree Initialize
• One match at each match node.
• One store of a left child winner.
• Total time is O(n).
• More precisely Theta(n).
43. Complexity Of Replay
• One match at each level that has a match node.
• O(log n)
• More precisely Theta(log n).
44. More Tournament Tree Applications
• k-way merging of runs during an external merge sort
• Truck loading
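The first application can be sketched with Python's standard library: heapq.merge plays the same role a min loser tree does in a k-way merge, repeatedly emitting the smallest of the current run heads (it uses a binary heap rather than a tournament tree, and the runs below are illustrative data, not from the slides):

```python
import heapq

# Three sorted runs, e.g. produced by the run-generation phase of an
# external merge sort (illustrative values).
runs = [
    [1, 4, 9],
    [2, 3, 8],
    [5, 6, 7],
]

# heapq.merge lazily yields the overall minimum of the run heads --
# the same job a min loser tree does with one replay per output.
merged = list(heapq.merge(*runs))
```

A loser tree needs only one comparison per level on each replay, which is why it is preferred over a winner tree for k-way merging in external sorts.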
45. Truck Loading
n packages to be loaded into trucks
each package has a weight
each truck has a capacity of c tons
minimize number of trucks
46. Truck Loading
n = 5 packages
weights [2, 5, 6, 3, 4]
truck capacity c = 10
Load packages from left to right. If a package doesn't fit into the current truck, start loading a new truck.
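The left-to-right rule just described (next fit, in bin-packing terms) can be sketched as follows (function name and list layout are mine):

```python
def load_trucks(weights, c):
    """Next-fit loading: scan packages left to right; start a new
    truck whenever the current package doesn't fit the current truck."""
    trucks = []
    load = c + 1          # force a new truck for the first package
    for w in weights:
        if load + w > c:  # doesn't fit: start loading a new truck
            trucks.append([])
            load = 0
        trucks[-1].append(w)
        load += w
    return trucks
```

On the slide's data, `load_trucks([2, 5, 6, 3, 4], 10)` loads three trucks: [2, 5], [6, 3], and [4].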
49. Bin Packing
• n items to be packed into bins
• each item has a size
• each bin has a capacity of c
• minimize number of bins
50. Bin Packing
Truck loading is same as bin packing.
Truck is a bin that is to be packed (loaded).
Package is an item/element.
Bin packing to minimize number of bins is NP-hard.
Several fast heuristics have been proposed.
51. Bin Packing Heuristics
• First Fit.
Bins are arranged in left to right order.
Items are packed one at a time in the given order.
The current item is packed into the leftmost bin into which it fits.
If there is no bin into which the current item fits, start a new bin.
52. First Fit
n = 4
weights = [4, 7, 3, 6]
capacity = 10
Pack red item (4) into first bin.
Pack blue item (7) next. Doesn't fit, so start a new bin.
Pack yellow item (3) into first bin.
Pack green item (6). Need a new bin.
Not optimal: 2 bins suffice.
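The walkthrough above can be checked with a small sketch of first fit (names are mine, not from the slides); on this data it indeed uses 3 bins where 2 suffice:

```python
def first_fit(weights, c):
    bins = []                 # each bin: [remaining capacity, contents]
    for w in weights:
        for b in bins:        # leftmost bin the item fits into
            if b[0] >= w:
                b[0] -= w
                b[1].append(w)
                break
        else:                 # no bin fits: start a new bin
            bins.append([c - w, [w]])
    return [contents for _, contents in bins]
```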
58. Bin Packing Heuristics
• First Fit Decreasing.
Items are sorted into decreasing order.
Then first fit is applied.
59. Bin Packing Heuristics
• Best Fit.
Items are packed one at a time in given order.
To determine the bin for an item, first
determine set S of bins into which the item fits.
If S is empty, then start a new bin and put item
into this new bin.
Otherwise, pack into bin of S that has least
available capacity.
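A minimal sketch of best fit as just described (helper names are mine); on the first-fit example [4, 7, 3, 6] with c = 10 it happens to reach the 2-bin optimum:

```python
def best_fit(weights, c):
    bins = []   # each bin: [remaining capacity, contents]
    for w in weights:
        S = [b for b in bins if b[0] >= w]      # bins the item fits into
        if not S:
            bins.append([c - w, [w]])           # S empty: start a new bin
        else:
            b = min(S, key=lambda b: b[0])      # least available capacity
            b[0] -= w
            b[1].append(w)
    return [contents for _, contents in bins]
```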
60. Bin Packing Heuristics
• Best Fit Decreasing.
Items are sorted into decreasing order.
Then best fit is applied.
61. Performance
• For first fit and best fit:
Heuristic Bins <= (17/10)(Minimum Bins) + 2
• For first fit decreasing and best fit decreasing:
Heuristic Bins <= (11/9)(Minimum Bins) + 4
62. Complexity Of First Fit
Use a max tournament tree in which the players are the n bins and the value of a player is the available capacity in the bin.
O(n log n), where n is the number of items.
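One way this bound can be realized, sketched under my own naming: keep a max winner tree over the remaining capacities of n candidate bins. Descending left-first to a leaf that still fits the item finds the leftmost such bin in O(log n), and replaying the maxima up the path restores the tree:

```python
def first_fit_count(weights, c):
    """First fit in O(n log n) via a max tournament tree over bin capacities."""
    n = len(weights)
    m = 1
    while m < n:
        m *= 2
    # tree[1] is the root; leaves tree[m .. m+n-1] are the candidate bins.
    tree = [0] * (2 * m)
    for i in range(n):
        tree[m + i] = c                 # each bin starts with full capacity
    for i in range(m - 1, 0, -1):
        tree[i] = max(tree[2 * i], tree[2 * i + 1])
    for w in weights:
        i = 1
        while i < m:                    # go left whenever some bin on the
            i = 2 * i if tree[2 * i] >= w else 2 * i + 1   # left still fits w
        tree[i] -= w                    # pack item into that leftmost bin
        while i > 1:                    # replay maxima up to the root
            i //= 2
            tree[i] = max(tree[2 * i], tree[2 * i + 1])
    return sum(1 for i in range(m, m + n) if tree[i] < c)   # bins used
```

Each item costs one root-to-leaf descent plus one replay, both O(log n), giving O(n log n) overall.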
Editor's Notes
Tennis ladder.
See text for case when number of players is not a power of 2.
Distribution center in Jacksonville sends packages to distribution center in Miami. Use the fewest number of trucks.