Quickselect Under Yaroslavskiy's Dual Pivoting Algorithm - Sebastian Wild
I gave this talk at the 24th International Meeting on Probabilistic, Combinatorial and Asymptotic Methods for the Analysis of Algorithms (AofA 2013) on Menorca (Spain).
A paper covering the analyses of this talk (and some more!) has been submitted.
Also, in the talk, I refer to the previous speaker at the conference, my advisor Markus Nebel - corresponding results can be found in an earlier talk of mine:
slideshare.net/sebawild/average-case-analysis-of-java-7s-dual-pivot-quicksort
Check my website for preprints of papers and my other talks:
wwwagak.cs.uni-kl.de/sebastian-wild.html
These are the slides of my talk at the Meeting on Analytic Algorithmics and Combinatorics 2015 (ANALCO15) on branch mispredictions in classic Quicksort and Yaroslavskiy's dual-pivot Quicksort used in Java 7.
The talk is based on joint work with Conrado Martínez and Markus E. Nebel.
Find more information and the corresponding paper on my website: http://wwwagak.cs.uni-kl.de/sebastian-wild.html
Engineering Java 7's Dual Pivot Quicksort Using MaLiJAn - Sebastian Wild
I gave this talk at the 2013 Meeting on Algorithm Engineering and Experiments (ALENEX).
Find my other talks and the corresponding papers on my web page:
http://wwwagak.cs.uni-kl.de/sebastian-wild.html
Average Case Analysis of Java 7’s Dual Pivot Quicksort - Sebastian Wild
I gave this talk at the European Symposium on Algorithms 2012 in Ljubljana (Slovenia).
The corresponding paper won the best paper award.
Find my other talks and all corresponding papers on my web page:
http://wwwagak.cs.uni-kl.de/sebastian-wild.html
The document describes dual-pivot quicksort, which uses two pivot elements rather than one. It summarizes previous research that found dual-pivot quicksort often improves upon classic quicksort by reducing the number of element comparisons and cache misses, though it increases the number of swaps. The document then focuses on Yaroslavskiy's dual-pivot partitioning algorithm, which efficiently arranges elements into three groups - less than the first pivot, between the pivots, and greater than the second pivot - through in-place swapping.
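The three-way partitioning step described above can be sketched as follows (a simplified Python rendition of Yaroslavskiy's scheme for illustration, not the exact Java 7 source; the index conventions are mine):

```python
def dual_pivot_partition(a, lo, hi):
    """One partitioning step of Yaroslavskiy's dual-pivot scheme (sketch).

    Uses a[lo] and a[hi] as pivots p <= q and rearranges a[lo..hi] into
    three groups: < p, between p and q, and > q, by in-place swapping.
    Returns the final positions of the two pivots."""
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]
    l = lo + 1          # a[lo+1 .. l-1] holds elements < p
    g = hi - 1          # a[g+1 .. hi-1] holds elements > q
    k = l               # a[l .. k-1] holds elements between p and q
    while k <= g:
        if a[k] < p:
            a[k], a[l] = a[l], a[k]
            l += 1
        elif a[k] > q:
            # skip elements on the right that are already in the > q group
            while a[g] > q and k < g:
                g -= 1
            a[k], a[g] = a[g], a[k]
            g -= 1
            if a[k] < p:
                a[k], a[l] = a[l], a[k]
                l += 1
        k += 1
    l -= 1
    g += 1
    a[lo], a[l] = a[l], a[lo]   # move the pivots into their final places
    a[hi], a[g] = a[g], a[hi]
    return l, g
```

After one call, the two pivots sit at the returned indices, and quicksort would recurse on the three groups between them.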
This document discusses using particle swarm optimization based on variable neighborhood search (PSO-VNS) to attack classical cryptography ciphers. PSO is a population-based optimization algorithm inspired by bird flocking behavior. VNS is a metaheuristic algorithm that explores neighborhoods of solutions to escape local optima. The paper proposes improving PSO with VNS to find better solutions. It evaluates PSO-VNS on substitution and transposition ciphers, finding it recovers keys better than standard PSO and other variants.
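As background, the particle update that standard PSO performs can be sketched as follows (a minimal textbook PSO for maximization, not the paper's PSO-VNS hybrid; the parameter values w, c1, c2 are illustrative defaults, and the function name is mine):

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization sketch (maximizes fitness)."""
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]         # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity = inertia + pull toward personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

The VNS refinement discussed in the paper would replace or augment the plain position update with systematic neighborhood moves around promising particles.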
The document provides a literature review on heuristic based multiobjective optimization problems in crisp and fuzzy environments. It summarizes 15 research papers on topics related to multiobjective optimization using techniques like particle swarm optimization, ant colony optimization, cuckoo search, and simulated annealing. The papers are summarized in a table that lists the title, authors, journal, volume, year, and pages of each paper. The literature review explores multiobjective optimization applications in areas like assembly line balancing, estimating nadir points, task scheduling in cloud computing, and engineering design problems.
A survey paper on sequence pattern mining with incremental - Alexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
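The prefix-projection idea attributed to PrefixSpan above can be sketched as follows (a simplified version for sequences of single items; the function name and representation are mine, not the paper's):

```python
def prefixspan(sequences, min_support, prefix=()):
    """PrefixSpan sketch: grow frequent sequential patterns by recursively
    projecting the database on each frequent item (no candidate generation)."""
    results = []
    # count, per item, how many sequences contain it at least once
    counts = {}
    for seq in sequences:
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    for item, sup in sorted(counts.items()):
        if sup < min_support:
            continue
        pattern = prefix + (item,)
        results.append((pattern, sup))
        # project: keep the suffix after the first occurrence of the item
        projected = [seq[seq.index(item) + 1:] for seq in sequences if item in seq]
        results.extend(prefixspan(projected, min_support, pattern))
    return results
```

Each recursion level extends the current prefix by one frequent item and works only on the (shrinking) projected database, which is the source of PrefixSpan's efficiency claim.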
A survey paper on sequence pattern mining with incremental - Alexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that takes into account time constraints and taxonomies. ISM extends SPADE to incrementally update the frequent pattern set when new data is added. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes that most previous studies used GSP or PrefixSpan and that future work could focus on improving time efficiency of sequential pattern mining.
Constructing Operating Systems and E-Commerce - IJARIIT
Information retrieval systems and the partition table, while essential in theory, have not until recently been considered important [15]. In fact, few theorists would disagree with the deployment of massive multiplayer online role-playing games, which embodies the robust principles of complexity theory. In this work we investigate how Smalltalk can be applied to the synthesis of lambda calculus.
The Smart Way to Invest in Artificial Intelligence and Machine Learning: Lisha Li, Amplify Partners
AI and ML are seeping into every startup, or at least into every pitch deck. But what does it mean to build an AI/ML company? Some startups do require a closet filled with five PhDs in data science, but that doesn't necessarily mean yours does. Building intelligently with AI and ML.
A minimization approach for two level logic synthesis using constrained depth... - IAEME Publication
This document summarizes a research paper that proposes a new approach for two-level logic synthesis called constrained depth-first search. It generates all prime implicants of a Boolean function efficiently using cube absorption and a position cube notation representation. It then formulates the problem of finding an optimal sum-of-products cover as a combinatorial optimization problem. The key contribution is a constrained depth-first search of a virtual prime implicant decision tree that finds the minimum cover. Numerical results on benchmark functions show it produces results as good as or better than state-of-the-art logic synthesis tools, often finding that the greedy solution is already optimal or near-optimal.
Practicing with unsupervised machine learning techniques I scraped data from Denver's "For Free" Craigslist category to categorize and better understand products available.
A Brief Overview On Frequent Pattern Mining Algorithms - Sara Alvarez
This document provides an overview of frequent pattern mining algorithms. It discusses that frequent pattern mining finds inherent regularities in data and plays an essential role in data mining tasks. The document then describes several sequential pattern mining algorithms such as AIS, SETM, Apriori and some of its variations that improve efficiency. It also discusses parallel pattern mining algorithms and some challenges in the field of frequent pattern mining.
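The level-wise candidate generation that Apriori and its variants share can be sketched as follows (a minimal textbook Apriori using absolute support counts; not any specific paper's pseudocode):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic Apriori sketch: level-wise frequent itemset mining."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    # L1: frequent single items
    current = [frozenset([i]) for i in sorted(items)
               if sum(i in t for t in transactions) >= min_support]
    frequent = list(current)
    k = 2
    while current:
        # candidate generation: join frequent (k-1)-itemsets
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # Apriori pruning: every (k-1)-subset of a candidate must be frequent
        prev = set(current)
        candidates = {c for c in candidates
                      if all(frozenset(s) in prev for s in combinations(c, k - 1))}
        # support counting over the whole database (the step the
        # improved variants try to avoid repeating)
        current = [c for c in candidates
                   if sum(c <= t for t in transactions) >= min_support]
        frequent.extend(current)
        k += 1
    return frequent
```

The repeated full-database scan in the support-counting step is exactly the cost that the variants and parallel algorithms surveyed here aim to reduce.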
Atomate: a tool for rapid high-throughput computing and materials discovery - Anubhav Jain
Atomate is a tool for automating materials simulations and high-throughput computations. It provides predefined workflows for common calculations like band structures, elastic tensors, and Raman spectra. Users can customize workflows and simulation parameters. FireWorks executes workflows on supercomputers and detects/recovers from failures. Data is stored in databases for analysis with tools like pymatgen. The goal is to make simulations easy and scalable by automating tedious steps and leveraging past work.
This document proposes a new application called EtheSpinet to address obstacles in interactive epistemologies. It presents two main contributions: 1) validating that the Internet and RAID can synchronize to accomplish a purpose, and 2) proving multicast applications and write-ahead logging are largely incompatible. The paper outlines EtheSpinet's implementation and results from experiments comparing its performance to other systems. In conclusion, it states that EtheSpinet will successfully cache many linked lists at once and help analysts evaluate the producer-consumer problem more extensively.
The document discusses optimizing the performance of Word2Vec on multicore systems through a technique called Context Combining. Some key points:
- Context Combining improves Word2Vec training efficiency by combining related windows that share samples, improving floating point throughput and reducing overhead.
- Experiments on Intel and Intel Knights Landing processors show Context Combining (pSGNScc) achieves up to 1.28x speedup over prior work (pWord2Vec) and maintains comparable accuracy to state-of-the-art implementations.
- Parallel scaling tests show pSGNScc achieves near linear speedup up to 68 cores, utilizing more of the available computational resources than previous techniques.
- Future
Hadoop Map-Reduce To Generate Frequent Item Set on Large Datasets Using Impro... - BRNSSPublicationHubI
This document presents an improved Apriori algorithm for generating frequent item sets on large datasets using Hadoop MapReduce. The classical Apriori algorithm suffers from repeated database scans, high candidate generation costs, and memory issues. The proposed improved Apriori algorithm aims to address these issues by leveraging Hadoop MapReduce to parallelize the processing and reduce unnecessary database scans. It presents the pseudocode for the classical and improved algorithms. The improved algorithm is evaluated to show it provides better performance than the classical Apriori algorithm in terms of time and number of iterations required.
Topic detection by clustering and text mining - IRJET Journal
This document discusses topic detection from text documents using text mining and clustering techniques. It proposes extracting keywords from documents, representing topics as groups of keywords, and using k-means clustering on the keywords to group them into topics. The keywords are extracted based on frequency counts and preprocessed by removing stop words and stemming. The k-means clustering algorithm is used to assign keywords to topics represented by cluster centroids, and the centroids are iteratively updated until cluster assignments converge.
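The clustering step can be sketched with plain k-means (a generic implementation on numeric feature vectors; in the paper's setting the points would be keyword frequency vectors, and the function name is mine):

```python
import random

def kmeans(points, k, iters=50):
    """Plain k-means sketch: assign each point to its nearest centroid,
    then recompute centroids as cluster means, until assignments converge."""
    centroids = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # nearest centroid by squared Euclidean distance
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        new = []
        for j, cl in enumerate(clusters):
            if cl:
                new.append(tuple(sum(xs) / len(cl) for xs in zip(*cl)))
            else:
                new.append(centroids[j])   # keep empty cluster's centroid
        if new == centroids:
            break                          # assignments converged
        centroids = new
    return centroids, clusters
```

Each centroid then represents one topic, and the keywords assigned to it form the topic's word group.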
Scalable Rough C-Means clustering using Firefly algorithm
Abhilash Namdev and B.K. Tripathy
Significance of Embedded Systems to IoT
P. R. S. M. Lakshmi, P. Lakshmi Narayanamma and K. Santhi Sri
Cognitive Abilities, Information Literacy Knowledge and Retrieval Skills of Undergraduates: A Comparison of Public and Private Universities in Nigeria
Janet O. Adekannbi and Testimony Morenike Oluwayinka
Risk Assessment in Constructing Horseshoe Vault Tunnels using Fuzzy Technique
Erfan Shafaghat and Mostafa Yousefi Rad
Evaluating the Adoption of Deductive Database Technology in Augmenting Criminal Intelligence in Zimbabwe: Case of Zimbabwe Republic Police
Mahlangu Gilbert, Furusa Samuel Simbarashe, Chikonye Musafare and Mugoniwa Beauty
Analysis of Petrol Pumps Reachability in Anand District of Gujarat
Nidhi Arora
Introduction and Applications of Discrete Mathematics - blaircomp2003
This document provides an introduction to the course "Introduction to Discrete Mathematics". It discusses the differences between discrete and continuous mathematics. Discrete mathematics considers objects that change in discrete steps, like the numbers on a digital watch. Continuous mathematics looks at objects that vary smoothly over time. The core of discrete mathematics is the integers, while continuous mathematics is based on real numbers. Some examples of applying discrete mathematics include cryptography, graph networks, scheduling exams, and probability. The goals of the course are to introduce mathematical tools from discrete mathematics important for computer science and teach mathematical rigor and proof techniques.
Implementation of Improved Apriori Algorithm on Large Dataset using Hadoop - BRNSSPublicationHubI
This document describes research on improving the Apriori algorithm for association rule mining on large datasets using Hadoop. The researchers implemented an improved Apriori algorithm that uses MapReduce on Hadoop to reduce the number of database scans needed. They tested the proposed algorithm on various datasets and found it had faster execution times and used less memory compared to the traditional Apriori algorithm.
Data mining has been a very popular research topic over the years. Sequential pattern mining, or sequential rule mining, is a very useful application of data mining for prediction purposes. In this paper, we present a review of sequential rule and sequential pattern mining. The advantages and drawbacks of each popular sequential mining method are discussed in brief.
Machine Reading Using Neural Machines (talk at Microsoft Research Faculty Sum... - Isabelle Augenstein
The document discusses machine reading using neural machines. It presents goals of fact checking claims and understanding scientific publications. It outlines challenges in tasks like stance detection on tweets and summarizing scientific papers. These include interpreting statements based on the target or headline, handling unseen targets, and the small size of benchmark datasets which makes neural machine reading computationally costly.
This document summarizes a presentation on optimizing application architecture. It discusses various data structures and algorithms like quicksort. It also discusses serialization protocols like Protocol Buffers and compares their performance. Other topics covered include immutable collections, concurrent collections, avoiding locks, and considering functional programming and reactive extensions. The presentation emphasizes principles like separation of concerns, writing tests, and avoiding premature optimization. It encourages thinking outside of object-oriented patterns and exploring new developments in distributed computing.
Succinct Data Structures for Range Minimum Problems - Sebastian Wild
This was an invited talk I gave at Purdue University. It introduces some concepts and techniques of succinct data structures along the example of the range-minimum query problem, and presents my new, average-case space optimal solution.
Entropy Trees & Range-Minimum Queries in Optimal Average-Case Space - Sebastian Wild
I gave this talk at Dagstuhl Seminar 19051 on Data Structures for the Cloud and External Memory Data (https://www.dagstuhl.de/en/program/calendar/semhp/?semnr=19051).
Sesquickselect: One and a half pivots for cache-efficient selection - Sebastian Wild
These are the slides for my ANALCO talk about sesquickselect, a novel quickselect variant. The paper and further details are here: https://www.wild-inter.net/publications/martinez-nebel-wild-2019
Average cost of QuickXsort with pivot sampling - Sebastian Wild
The document discusses QuickXsort, a variant of Quicksort that uses a sorting algorithm X to sort one subproblem during recursion. With Mergesort as X, QuickXsort achieves near-optimal comparison counts while sorting in place. Merging without extra space is possible by swapping elements between the runs and a buffer area inside the array.
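The buffer-based merge can be sketched as follows (a simplified rendition of a QuickMergesort-style merge, assuming the buffer region is disjoint from the runs and at least as large as the first run; names and layout are mine):

```python
def swap_merge(a, buf, lo, mid, hi):
    """Merge the sorted runs a[lo:mid] and a[mid:hi] inside the array,
    using the (unsorted) elements a[buf:buf+(mid-lo)] as a swap buffer.
    No element is ever dropped: every move is a swap."""
    m = mid - lo
    # swap the first run into the buffer region
    for k in range(m):
        a[buf + k], a[lo + k] = a[lo + k], a[buf + k]
    i, j, out = buf, mid, lo
    while i < buf + m and j < hi:
        if a[i] <= a[j]:
            a[out], a[i] = a[i], a[out]   # take from buffered run
            i += 1
        else:
            a[out], a[j] = a[j], a[out]   # take from second run
            j += 1
        out += 1
    while i < buf + m:                    # drain remaining buffered elements
        a[out], a[i] = a[i], a[out]
        i += 1
        out += 1
```

Because every step is a swap, the buffer's original (unsorted) elements survive the merge and end up in the vacated slots, which is what lets QuickXsort reuse part of the array as workspace.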
Nearly-optimal mergesort: Fast, practical sorting methods that optimally adap... - Sebastian Wild
Mergesort can make use of existing order in the input by picking up existing runs, i.e., sorted segments. Since the lengths of these runs can be arbitrary, simply merging them as they arrive can be wasteful: merging can degenerate to inserting a single element into a long run.
In this talk, I show that we can find an optimal merging order (up to lower order terms of costs) with negligible overhead and thereby get the same worst-case guarantee as for standard mergesort (up to lower order terms), while exploiting existing runs if present. I present two new mergesort variants, peeksort and powersort, that are simple, stable, optimally adaptive and fast in practice (never slower than standard mergesort and Timsort, but significantly faster on certain inputs).
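Run detection, the first step of such adaptive mergesorts, can be sketched as follows (a generic sketch in the spirit of Timsort-style run detection, not the exact peeksort or powersort code):

```python
def find_runs(a):
    """Detect maximal sorted runs: weakly increasing runs are kept,
    strictly decreasing runs are reversed in place so every run ascends.
    Returns the runs as half-open index ranges (start, end)."""
    runs = []
    i, n = 0, len(a)
    while i < n:
        j = i + 1
        if j < n and a[j] < a[i]:            # strictly decreasing run
            while j < n and a[j] < a[j - 1]:
                j += 1
            a[i:j] = a[i:j][::-1]            # reverse so the run ascends
        else:                                # weakly increasing run
            while j < n and a[j] >= a[j - 1]:
                j += 1
        runs.append((i, j))
        i = j
    return runs
```

Peeksort and powersort then differ from standard mergesort only in the order in which these runs are merged, which is where the near-optimal adaptivity comes from.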
This talk was given at ESA 2018 and is based on joint work with Ian Munro. The paper and further information can be found on my website:
https://www.wild-inter.net/publications/munro-wild-2018
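The run-detection step that both peeksort and powersort build on can be sketched like this (the helper name is my own; decreasing runs are reversed in place, as in Timsort-style mergesorts):

```python
def next_run_end(a, i):
    """Return j such that a[i:j] is a maximal run starting at i:
    either weakly increasing, or strictly decreasing, in which case
    it is reversed in place so every detected run ends up ascending."""
    n, j = len(a), i + 1
    if j == n:
        return j
    if a[j - 1] <= a[j]:                     # weakly increasing run
        while j < n and a[j - 1] <= a[j]:
            j += 1
    else:                                    # strictly decreasing run
        while j < n and a[j - 1] > a[j]:
            j += 1
        a[i:j] = a[i:j][::-1]                # make it ascending
    return j
```

Only strictly decreasing runs are reversed, so the reversal is stable: equal elements never change their relative order.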
The document describes the process of quicksort and building a binary search tree on the same data. It shows quicksort sorting an array from 7 4 2 9 1 3 8 5 6 to its sorted order, and building a corresponding binary search tree from the sorted array. It notes that the recursion tree of quicksort is equivalent to the built binary search tree, and that the number of comparisons in quicksort equals building and searching the tree. It questions how median-of-three quicksort and other variants relate to building fringe-balanced search trees.
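The quicksort/BST correspondence can be checked directly in code (a small sketch, not from the slides): counting comparisons of a quicksort that always picks the first element as pivot, and comparisons made while inserting the same sequence into an unbalanced binary search tree, gives identical totals on distinct keys.

```python
def quicksort_comparisons(xs):
    """Comparisons used by quicksort with the first element as pivot
    (n-1 comparisons per partitioning step; distinct keys assumed)."""
    if len(xs) <= 1:
        return 0
    p = xs[0]
    small = [x for x in xs[1:] if x < p]
    large = [x for x in xs[1:] if x > p]
    return len(xs) - 1 + quicksort_comparisons(small) + quicksort_comparisons(large)

def bst_build_comparisons(xs):
    """Comparisons to insert xs left to right into an unbalanced BST:
    each insertion costs one comparison per node on its search path."""
    root, count = None, 0
    for x in xs:
        node, parent = root, None
        while node is not None:
            count += 1
            parent = node
            node = node['l'] if x < node['key'] else node['r']
        new = {'key': x, 'l': None, 'r': None}
        if parent is None:
            root = new
        elif x < parent['key']:
            parent['l'] = new
        else:
            parent['r'] = new
    return count

xs = [7, 4, 2, 9, 1, 3, 8, 5, 6]            # the input from the slides
assert quicksort_comparisons(xs) == bst_build_comparisons(xs)
```

Both counts equal the internal path length of the tree: a pair of keys is compared exactly when one of them is the first, in input order, among all keys lying between them.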
Quicksort with Equal Keys
1. Quicksort with Equal Keys
Sebastian Wild
wild@cs.uni-kl.de
based on joint work with
Martin Aumüller, Martin Dietzfelbinger,
Conrado Martínez, and Markus Nebel
Data Structures
and Advanced Models of
Computation on Big Data
Dagstuhl Seminar 16101
Sebastian Wild Quicksort with Equal Keys 2016-03-07 1 / 12
2. Motivation
Practical Dimension:
solve togetherness problem
e.g. SQL’s GROUP BY clause
preprocessing to remove dups
Theoretical Dimension:
many equal keys mean lower entropy
We should be able to sort faster!
Academic Dimension:
Open Problem of Sedgewick & Bentley:
Is Quicksort with median-of-k
entropy-optimal for k → ∞?
Personal Dimension:
This is my tenth talk on Quicksort ...
Equal Keys on my wishlist since 2012
Why Equal Keys?
Sebastian Wild Quicksort with Equal Keys 2016-03-07 2 / 12
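The entropy point on this slide can be made concrete: any comparison-based sort of a multiset with multiplicities x_1, ..., x_u (n keys total) needs at least log2( n! / (x_1! ... x_u!) ) ≈ n·H comparisons in the worst case. A small sketch (the helper name is my own):

```python
from math import lgamma, log

def multiset_lower_bound(mults):
    """log2( n! / (x1! * ... * xu!) ): the information-theoretic lower bound
    on worst-case comparisons for sorting a multiset with these multiplicities.
    With many duplicates this drops far below log2(n!) ~ n log2 n."""
    n = sum(mults)
    log2_fact = lambda m: lgamma(m + 1) / log(2)   # log2(m!)
    return log2_fact(n) - sum(log2_fact(x) for x in mults)
```

For example, 1024 keys with only 4 distinct values admit a bound of roughly n·H = 1024·2 comparisons, far below the roughly 8769 needed for 1024 distinct keys.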
7. Previous Work
Rather little is known!
Sedgewick 1977: Quicksort with Equal Keys
Sedgewick & Bentley 2002: Quicksort is Optimal (Talk at Knuthfest)
A bit more on BSTs:
Burge 1976: An Analysis of BSTs Formed from Sequences of Nondistinct Keys
Kemp 1996: BSTs constructed from nondistinct keys with/without specified probabilities
Archibald & Clément 2006: Average depth in a BST with repeated keys
This is basically all literature on analysis of Quicksort with equal keys!
Sebastian Wild Quicksort with Equal Keys 2016-03-07 3 / 12
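Sedgewick's 1977 finding, that both scanning pointers should stop on keys equal to the pivot, looks as follows in a Hoare-style crossing-pointer partition (a sketch in Python, not code from the talk):

```python
def hoare_partition(a, lo, hi):
    """Partition a[lo:hi] around the pivot a[lo]; return the pivot's final index.
    Both scans stop on keys equal to the pivot -- the strategy Sedgewick (1977)
    shows performs best on average when duplicates are present."""
    p = a[lo]
    i, j = lo, hi
    while True:
        i += 1
        while i < hi and a[i] < p:   # stops on a[i] >= p, i.e. also on equals
            i += 1
        j -= 1
        while a[j] > p:              # stops on a[j] <= p; a[lo] == p guards j
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[lo], a[j] = a[j], a[lo]        # move the pivot into place
    return j
```

On an all-equal input the pointers meet near the middle, so the recursion depth stays logarithmic; scans that skip over equal keys instead produce maximally unbalanced splits and quadratic cost.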
8. Previous Work
Rather little is known!
Sedgewick 1977: Quicksort on Equal Keys
Sedgewick & Bentley 2002: Quicksort is Optimal (Talk at Knuthfest)
A bit more on BSTs:
Burge 1976: An Analysis of BSTs Formed from Sequences of Nondistinct Keys
Kemp 1996: BSTs constructed from nondistinct keys with/without specified probabilities
Archibald & Clément 2006: Average depth in a BST with repeated keys
This is basically all literature on analysis of Quicksort with equal keys!
SIAM J. COMPUT.
Vol. 6, No. 2, June 1977
QUICKSORT WITH EQUAL KEYS*
ROBERT SEDGEWICK?
Abstract. This paper considers the problem of implementing and analyzing a Quicksort program
when equal keys are likely to be present in the file to be sorted. Upper and lower bounds are derived on
the average number of comparisons needed by any Quicksort programwhen equal keys are present. It
is shown that, of the three strategies which have been suggested for dealing with equal keys, the
method of always stopping the scanning pointers on keys equal to the partitioning element performs
best.
Key words, analysis of algorithms, equal keys, Quicksort, sorting
Introduction. The Quicksort algorithm, which was introduced by C. A. R.
Hoare in 1960 [6], [7], has gained wide acceptance as the most efficient
general-purpose sorting method suitable for use on computers. The algorithm has
a rich history: many modifications have been suggested to improve its perfor-
mance, and exact formulas have been derived describing the time and spaceSebastian Wild Quicksort with Equal Keys 2016-03-07 3 / 12
9. Previous Work
Rather little is known!
Sedgewick 1977: Quicksort on Equal Keys
Sedgewick & Bentley 2002: Quicksort is Optimal (Talk at Knuthfest)
A bit more on BSTs:
Burge 1976: An Analysis of BSTs Formed from Sequences of Nondistinct Keys
Kemp 1996: BSTs constructed from nondistinct keys with/without specified probabilities
Archibald & Clément 2006: Average depth in a BST with repeated keys
This is basically all literature on analysis of Quicksort with equal keys!
SIAM J. COMPUT.
Vol. 6, No. 2, June 1977
QUICKSORT WITH EQUAL KEYS*
ROBERT SEDGEWICK?
Abstract. This paper considers the problem of implementing and analyzing a Quicksort program
when equal keys are likely to be present in the file to be sorted. Upper and lower bounds are derived on
the average number of comparisons needed by any Quicksort programwhen equal keys are present. It
is shown that, of the three strategies which have been suggested for dealing with equal keys, the
method of always stopping the scanning pointers on keys equal to the partitioning element performs
best.
Key words, analysis of algorithms, equal keys, Quicksort, sorting
Introduction. The Quicksort algorithm, which was introduced by C. A. R.
Hoare in 1960 [6], [7], has gained wide acceptance as the most efficient
general-purpose sorting method suitable for use on computers. The algorithm has
a rich history: many modifications have been suggested to improve its perfor-
mance, and exact formulas have been derived describing the time and space
Now is the time
For all good men
To come to the aid
Of their party
QUICKSORT IS OPTIMAL
Robert Sedgewick
Jon Bentley
Sebastian Wild Quicksort with Equal Keys 2016-03-07 3 / 12
10. Previous Work
Rather little is known!
Sedgewick 1977: Quicksort on Equal Keys
Sedgewick & Bentley 2002: Quicksort is Optimal (Talk at Knuthfest)
A bit more on BSTs:
Burge 1976: An Analysis of BSTs Formed from Sequences of Nondistinct Keys
Kemp 1996: BSTs constructed from nondistinct keys with/without specified probabilities
Archibald & Clément 2006: Average depth in a BST with repeated keys
This is basically all literature on analysis of Quicksort with equal keys!
SIAM J. COMPUT.
Vol. 6, No. 2, June 1977
QUICKSORT WITH EQUAL KEYS*
ROBERT SEDGEWICK?
Abstract. This paper considers the problem of implementing and analyzing a Quicksort program
when equal keys are likely to be present in the file to be sorted. Upper and lower bounds are derived on
the average number of comparisons needed by any Quicksort programwhen equal keys are present. It
is shown that, of the three strategies which have been suggested for dealing with equal keys, the
method of always stopping the scanning pointers on keys equal to the partitioning element performs
best.
Key words, analysis of algorithms, equal keys, Quicksort, sorting
Introduction. The Quicksort algorithm, which was introduced by C. A. R.
Hoare in 1960 [6], [7], has gained wide acceptance as the most efficient
general-purpose sorting method suitable for use on computers. The algorithm has
a rich history: many modifications have been suggested to improve its perfor-
mance, and exact formulas have been derived describing the time and space
Now is the time
For all good men
To come to the aid
Of their party
QUICKSORT IS OPTIMAL
Robert Sedgewick
Jon Bentley
An Analysis of Binary Search Trees Formed from
Sequences of Nondistinct Keys
WILLIAM H. BURGE
IBM Thomas J. Watson Research Center, Yorktown Heights, New York
ABSTRACT. The expected depth of each key in the set of binary search trees formed from all sequences
composed from a multmet {pl • 1, p2 ' 2, p3 • 3, ..., p~ • n} is obtained, and hence the expected
weight of such trees. The expected number of left-to-right local minima and the expected number of
cycles in sequences composed from a multiset are then deduced from these results.
KEY WORDS AND PHRASES : binary search trees, multiset
CR CATEGORIES: 5.3
Introduction
The expected depth of the number r in the set of binary search trees formed from allSebastian Wild Quicksort with Equal Keys 2016-03-07 3 / 12
11. Previous Work
Rather little is known!
Sedgewick 1977: Quicksort on Equal Keys
Sedgewick & Bentley 2002: Quicksort is Optimal (Talk at Knuthfest)
A bit more on BSTs:
Burge 1976: An Analysis of BSTs Formed from Sequences of Nondistinct Keys
Kemp 1996: BSTs constructed from nondistinct keys with/without specified probabilities
Archibald & Clément 2006: Average depth in a BST with repeated keys
This is basically all literature on analysis of Quicksort with equal keys!
SIAM J. COMPUT.
Vol. 6, No. 2, June 1977
QUICKSORT WITH EQUAL KEYS*
ROBERT SEDGEWICK?
Abstract. This paper considers the problem of implementing and analyzing a Quicksort program
when equal keys are likely to be present in the file to be sorted. Upper and lower bounds are derived on
the average number of comparisons needed by any Quicksort programwhen equal keys are present. It
is shown that, of the three strategies which have been suggested for dealing with equal keys, the
method of always stopping the scanning pointers on keys equal to the partitioning element performs
best.
Key words, analysis of algorithms, equal keys, Quicksort, sorting
Introduction. The Quicksort algorithm, which was introduced by C. A. R.
Hoare in 1960 [6], [7], has gained wide acceptance as the most efficient
general-purpose sorting method suitable for use on computers. The algorithm has
a rich history: many modifications have been suggested to improve its perfor-
mance, and exact formulas have been derived describing the time and space
Now is the time
For all good men
To come to the aid
Of their party
QUICKSORT IS OPTIMAL
Robert Sedgewick
Jon Bentley
An Analysis of Binary Search Trees Formed from
Sequences of Nondistinct Keys
WILLIAM H. BURGE
IBM Thomas J. Watson Research Center, Yorktown Heights, New York
ABSTRACT. The expected depth of each key in the set of binary search trees formed from all sequences
composed from a multmet {pl • 1, p2 ' 2, p3 • 3, ..., p~ • n} is obtained, and hence the expected
weight of such trees. The expected number of left-to-right local minima and the expected number of
cycles in sequences composed from a multiset are then deduced from these results.
KEY WORDS AND PHRASES : binary search trees, multiset
CR CATEGORIES: 5.3
Introduction
The expected depth of the number r in the set of binary search trees formed from all
Theoretical
Computer Science
ELSEVIER Theoretical Computer Science 156 (1996) 39%70
Binary search trees constructed from nondistinct keys
with/without specified probabilities
Rainer Kemp
Johann Wo@ng Goethe-Universitiit, Fachbereich Informarik, D-60054 Fran!+rt am Main, German)
Received May 1994; revised November 1994
Communicated by H. Prodinger
Sebastian Wild Quicksort with Equal Keys 2016-03-07 3 / 12
12. Previous Work
Rather little is known!
Sedgewick 1977: Quicksort on Equal Keys
Sedgewick & Bentley 2002: Quicksort is Optimal (Talk at Knuthfest)
A bit more on BSTs:
Burge 1976: An Analysis of BSTs Formed from Sequences of Nondistinct Keys
Kemp 1996: BSTs constructed from nondistinct keys with/without specified probabilities
Archibald & Clément 2006: Average depth in a BST with repeated keys
This is basically all literature on analysis of Quicksort with equal keys!
SIAM J. COMPUT.
Vol. 6, No. 2, June 1977
QUICKSORT WITH EQUAL KEYS*
ROBERT SEDGEWICK?
Abstract. This paper considers the problem of implementing and analyzing a Quicksort program
when equal keys are likely to be present in the file to be sorted. Upper and lower bounds are derived on
the average number of comparisons needed by any Quicksort programwhen equal keys are present. It
is shown that, of the three strategies which have been suggested for dealing with equal keys, the
method of always stopping the scanning pointers on keys equal to the partitioning element performs
best.
Key words, analysis of algorithms, equal keys, Quicksort, sorting
Introduction. The Quicksort algorithm, which was introduced by C. A. R.
Hoare in 1960 [6], [7], has gained wide acceptance as the most efficient
general-purpose sorting method suitable for use on computers. The algorithm has
a rich history: many modifications have been suggested to improve its perfor-
mance, and exact formulas have been derived describing the time and space
Now is the time
For all good men
To come to the aid
Of their party
QUICKSORT IS OPTIMAL
Robert Sedgewick
Jon Bentley
An Analysis of Binary Search Trees Formed from
Sequences of Nondistinct Keys
WILLIAM H. BURGE
IBM Thomas J. Watson Research Center, Yorktown Heights, New York
ABSTRACT. The expected depth of each key in the set of binary search trees formed from all sequences
composed from a multmet {pl • 1, p2 ' 2, p3 • 3, ..., p~ • n} is obtained, and hence the expected
weight of such trees. The expected number of left-to-right local minima and the expected number of
cycles in sequences composed from a multiset are then deduced from these results.
KEY WORDS AND PHRASES : binary search trees, multiset
CR CATEGORIES: 5.3
Introduction
The expected depth of the number r in the set of binary search trees formed from all
Theoretical
Computer Science
ELSEVIER Theoretical Computer Science 156 (1996) 39%70
Binary search trees constructed from nondistinct keys
with/without specified probabilities
Rainer Kemp
Johann Wo@ng Goethe-Universitiit, Fachbereich Informarik, D-60054 Fran!+rt am Main, German)
Received May 1994; revised November 1994
Communicated by H. Prodinger
Fourth Colloquium on Mathematics and Computer Science DMTCS proc. AG, 2006, 309–320
Average depth in a binary search tree with
repeated keys
Margaret Archibald1
and Julien Cl´ement2,3
1
School of Mathematics, University of the Witwatersrand, P. O. Wits, 2050 Johannesburg, South Africa,
email: marchibald@maths.wits.ac.za.
2
CNRS UMR 8049, Institut Gaspard-Monge, Laboratoire d’informatique, Universit´e de Marne-la-Vall´ee, France
3
CNRS UMR 6072, GREYC Laboratoire d’informatique, Universit´e de Caen, France,
email: Julien.Clement@info.unicaen.fr
Random sequences from alphabet {1, . . . , r} are examined where repeated letters are allowed. Binary search trees are
formed from these, and the average left-going depth of the first 1 is found. Next, the right-going depth of the first r is
examined, and finally a merge (or ‘shuffle’) operator is used to obtain the average depth of an arbitrary node, which canSebastian Wild Quicksort with Equal Keys 2016-03-07 3 / 12
13. Previous Work
Rather little is known!
Sedgewick 1977: Quicksort on Equal Keys
Sedgewick & Bentley 2002: Quicksort is Optimal (Talk at Knuthfest)
A bit more on BSTs:
Burge 1976: An Analysis of BSTs Formed from Sequences of Nondistinct Keys
Kemp 1996: BSTs constructed from nondistinct keys with/without specified probabilities
Archibald & Clément 2006: Average depth in a BST with repeated keys
This is basically all literature on analysis of Quicksort with equal keys!
SIAM J. COMPUT.
Vol. 6, No. 2, June 1977
QUICKSORT WITH EQUAL KEYS*
ROBERT SEDGEWICK?
Abstract. This paper considers the problem of implementing and analyzing a Quicksort program
when equal keys are likely to be present in the file to be sorted. Upper and lower bounds are derived on
the average number of comparisons needed by any Quicksort programwhen equal keys are present. It
is shown that, of the three strategies which have been suggested for dealing with equal keys, the
method of always stopping the scanning pointers on keys equal to the partitioning element performs
best.
Key words, analysis of algorithms, equal keys, Quicksort, sorting
Introduction. The Quicksort algorithm, which was introduced by C. A. R.
Hoare in 1960 [6], [7], has gained wide acceptance as the most efficient
general-purpose sorting method suitable for use on computers. The algorithm has
a rich history: many modifications have been suggested to improve its perfor-
mance, and exact formulas have been derived describing the time and space
Now is the time
For all good men
To come to the aid
Of their party
QUICKSORT IS OPTIMAL
Robert Sedgewick
Jon Bentley
An Analysis of Binary Search Trees Formed from
Sequences of Nondistinct Keys
WILLIAM H. BURGE
IBM Thomas J. Watson Research Center, Yorktown Heights, New York
ABSTRACT. The expected depth of each key in the set of binary search trees formed from all sequences
composed from a multmet {pl • 1, p2 ' 2, p3 • 3, ..., p~ • n} is obtained, and hence the expected
weight of such trees. The expected number of left-to-right local minima and the expected number of
cycles in sequences composed from a multiset are then deduced from these results.
KEY WORDS AND PHRASES : binary search trees, multiset
CR CATEGORIES: 5.3
Introduction
The expected depth of the number r in the set of binary search trees formed from all
Theoretical
Computer Science
ELSEVIER Theoretical Computer Science 156 (1996) 39%70
Binary search trees constructed from nondistinct keys
with/without specified probabilities
Rainer Kemp
Johann Wo@ng Goethe-Universitiit, Fachbereich Informarik, D-60054 Fran!+rt am Main, German)
Received May 1994; revised November 1994
Communicated by H. Prodinger
Fourth Colloquium on Mathematics and Computer Science DMTCS proc. AG, 2006, 309–320
Average depth in a binary search tree with
repeated keys
Margaret Archibald1
and Julien Cl´ement2,3
1
School of Mathematics, University of the Witwatersrand, P. O. Wits, 2050 Johannesburg, South Africa,
email: marchibald@maths.wits.ac.za.
2
CNRS UMR 8049, Institut Gaspard-Monge, Laboratoire d’informatique, Universit´e de Marne-la-Vall´ee, France
3
CNRS UMR 6072, GREYC Laboratoire d’informatique, Universit´e de Caen, France,
email: Julien.Clement@info.unicaen.fr
Random sequences from alphabet {1, . . . , r} are examined where repeated letters are allowed. Binary search trees are
formed from these, and the average left-going depth of the first 1 is found. Next, the right-going depth of the first r is
examined, and finally a merge (or ‘shuffle’) operator is used to obtain the average depth of an arbitrary node, which canSebastian Wild Quicksort with Equal Keys 2016-03-07 3 / 12
14. Previous Work
Rather little is known!
Sedgewick 1977: Quicksort on Equal Keys
Sedgewick & Bentley 2002: Quicksort is Optimal (Talk at Knuthfest)
A bit more on BSTs:
Burge 1976: An Analysis of BSTs Formed from Sequences of Nondistinct Keys
Kemp 1996: BSTs constructed from nondistinct keys with/without specified probabilities
Archibald & Clément 2006: Average depth in a BST with repeated keys
This is basically all literature on analysis of Quicksort with equal keys!
SIAM J. COMPUT.
Vol. 6, No. 2, June 1977
QUICKSORT WITH EQUAL KEYS*
ROBERT SEDGEWICK
Abstract. This paper considers the problem of implementing and analyzing a Quicksort program
when equal keys are likely to be present in the file to be sorted. Upper and lower bounds are derived on
the average number of comparisons needed by any Quicksort program when equal keys are present. It
is shown that, of the three strategies which have been suggested for dealing with equal keys, the
method of always stopping the scanning pointers on keys equal to the partitioning element performs
best.
Key words. analysis of algorithms, equal keys, Quicksort, sorting
Introduction. The Quicksort algorithm, which was introduced by C. A. R.
Hoare in 1960 [6], [7], has gained wide acceptance as the most efficient
general-purpose sorting method suitable for use on computers. The algorithm has
a rich history: many modifications have been suggested to improve its performance,
and exact formulas have been derived describing the time and space
Now is the time
For all good men
To come to the aid
Of their party
QUICKSORT IS OPTIMAL
Robert Sedgewick
Jon Bentley
All these works only consider basic Quicksort:
No sampling to choose pivots.
No multi-way partitioning.
23. Input Models
Random-Permutation Model (continuous; no duplicates)
1 1, . . . , n, randomly permuted
2 n elements i.i.d. uniform in (0, 1)
Parameters: n
Random u-ary Word Model (discrete)
1 random choice from {1, . . . , u}^n
2 n elements i.i.d. uniform in {1, . . . , u}
Parameters: n, u
Multiset-Permutation Model (hard to analyze)
randomly permuted multiset with xi copies of i, i = 1, . . . , u
Parameters: u; x1, . . . , xu
(The word model is the multiset model averaged over random multiplicities.)
My claim: u-ary words are the most natural model ...
in particular in the i.i.d. interpretation!
All previous works use a combinatorial detour;
the stochastic point of view provides a shortcut!
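The three input models can be sampled directly in code; a minimal Python sketch (function names are my own, not from the talk):

```python
import random

def random_permutation(n):
    """Random-Permutation Model: 1..n, randomly permuted (no duplicates)."""
    xs = list(range(1, n + 1))
    random.shuffle(xs)
    return xs

def random_u_ary_word(n, u):
    """Random u-ary Word Model: n elements i.i.d. uniform in {1, ..., u}."""
    return [random.randint(1, u) for _ in range(n)]

def multiset_permutation(counts):
    """Multiset-Permutation Model: counts[i-1] copies of i, randomly permuted."""
    xs = [i for i, c in enumerate(counts, start=1) for _ in range(c)]
    random.shuffle(xs)
    return xs
```

In the i.i.d. word model only n and u are parameters; the multiplicities x1, . . . , xu become random, which is what makes the stochastic shortcut possible.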
24. Setup
Assumptions:
1 Input: U1, . . . , Un, i.i.d. uniform in {1, . . . , u}.
2 fat-pivot partitioning: [ < P | = P | > P ]
left and right segments go into recursive calls; all duplicates of the pivot are removed
subproblems are again random u-ary words (with smaller u and n)
3 Cost: # ternary comparisons (extensions possible)
Generalized Quicksort:
s-way partitioning: divide into s segments
pivot sampling: parameter t = (t1, . . . , ts)
median-of-(2t + 1): t = (t, t)
asymmetric sampling allowed
Example: s = 3 and t = (1, 2, 3): pivots P1 and P2 are the 2nd and 5th elements of a sample of 8.
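The fat-pivot partitioning of assumption 2 can be sketched as a Dutch-national-flag pass that charges one ternary comparison per element; this is an illustration of the cost model, not the talk's reference implementation:

```python
import random

def fat_pivot_partition(a, pivot):
    """Three-way ('fat-pivot') partition of a into < pivot | = pivot | > pivot.
    Returns (lt, gt, comparisons); a[lt:gt] holds all pivot duplicates."""
    lt, i, gt = 0, 0, len(a)
    comparisons = 0
    while i < gt:
        comparisons += 1              # one ternary comparison per element
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            gt -= 1
            a[i], a[gt] = a[gt], a[i]
        else:
            i += 1
    return lt, gt, comparisons

def quicksort_fat_pivot(a):
    """Quicksort with fat-pivot partitioning: duplicates of the pivot
    never enter a recursive call, matching the setup on the slide."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    lt, gt, _ = fat_pivot_partition(a, pivot)
    return quicksort_fat_pivot(a[:lt]) + a[lt:gt] + quicksort_fat_pivot(a[gt:])
```

Each partitioning pass costs exactly n ternary comparisons, and the subarrays handed to the recursive calls contain no copy of the pivot, so on a random u-ary word they are again random words over a smaller alphabet.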
58. Quicksort & Search Trees
Classic Fact: (without duplicates)
Recursion Tree of Quicksort = naturally grown BST from the input
Comparisons in Quicksort = comparisons to build the BST
= comparisons to search the input in the final BST
How about inputs with duplicates?
Quicksort (Fat-Pivot): input 4 2 1 3 3 5 4 4 3 5 2;
partitioning around pivot 4 gives 2 1 3 3 3 2 | 4 4 4 | 5 5;
recursing on 2 1 3 3 3 2 (pivot 2) gives 1 | 2 2 | 3 3 3, and 5 5 needs no further work.
Binary Search Tree: inserting 4 2 1 3 3 5 4 4 3 5 2 grows root 4 with children 2 and 5, and 1, 3 below 2 (duplicates absorbed at their nodes).
Equivalence holds also with duplicates.
This was only basic Quicksort ... how about pivot sampling?
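The equivalence can be checked mechanically: inserting into a BST that absorbs duplicates at their node mirrors the ternary comparisons of fat-pivot Quicksort. A small Python sketch (class and function names are illustrative, not from the talk):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.count = 1          # duplicates absorbed here (the 'fat pivot')
        self.left = self.right = None

def bst_insert(root, key):
    """Insert key with ternary comparisons; returns (root, #comparisons).
    A duplicate stops at its matching node, just as pivot duplicates
    are removed from the recursion in fat-pivot Quicksort."""
    if root is None:
        return Node(key), 0
    comps = 1                   # one ternary comparison at this node
    if key < root.key:
        root.left, c = bst_insert(root.left, key)
        comps += c
    elif key > root.key:
        root.right, c = bst_insert(root.right, key)
        comps += c
    else:
        root.count += 1
    return root, comps

def build_bst(keys):
    """Grow the tree naturally from the input, totaling comparisons."""
    root, total = None, 0
    for k in keys:
        root, c = bst_insert(root, k)
        total += c
    return root, total
```

For the slide's input 4 2 1 3 3 5 4 4 3 5 2 this yields root 4 (count 3) with children 2 and 5, matching the recursion tree of the fat-pivot partitioning above.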
88. Fringe-Balanced Trees
Fringe-Balanced Search Trees:
Leaves have buffer for elements, to emulate pivot sampling
Median-of-5 Quicksort t = (2, 2) Fringe-Balanced Tree
4 2 1 3 3 5 4 4 3 5 2
2 1 2 4 5 4 4 533 3
5 54 442 1 2
5 5
4 2 1 3 3 5 4 4 3 5 2
3
4
5 5
2 1 2
Correspondence extends to
Pivot Sampling (any scheme, not only median)
s-way Partitioning s-ary search trees
Analyze search trees instead of Quicksort.
Sebastian Wild Quicksort with Equal Keys 2016-03-07 7 / 12
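The slide's construction can be sketched in code. This is a minimal illustrative sketch (names and the split-when-full policy are my assumptions, not the talk's implementation): leaves buffer up to w = t1 + t2 + 1 = 5 elements, a full leaf is split at its median (emulating median-of-5 pivot selection), and keys equal to an internal node's key are absorbed there.

```python
W = 5  # leaf-buffer capacity for t = (2, 2): w = t1 + t2 + 1


class Leaf:
    def __init__(self):
        self.buf = []  # unsorted buffer of elements


class Inner:
    def __init__(self, key, left, right):
        self.key, self.left, self.right = key, left, right


def insert(node, x):
    """Insert x into the fringe-balanced tree rooted at node; return new root."""
    if isinstance(node, Inner):
        if x < node.key:
            node.left = insert(node.left, x)
        elif x > node.key:
            node.right = insert(node.right, x)
        # x == node.key: the duplicate is absorbed by the internal node
        return node
    node.buf.append(x)
    if len(node.buf) < W:
        return node
    # buffer full: split at the median, emulating median-of-5 pivot selection
    node.buf.sort()
    pivot = node.buf[W // 2]
    left, right = Leaf(), Leaf()
    left.buf = [y for y in node.buf if y < pivot]
    right.buf = [y for y in node.buf if y > pivot]
    return Inner(pivot, left, right)


root = Leaf()
for x in [4, 2, 1, 3, 3, 5, 4, 4, 3, 5, 2]:  # the slide's example input
    root = insert(root, x)
```

Running this on the slide's input reproduces the final tree shown there: root 3, left leaf {1, 2, 2}, right child 4 with right leaf {5, 5}.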
93. Tree-Growing and Searching
Quicksort costs = costs to search the input U = (U1, . . . , Un) in the final tree T.
T fixed ⇒ search cost depends only on the profile X = (X1, . . . , Xu)
but: T also depends on U (recall: T is built from U!)
⇒ direct analysis no simpler than for Quicksort
Example (w = 1): 4 5 5 5 2 3 5 4 2 2 2 5 5 5 4 2 5 1 5 4 2 3 4 2 2 1 2 4 2 2 5 5 2 3 4 2 3 4 1 3 2 2 1 4 4 4 1 2 3 3
(prefix = tree-growing part → T, remainder = searching part → XS)
Observation: T becomes stationary after w inserts of each value (with leaf-buffer size w)!
Split the input into a tree-growing part and a searching part:
1 Build T until it is stationary, ignoring costs.
2 Determine the costs of searching the remaining elements.
⇒ profile XS of the search part is independent of T
Sebastian Wild Quicksort with Equal Keys 2016-03-07 8 / 12
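The stationarity observation makes the split mechanical: cut after the shortest prefix in which every value has appeared w times. A small sketch (function name is mine, not from the talk), applied to the slide's example input:

```python
from collections import Counter


def split_and_profile(U, u, w=1):
    """Return (length of tree-growing part, profile of the searching part).

    The tree-growing part is the shortest prefix in which every value
    1..u occurs at least w times, so T is stationary afterwards.
    """
    seen = Counter()
    for i, x in enumerate(U, start=1):
        seen[x] += 1
        if len(seen) == u and min(seen[v] for v in range(1, u + 1)) >= w:
            search_part = U[i:]
            return i, [search_part.count(v) for v in range(1, u + 1)]
    return None  # degenerate input: T never becomes stationary


# the slide's example input over u = 5 values
U = [4, 5, 5, 5, 2, 3, 5, 4, 2, 2, 2, 5, 5, 5, 4, 2, 5, 1, 5, 4,
     2, 3, 4, 2, 2, 1, 2, 4, 2, 2, 5, 5, 2, 3, 4, 2, 3, 4, 1, 3,
     2, 2, 1, 4, 4, 4, 1, 2, 3, 3]
n_grow, XS = split_and_profile(U, u=5)
```

For w = 1 the cut falls right after the first occurrence of the last missing value; the profile XS of the remaining elements is what the analysis works with.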
115. Bounding Tree-Growing Parts
We ignore tree-growing in the analysis ⇒ have to make that part short.
Allow the first n_T = n^(2/3) elements to build T.
Example: 4 5 5 5 2 3 5 4 2 2 2 5 5 5 4 2 5 1 5 4 2 3 4 2 2 1 2 4 2 2 5 5 2 3 4 2 3 4 1 3 2 2 1 4 4 4 1 2 3 3
(first n_T elements = tree-growing part → T, remainder = searching part → XS)
The input is degenerate if a value occurs < w times in the first n_T elements
(then T is not complete after n_T inserts ...).
Have to make this happen rarely.
By Chernoff bounds, the input is non-degenerate w.h.p. if u = O(n^(1/4)).
Building T costs n_T · u = O(n^(1−ε)).
⇒ Expected Quicksort cost E[C_{n,u}] = α_u · n ± O(n^(1−ε))
α_u = expected search cost in random T
error term: covers tree-growing, sorting samples, degenerate inputs
Sebastian Wild Quicksort with Equal Keys 2016-03-07 9 / 12
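The Chernoff-style claim can be probed numerically: for u small relative to n, degeneracy essentially never occurs, while for u comparable to n_T it is the norm. A rough Monte Carlo sketch (parameters and names are illustrative, not from the talk):

```python
import random


def degeneracy_prob(n, u, w=1, trials=500, seed=1):
    """Estimate P(input is degenerate): the probability that some value
    occurs fewer than w times among the first n_T = n^(2/3) elements
    drawn i.i.d. uniformly from {1, ..., u}."""
    rng = random.Random(seed)
    n_T = int(n ** (2 / 3))
    degenerate = 0
    for _ in range(trials):
        counts = [0] * (u + 1)
        for _ in range(n_T):
            counts[rng.randint(1, u)] += 1
        if min(counts[1:]) < w:
            degenerate += 1
    return degenerate / trials


# u small relative to n: each value is expected ~ n_T / u >> w times
p_small_u = degeneracy_prob(10**4, u=5)
# u comparable to n_T (here n_T = 21, u = 20): almost always degenerate
p_large_u = degeneracy_prob(100, u=20)
```

This matches the slide's condition: the u = O(n^(1/4)) regime keeps the per-value expectation n_T / u polynomially large, so misses are exponentially unlikely.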
126. Search Costs
Recall: T is built by inserting i.i.d. uniform {1, . . . , u} elements until saturation.
α_u = (1/u) · Σ_{v=1}^{u} (cost of searching v in T)    (u · α_u = path length of T)
1 Easy case: t = (0, 0), i.e., ordinary BST, w = 1
⇒ no duplicates in the tree
⇒ T is a BST under the random-permutation model (Allen & Munro 1978: Self-Organizing Binary Search Trees)
⇒ α_u = 2(1 + 1/u) · H_u − 3 = 2 ln(u) ± O(1)
2 General t = (t1, . . . , ts):
duplicates inside leaves (during construction);
for s ≥ 3: duplicates in the same internal node possible
⇒ not the random-permutation model
But: we can set up a (messy) recurrence for α_u
⇒ asymptotic solution by the Continuous Master Theorem (Roura 2001: Improved Master Theorems for Divide-and-Conquer Recurrences)
Sebastian Wild Quicksort with Equal Keys 2016-03-07 10 / 12
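The easy case's closed form α_u = 2(1 + 1/u) · H_u − 3 can be checked by simulation. A sketch, assuming "cost of searching v" counts key comparisons (root at depth 1) in a BST built from a uniformly random permutation of u distinct keys:

```python
import random


def mean_search_cost(perm):
    """Build a BST by inserting perm in order; return the average number of
    comparisons for a successful search (= node depth, root at depth 1)."""
    children = {perm[0]: [None, None]}
    depth = {perm[0]: 1}
    for x in perm[1:]:
        cur = perm[0]  # perm[0] is always the root
        while True:
            side = 0 if x < cur else 1
            if children[cur][side] is None:
                children[cur][side] = x
                children[x] = [None, None]
                depth[x] = depth[cur] + 1
                break
            cur = children[cur][side]
    return sum(depth.values()) / len(perm)


u, trials = 40, 1500
rng = random.Random(7)
keys = list(range(1, u + 1))
est = 0.0
for _ in range(trials):
    rng.shuffle(keys)
    est += mean_search_cost(keys)
est /= trials

H_u = sum(1 / k for k in range(1, u + 1))
alpha = 2 * (1 + 1 / u) * H_u - 3  # the slide's closed form
```

With enough trials the empirical average search cost agrees with the closed form to within sampling error.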
αu = 2 1 + 1
u
Hu − 3 = 2 ln(u) ± O(1)
2 General t = (t1, . . . , ts):
duplicates inside leaves (during construction) 3 3
1 2 4
for s 3: duplicates in same internal node possible
= random permutation
But: can set up a (messy) recurrence for αu
asymptotic solution by Continuous Master Theorem
Roura 2001:
Improved Master Theorems for
Divide-and-Conquer Recurrences
Sebastian Wild Quicksort with Equal Keys 2016-03-07 10 / 12
136. Search Costs
Recall: T built from inserting i.i.d. uniform {1, . . . , u}, until saturation.
αu =
1
u
u
v=1
cost of searching v in T (u · αu = path length of T)
1 Easy case: t = (0, 0) ordinary BST, w = 1
no duplicates in tree
T is BST under random permutation model Allen & Munro 1978:
Self-Organizing Binary Search Trees
αu = 2 1 + 1
u
Hu − 3 = 2 ln(u) ± O(1)
2 General t = (t1, . . . , ts):
duplicates inside leaves (during construction) 3 3
1 2 4
for s 3: duplicates in same internal node possible
= random permutation
But: can set up a (messy) recurrence for αu
asymptotic solution by Continuous Master Theorem
Roura 2001:
Improved Master Theorems for
Divide-and-Conquer Recurrences
Sebastian Wild Quicksort with Equal Keys 2016-03-07 10 / 12
137. Search Costs
Recall: T built by inserting i.i.d. uniform {1, . . . , u} keys until saturation.
αu = (1/u) · Σ_{v=1}^{u} (cost of searching v in T)    (u · αu = path length of T)
1 Easy case: t = (0, 0): ordinary BST, w = 1
no duplicates in tree
T is a BST under the random-permutation model (Allen & Munro 1978: Self-Organizing Binary Search Trees)
αu = 2 (1 + 1/u) Hu − 3 = 2 ln(u) ± O(1)
2 General t = (t1, . . . , ts):
duplicates inside leaves (during construction); for s ≥ 3: duplicates in the same internal node possible
(figure: example tree over keys 1, 2, 3, 4; the duplicate keys 3, 3 share a leaf)
≠ random-permutation model
But: can set up a (messy) recurrence for αu;
asymptotic solution by the Continuous Master Theorem (Roura 2001: Improved Master Theorems for Divide-and-Conquer Recurrences)
Sebastian Wild Quicksort with Equal Keys 2016-03-07 10 / 12
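A small check of the easy case t = (0, 0): the average search cost αu in an ordinary BST built from a random permutation can be computed exactly from the recurrence for the expected internal path length and compared against the closed form 2 (1 + 1/u) Hu − 3 from the slide. A minimal sketch in exact rational arithmetic (the function names are mine, not from the talk):

```python
from fractions import Fraction

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n as an exact fraction."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

def avg_search_cost(u):
    """Exact alpha_u for an ordinary BST (t = (0, 0)) built from a random
    permutation of {1, ..., u}. The expected internal path length satisfies
    I(n) = n - 1 + (2/n) * sum_{k < n} I(k), and a successful search for a
    uniformly random key visits I(u)/u + 1 nodes on average."""
    I = [Fraction(0)] * (u + 1)
    for n in range(2, u + 1):
        I[n] = n - 1 + Fraction(2, n) * sum(I[:n])
    return I[u] / u + 1

# closed form from the slide: alpha_u = 2 (1 + 1/u) H_u - 3
for u in [1, 2, 5, 10, 50]:
    closed = 2 * (1 + Fraction(1, u)) * harmonic(u) - 3
    assert avg_search_cost(u) == closed
```

Using `Fraction` makes the comparison exact rather than up to floating-point error; the assertions confirm the closed form term by term for small u.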
152. Result
Search costs in T: αu = (¯a / H) · ln(u) ± O(1)    (u → ∞)
where H = Σ_{r=1}^{s} ((tr + 1)/(k + 1)) · (H_{k+1} − H_{tr+1}),    k = sample size
¯a = avg. search cost inside a node (¯a = 1 for binary trees)
(figure: example tree over keys 1, 2, 3, 4 with ¯a = 2.4)
Quicksort costs: E[Cn,u] = αu · n ± O(n^{1−ε})    for u = O(n^{1/4})
Time to put the pieces together!
Quicksort Costs
u-ary words: E[Cn,u] = (¯a / H) · n ln(u) ± O(n)    for u = ω(1), u = O(n^{1/4})
Permutations: E[Cn] = (¯a / H) · n ln(n) ± O(n)
Results deceptively similar! (the derivations aren't)
Same speedup from sampling and multi-way partitioning.
Sebastian Wild Quicksort with Equal Keys 2016-03-07 11 / 12
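The quantity H is elementary to evaluate, and the familiar leading coefficients drop out of ¯a / H for binary partitioning (¯a = 1). A quick sketch, assuming the usual relation k = Σ tr + s − 1 between the sample size and the sampling vector (function names are mine):

```python
from fractions import Fraction

def harm(n):
    """H_n as an exact fraction."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

def H(t):
    """H = sum_r (t_r + 1)/(k + 1) * (H_{k+1} - H_{t_r + 1}),
    where k = sum(t) + s - 1 is the sample size for s-way partitioning
    with sampling vector t = (t_1, ..., t_s)."""
    s = len(t)
    k = sum(t) + s - 1
    return sum(Fraction(tr + 1, k + 1) * (harm(k + 1) - harm(tr + 1))
               for tr in t)

# classic Quicksort: s = 2, no sampling -> leading coefficient 2 n ln n
assert Fraction(1) / H((0, 0)) == 2
# median-of-3 Quicksort -> the well-known (12/7) n ln n
assert Fraction(1) / H((1, 1)) == Fraction(12, 7)
```

The two assertions recover the classic 2 n ln n and median-of-3 12/7 n ln n comparison counts, which is a useful sanity check that H is written down correctly.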
153. Conclusion
Findings
First analysis of generalized Quicksort (pivot sampling & multi-way partitioning) on equal keys.
Limitations:
restricted range for the universe size: u = ω(1) and u = O(n^{1/3−ε})
currently only stateless cost measures
Partial answer to a conjecture of Sedgewick & Bentley:
median-of-k Quicksort approaches the lower bound for k → ∞ ... for the uniform distribution.
Finally cleared that item from my to-do list!
Future Work
other discrete distributions (expected-profile model)
better asymptotic approximation for αu
Sebastian Wild Quicksort with Equal Keys 2016-03-07 12 / 12