The plethora of sorting algorithms exists because different algorithms suit different characteristics of the data to be sorted; the underlying goal is a sorting algorithm that is both efficient and easy to implement. Toward this goal, Shellsort improved on insertion sort, and various gap sequences have been proposed to further improve Shellsort's performance. The best worst-case improvement on Shellsort is the Modified Diminishing Increment Sorting (MDIS). This article presents Circlesort, a variant of MDIS. Implementing the algorithm and experimenting with it against MDIS and some notable sorting algorithms showed that it performed better than the established algorithms considered in both best-case and worst-case scenarios, but second to MDIS. The performance comparison also reveals the strengths and weaknesses of the algorithms considered in different scenarios, which can guide prospective users in choosing an algorithm depending on the nature of the list to be sorted.
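The abstract does not describe Circlesort's mechanics. As an illustration only, here is one commonly described formulation of circle sort; the paper's Circlesort, being a variant of MDIS, may well differ from this folklore version. It compares elements paired from the two ends of the array inward, swapping out-of-order pairs, recurses on both halves, and repeats whole passes until a pass makes no swaps.

```python
def circle_sort(a):
    """Folklore circle sort, offered as a sketch (the paper's Circlesort,
    a variant of MDIS, may differ): compare/swap mirrored pairs from the
    ends inward, recurse on both halves, repeat until no swaps occur."""
    def one_pass(lo, hi):
        if lo == hi:
            return False
        swapped = False
        i, j = lo, hi
        while i < j:
            if a[i] > a[j]:
                a[i], a[j] = a[j], a[i]
                swapped = True
            i += 1
            j -= 1
        # odd-length range: i lands on the middle element; compare it
        # with its right neighbour so the middle is not skipped
        if i == j and a[i] > a[j + 1]:
            a[i], a[j + 1] = a[j + 1], a[i]
            swapped = True
        mid = (lo + hi) // 2
        swapped |= one_pass(lo, mid)
        swapped |= one_pass(mid + 1, hi)
        return swapped

    if a:
        while one_pass(0, len(a) - 1):
            pass
    return a
```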
Selection Sort with Improved Asymptotic Time Bounds (The IJES)
Sorting and searching are among the most fundamental problems in computer science; sorting is most often used to support searching. One of the best-known sorting algorithms taught in introductory computer science courses is classical selection sort. While such an algorithm is easy to explain and grasp at the introductory level, it is far from an efficient sorting technique, since it requires O(n²) time to sort a list of n numbers. It does so by repeatedly finding the minimum. In this paper we explore the benefit of reducing the search time for the minimum on each pass of the algorithm, and show that we can obtain a worst-case time bound of O(n√n) by making only minor modifications to the input list. Our bound is thus a factor of O(√n) faster than classical selection sort and other classical sorts such as insertion and bubble sort.
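For reference, the classical selection sort the paper improves on can be sketched as follows: each pass scans the unsorted suffix for its minimum and swaps it into place, giving O(n²) comparisons.

```python
def selection_sort(a):
    """Classical selection sort: repeatedly find the minimum of the
    unsorted suffix and swap it to the front. O(n^2) comparisons."""
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):   # linear scan for the minimum
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return a
```

It is exactly this linear scan for the minimum that the paper seeks to speed up on each pass.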
Analysis and Comparative of Sorting Algorithms (IJTSRD)
Sorting problems arise in many practical fields: computer science, computer networks, database applications, and artificial intelligence. Sorting is also a fundamental problem from the standpoint of algorithm analysis and design, and many computer scientists have worked extensively on sorting algorithms. Sorting is a key data-structure operation that makes arranging, searching, and finding information easier. Sorting of elements is an important task used frequently in many computations; to accomplish it in a reasonable amount of time, an efficient algorithm is needed. Different types of sorting algorithms have been devised for the purpose, and which is best suited can only be decided by comparing the available algorithms in different respects. In this paper, a comparison is made of different sorting algorithms used in computation. Htwe Htwe Aung, "Analysis and Comparative of Sorting Algorithms", International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 3, Issue 5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26575.pdf Paper URL: https://www.ijtsrd.com/computer-science/programming-language/26575/analysis-and-comparative-of-sorting-algorithms/htwe-htwe-aung
Sorting algorithms are central concepts in the subject Data Structures and Its Applications. These algorithms arrange data elements in sorted order; once the elements are sorted, searching becomes much easier. Some algorithms are comparison sorts and some are non-comparison sorts. The choice of an algorithm is based on its efficiency. I have designed an algorithm called Alternate Sort, whose main aspect is that a different technique of comparison is involved. I present the algorithm, its working, examples, and finally the program listing.
One of the fundamental issues in computer science is ordering a list of items. Although there are a number of sorting algorithms, the sorting problem has attracted a great deal of research, because efficient sorting is important to optimizing the use of other algorithms. This paper presents a new sorting algorithm, which sorts the elements based on their average and runs faster. The algorithm was analyzed, implemented, and tested, and the results are promising for random data.
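The abstract does not detail how "sorting based on the average" works. One plausible interpretation, sketched here purely as an assumption and not as the authors' actual algorithm, is to recursively partition elements around their arithmetic mean, quicksort-style:

```python
def average_sort(a):
    """Hypothetical sketch (an assumption, not the paper's algorithm):
    recursively partition the list around its arithmetic mean."""
    if len(a) <= 1:
        return a
    mean = sum(a) / len(a)
    lo = [x for x in a if x < mean]
    hi = [x for x in a if x > mean]
    eq = [x for x in a if x == mean]  # keep ties out of the recursion
    return average_sort(lo) + eq + average_sort(hi)
```

On uniformly random data the mean splits the list roughly in half, which is one way such an approach could "run faster" on random inputs.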
In this paper, we introduce the Square Root sorting algorithm. We study the best and worst cases of the Square Root sorting algorithm, and we compare it with some algorithms that already exist.
A sorting algorithm rearranges the elements of a given array according to a comparison operator on the elements. Many algorithms have been used for sorting, such as bubble sort, insertion sort, selection sort, quicksort, merge sort, and heap sort. Each of these algorithms is suited to lists of elements with specific usage, and each has its own space and time complexity. The Square Root sorting algorithm has lower time complexity than some of the existing algorithms, especially in the best case; in the worst case it also has lower time complexity than some of the existing algorithms, as we discuss in the coming pages of this paper.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This paper introduces a new comparison-based stable sorting algorithm, named RA sort. RA sort compares only selected pairs of elements in the array, which ultimately sorts the array, and does not compare each element with every other element. It tries to build upon the order relationships established between elements in each pass; instead of a blind comparison, we prefer a selective comparison to obtain an efficient method. Sorting is a fundamental operation in computer science. The algorithm is analysed both theoretically and empirically to obtain a robust average-case result. We performed an empirical analysis and compared its performance with the well-known quicksort for various input types. Although the theoretical worst-case complexity of RA sort is Y_worst(n) = O(n√n), the experimental results suggest an empirical O_emp((n lg n)^1.333) time complexity for typical input instances, where the parameter n characterizes the input size. The theoretical complexity is given for the comparison operation; we emphasize that the theoretical complexity is operation-specific whereas the empirical one represents the overall algorithmic complexity.
One of the fundamental issues in computer science is ordering a list of items. Although there are a number of sorting algorithms, the sorting problem has attracted a great deal of research, because efficient sorting is important to optimizing the use of other algorithms. This paper presents a new sorting algorithm (Index Sort) that runs based on the previously sorted elements. The algorithm was analyzed, implemented, and tested, and the results are promising for random data.
Design of State Estimator for a Class of Generalized Chaotic Systems (IJTSRD)
In this paper, a class of generalized chaotic systems is considered and the state observation problem of such a system is investigated. Based on the time-domain approach with differential inequality, a simple state estimator for such generalized chaotic systems is developed to guarantee the global exponential stability of the resulting error system. Besides, the guaranteed exponential decay rate can be correctly estimated. Finally, several numerical simulations are given to show the effectiveness of the obtained result. Yeong-Jeu Sun, "Design of State Estimator for a Class of Generalized Chaotic Systems", International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 3, Issue 6, October 2019. URL: https://www.ijtsrd.com/papers/ijtsrd29270.pdf Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/29270/design-of-state-estimator-for-a-class-of-generalized-chaotic-systems/yeong-jeu-sun
A Survey of Adaptive QuickSort Algorithms (CSC Journals)
In this paper, a survey of adaptive quicksort algorithms is presented. Adaptive quicksort algorithms improve on the worst case behavior of quicksort when the list of elements is sorted or nearly sorted. These algorithms take into consideration the already existing order in the input list to be sorted. A detailed description of each algorithm is provided. The paper provides an empirical study of these algorithms by comparing each algorithm in terms of the number of comparisons performed and the running times when used for sorting arrays of integers that are already sorted, sorted in reverse order, and generated randomly.
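The kind of empirical study the survey describes, counting comparisons on sorted, reverse-sorted, and random inputs, can be sketched with a minimal harness. This is an illustration, not the survey's code: a plain first-element-pivot quicksort, which exhibits the quadratic worst case on already-sorted input that adaptive variants aim to avoid.

```python
import random

def quicksort(a, counter):
    """Plain quicksort with the first element as pivot, tallying element
    comparisons in counter[0]. Worst case on sorted/reversed input."""
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    counter[0] += len(rest)  # one comparison per remaining element
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    return quicksort(lo, counter) + [pivot] + quicksort(hi, counter)

def count_comparisons(data):
    counter = [0]
    quicksort(data, counter)
    return counter[0]

random.seed(0)
n = 200
for name, data in [("sorted", list(range(n))),
                   ("reversed", list(range(n, 0, -1))),
                   ("random", random.sample(range(n), n))]:
    print(name, count_comparisons(data))
```

On the sorted and reversed inputs this naive pivot choice performs the full n(n-1)/2 comparisons, while the random input stays near n log n, which is the gap the surveyed adaptive algorithms close.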
Sparse Observability using LP Presolve and LTDL Factorization in IMPL (IMPL-S... (Alkis Vazacopoulos)
Presented in this short document is a description of our technology, which we call "Sparse Observability". Observability is the estimability metric (Bagajewicz, 2010) used to structurally determine whether an unmeasured variable or regressed parameter is uniquely solvable (observable) or otherwise unsolvable (unobservable) in data reconciliation and regression (DRR) applications. Ultimately, our purpose in using efficient sparse-matrix techniques is to solve large industrial DRR flowsheets quickly and accurately.
Most other implementations of the observability calculation use dense linear algebra such as reduced row echelon form (RREF), Gauss-Jordan decomposition (Crowe et al., 1983; Madron, 1992), QR factorization, which can now be considered semi-sparse (Swartz, 1989; Sanchez and Romagnoli, 1996), Schur complements, Cholesky factorization (Kelly, 1998a), and singular value decomposition (SVD) (Kelly, 1999). A sparse LU decomposition with complete pivoting from Albuquerque and Biegler (1996) has been used for dynamic data reconciliation observability computation, but it is uncertain whether complete pivoting causes extreme "fill-in" of the lower and upper triangular matrices, essentially making them near-dense. There is another sparse observability method, using an LP sub-solver, in Kelly and Zyngier (2008), but it requires solving as many LP sub-problems as there are unmeasured variables, which can be considered somewhat inefficient.
IMPL’s sparse observability technique uses the variable classification and nomenclature of Kelly (1998b): if we partition or separate the unmeasured variables into independent (B12) and dependent (B34) sub-matrices, then all dependent unmeasured variables are by definition unobservable. If any independent unmeasured variable is a (linear) function of any dependent variable, then this independent variable is of course also unobservable, because it depends on another unobservable variable.
Concept and Definition of Data Structures
Introduction to Data Structures: Information and its meaning. Arrays in C++: the array as an ADT, using one-dimensional arrays, two-dimensional arrays, multi-dimensional arrays, Structure, Union, Classes in C++.
https://github.com/ashim888/dataStructureAndAlgorithm
PPT On Sorting And Searching Concepts In Data Structure | In Programming Lang... (Umesh Kumar)
PPT on Sorting and Searching Concepts in Data Structure. Many programming concepts use these techniques in algorithms, so watch, learn, and enjoy studying. Thanks!
Parallel sorting algorithms order a set of elements using multiple processors in order to enhance the performance of sequential sorting algorithms. In general, the performance of sorting algorithms is evaluated in terms of algorithmic growth rate with respect to input size. In this paper, the running time, parallel speedup, and parallel efficiency of parallel bubble sort are evaluated and measured. The Message Passing Interface (MPI) is used to implement the parallel version of bubble sort, and the IMAN1 supercomputer is used to obtain the results. The evaluation shows that parallel bubble sort achieves better running time as the number of processors increases. On the other hand, regarding parallel efficiency, the parallel bubble sort algorithm is more efficient when applied over a small number of processors.
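The two metrics the paper measures have standard definitions: speedup S = T1/Tp (serial time over parallel time on p processors) and efficiency E = S/p (fraction of ideal linear speedup). A minimal sketch, with purely hypothetical timing numbers for illustration:

```python
def speedup(t_serial, t_parallel):
    """Parallel speedup S = T1 / Tp."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Parallel efficiency E = S / p, the fraction of ideal linear speedup."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical timings (seconds), for illustration only, not the paper's data:
t1 = 12.0                          # serial bubble sort
tp = {2: 6.8, 4: 4.0, 8: 2.9}      # parallel runs on p processors
for p, t in tp.items():
    print(p, round(speedup(t1, t), 2), round(efficiency(t1, t, p), 2))
```

Efficiency typically falls as p grows because communication overhead rises relative to useful work, which matches the paper's observation that parallel bubble sort is more efficient on small processor counts.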
A unique sorting algorithm with linear time & space complexity (eSAT Journals)
Abstract: Sorting a list means selecting the particular permutation of the list's members in which the members appear in increasing or decreasing order. A sorted list is a prerequisite of some optimized operations, such as searching a list, locating or removing an element from a list, and merging two sorted lists in a database. As the volume of information in the world around us grows day by day, and these data must be managed in real-life situations, efficient and cost-effective sorting algorithms are required. There are several fundamental and problem-oriented sorting algorithms, yet the sorting problem still attracts a great deal of research, perhaps due to the complexity of solving it efficiently and effectively despite its simple and familiar statement. Algorithms that do the same work using different mechanisms necessarily differ in required time and space; for that reason an algorithm is chosen according to one's needs with respect to space complexity and time complexity. Nowadays memory is available on the market at comparatively low cost, so time complexity is the major issue for an algorithm. The presented approach sorts a list in linear time and space using the divide-and-conquer rule, partitioning a problem into n (input size) sub-problems that are then solved recursively. The time and space required by the algorithm are optimized by reducing the height of the recursion tree, and the reduced height is too small (compared with the problem size) to matter in evaluation; the asymptotic efficiency of this algorithm is therefore very high with respect to time and space. Keywords: sorting, searching, permutation, divide-and-conquer algorithm, asymptotic efficiency, space complexity, time complexity, recursion.
Bidirectional Bubble Sort Approach to Improving the Performance of Introsort ... (Waqas Tariq)
Quicksort has been described as the best practical choice for sorting: it is faster than many sorting algorithms on most inputs and remarkably efficient on average. However, it is not efficient in worst-case scenarios, where it takes O(n²) time. Research efforts have been made to improve this algorithm for the worst case by changing the way it chooses its pivot element for partitioning, but these approaches have the disadvantage of increasing the algorithm's average computing time. Introsort was developed to overcome this limitation. This paper presents an approach that uses bidirectional bubble sort to improve the performance of Introsort: instead of using insertion sort as the last step of the sorting algorithm for small lists, the approach uses bidirectional bubble sort. The results of implementing this algorithm and comparing it with Introsort show better performance in the worst-case scenario as the size of the list increases.
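Bidirectional bubble sort, also known as cocktail sort, is the small-list finishing step the paper substitutes for insertion sort. A minimal sketch: it alternates a left-to-right bubbling pass with a right-to-left one, shrinking the unsorted window from both ends.

```python
def cocktail_sort(a):
    """Bidirectional (cocktail) bubble sort: alternate forward and
    backward bubbling passes, narrowing the unsorted window each time."""
    lo, hi = 0, len(a) - 1
    swapped = True
    while swapped:
        swapped = False
        for i in range(lo, hi):            # forward pass: push max right
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        hi -= 1
        for i in range(hi, lo, -1):        # backward pass: push min left
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
        lo += 1
    return a
```

Like insertion sort, it runs in near-linear time on nearly sorted input, which is exactly the situation Introsort's final pass over small partitions encounters.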
PROPOSAL OF A TWO WAY SORTING ALGORITHM AND PERFORMANCE COMPARISON WITH EXIST... (IJCSEA Journal)
An algorithm is any well-defined procedure or set of instructions that takes some input in the form of values, processes them, and gives some values as output. Sorting involves rearranging information into either ascending or descending order, and it is considered a fundamental operation in computer science because it is used as an intermediate step in many operations. A new sorting algorithm, named 'An End-to-End Bi-directional Sorting (EEBS) Algorithm', is proposed to address the shortcomings of the current popular sorting algorithms. The goal of this research is to perform an extensive empirical analysis of the newly developed algorithm and present its functionality. The results of the analysis showed that EEBS is much more efficient than the other algorithms of O(n²) complexity, such as bubble, selection, and insertion sort.
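The abstract does not spell out EEBS's mechanism. One common reading of "end-to-end bi-directional" sorting, sketched here as an assumption rather than the authors' actual algorithm, is a two-way selection pass that places both the minimum and the maximum of the remaining window at its two ends, halving the number of passes:

```python
def two_way_selection_sort(a):
    """Illustrative two-way pass (an assumption about EEBS, not its
    published pseudocode): each pass fixes both ends of the window."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        mn = min(range(lo, hi + 1), key=a.__getitem__)
        a[lo], a[mn] = a[mn], a[lo]
        # recompute after the first swap, in case the max moved
        mx = max(range(lo, hi + 1), key=a.__getitem__)
        a[hi], a[mx] = a[mx], a[hi]
        lo += 1
        hi -= 1
    return a
```

Note that halving the pass count does not change the O(n²) comparison bound; any practical gains would come from constant factors and cache behaviour.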
An Experiment to Determine and Compare Practical Efficiency of Insertion Sort... (Tosin Amuda)
Sorting is a fundamental operation in computer science (many programs use it as an intermediate step), and as a result a large number of good sorting algorithms have been developed. Which algorithm is best for a given application depends on—among other factors—the number of items to be sorted, the extent to which the items are already somewhat sorted, possible restrictions on the item values, and the kind of storage device to be used: main memory, disks, or tapes.
There are three reasons to study sorting algorithms. First, sorting algorithms illustrate many creative approaches to problem solving, and these approaches can be applied to solve other problems. Second, sorting algorithms are good for practicing fundamental programming techniques using selection statements, loops, methods, and arrays. Third, sorting algorithms are excellent examples to demonstrate algorithm performance.
This paper attempts to compare the practical efficiency of three sorting algorithms (Insertion, Quick, and Merge Sort) using empirical analysis. The result of the experiment shows that insertion sort is a quadratic-time sorting algorithm and that it is most applicable to subarrays that are sufficiently small. Merge sort performs better than insertion sort as input size grows, and quicksort runs the most efficiently.
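An empirical comparison of this kind can be sketched with a small timing harness. This is an illustration of the method, not the paper's experiment; the quicksort column uses Python's built-in `sorted` as a stand-in, and the timings are machine-dependent.

```python
import random
import timeit

def insertion_sort(a):
    """Quadratic-time insertion sort: shift larger elements right,
    then drop the key into place. Fast on small or nearly sorted input."""
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """Top-down merge sort: split in half, sort each half, merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

random.seed(1)
data = random.sample(range(10000), 1000)
for name, fn in [("insertion", lambda: insertion_sort(data[:])),
                 ("merge", lambda: merge_sort(data[:])),
                 ("quick", lambda: sorted(data[:]))]:  # built-in stand-in
    print(name, round(timeit.timeit(fn, number=1), 4))
```

Each run works on a fresh copy (`data[:]`) so that the earlier sorts do not hand a pre-sorted list to the later ones, which would skew the comparison.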
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Embracing GenAI - A Strategic ImperativePeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...Levi Shapiro
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic
harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or
unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied
students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
The Roman Empire A Historical Colossus.pdfkaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
Hans Bezemer & Oyelami Moses Olufemi
International Journal of Experimental Algorithms (IJEA), Volume (6) : Issue (2) : 2016
A Variant of Modified Diminishing Increment Sorting: Circlesort
and its Performance Comparison with some Established Sorting
Algorithms
Hans Bezemer hans.bezemer@ordina.nl
Ordina N.V.
The Netherlands
Oyelami Olufemi Moses olufemi.oyelami@bowenuniversity.edu.ng
Faculty of Science and Science Education
Department of Computer Science and Information Technology
Bowen University, Iwo, Nigeria
Abstract
The essence of the plethora of sorting algorithms available is to have varieties that suit different
characteristics of data to be sorted. In addition, the real goal is to have a sorting algorithm that is
both efficient and easy to implement. Towards achieving this goal, Shellsort improved on Insertion
sort, and various sequences have been proposed to further improve the performance of Shellsort.
The best of all the improvements on Shellsort in the worst case is the Modified Diminishing
Increment Sorting (MDIS). This article presents Circlesort, a variant of MDIS. The algorithm
was implemented and compared experimentally with MDIS and some notable sorting algorithms.
The results showed that it performed better than the established algorithms considered in the
best-case and worst-case scenarios, coming second only to MDIS. The results of the performance comparison
of the algorithms considered also show their strengths and weaknesses in different scenarios.
This will guide prospective users as to the choice to be made depending on the nature of the list
to be sorted.
Keywords: Circlesort, Modified Diminishing Increment Sorting, Shellsort, Quicksort, Introsort,
Heapsort.
1. INTRODUCTION
In a bid to break the quadratic running time of the sorting algorithms of its day, Shellsort was
invented by Donald Shell [1]. The sorting algorithm divides the whole list of elements to be sorted
into smaller subsequences and applies Insertion Sort on each of the sublists. Even though any
sequence of increments c1, c2, c3, ..., ck could be used as long as the last is 1, the increments
Shell proposed are ⌊n/2⌋, ⌊n/4⌋, ⌊n/8⌋, ..., 1, where n is the number of elements in the list
[1, 2, 3]. Towards improving the performance of Shellsort, many increments and approaches have
been proposed: Hibbard's sequence, Papernov and Stasevich's sequence [2, 4], the sequences
(2^k - (-1)^k)/3 and (3^k - 1)/2, Fibonacci numbers, the Incerpi-Sedgewick sequences, Pratt-like
sequences, N. Tokuda's increment [2] and the MDIS [5]. Among all these approaches, the MDIS is
the most efficient in the worst-case scenario [5]. This approach has also been used to enhance
the performance of Bubble Sort [6], Quicksort [7] and Introsort [8]. Further still, it has been
employed in a "collision detection algorithm to detect collision and self-collision, between
complex models undergoing rigid motion and deformation to reduce the time complexity of collision
detection performed" [9].
This article presents Circlesort, which is a variant of the Modified Diminishing Increment Sorting.
The algorithm was implemented and the results of its performance in different scenarios
compared with Heapsort, Shellsort (using Shell’s sequence), Quicksort, Modified Diminishing
Increment Sort and Introsort are presented. Furthermore, even though the performance of the
Modified Diminishing Increment Sorting was compared with Shellsort employing Sedgewick’s
sequence and Tokuda’s sequence in the worst case scenario, its performance was not compared
with major sorting algorithms like Introsort, Heapsort and Quicksort in any scenario. In the light of
this, this article also reports the performance comparison of these algorithms experimentally with
the Modified Diminishing Increment Sorting vis-à-vis sorted data, unsorted data, inverted data and
partially sorted data. This provides an insight into its behaviour in these scenarios.
2. OTHER RELATED WORKS
In [10], Plaxton et al. proved better lower bounds for Shellsort: Ω(n log² n / (log log n)²). The
bounds particularly apply to increment sequences that are non-monotonic in nature, to adaptive
Shellsort algorithms, and to some variants of Shellsort like Shaker Sort. Jiang et al. [11] proved
that the running time of Shellsort in the average case is Ω(p n^(1+1/p)), where p is the number of
passes. Goodrich in [12] presented a randomized Shellsort which is oblivious to the nature of the
data and runs in O(n log n). This algorithm uses the increments n/2, n/4, n/8, ..., 1. In [13], a
report of the upper bounds, lower bounds and average cases, among others, of the following
variants of Shellsort is presented both theoretically and empirically: Pratt, Papernov-Stasevich,
Sedgewick, Incerpi-Sedgewick, Poonen, Knuth, etc. Dobosiewicz [14] proposed the use of bubble
sort instead of insertion sort, carrying out comparison and swapping from left to right of
elements that are h apart, where h stands for the increment. However, no proof of any performance
result was given.
3. MODIFIED DIMINISHING INCREMENT SORTING (MDIS)
According to Oyelami [5], this approach consists of two stages. The first involves comparing the
first and last elements of the list to be sorted and swapping them if the last element is less
than the first (when the task is to sort in ascending order of magnitude). Next, the second
element and the second-to-last element are compared and the necessary action taken, and so on,
until the two middle elements have been compared (when the list contains an even number of
elements) or only one element remains in the middle (when the list contains an odd number of
elements). After this, the second stage applies Insertion Sort to the partially sorted list to
complete the sorting process.
4. CIRCLESORT
The basis of Circlesort is the first stage of the Modified Diminishing Increment Sorting algorithm.
Instead of applying it once, the list is split into two and both halves are subjected to the same
algorithm once more. This recursion continues until the list consists of only one single element
(see Figure 1). If no swaps are made during a complete cycle, the list is sorted.
FIGURE 1: Circlesort when the list size is even.
A single cycle has a time complexity of O(n log n). However, one cycle is rarely enough to sort a
list completely. On average, log n iterations are required, so the time complexity for a complete
sort is O(n log² n). The name of the algorithm was inspired by the concentric circles in the
diagram, which clearly illustrate the subsequent comparisons within each iteration of a cycle.
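The cost of a single cycle follows from a simple recurrence for the number of comparisons (a sketch, assuming n is a power of two):

```latex
% One cycle does n/2 outside-in comparisons, then recurses on two halves:
C(n) = 2\,C\!\left(\tfrac{n}{2}\right) + \tfrac{n}{2}, \qquad C(1) = 0
\quad\Longrightarrow\quad C(n) = \tfrac{n}{2}\log_2 n = O(n \log n).
% With about log n cycles needed on average, the total work is O(n log^2 n).
```
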
Figure 2 below shows how the algorithm behaves when the list size is odd. The list is split into
two, with the overlapping element in gray. Dotted lines indicate the pointers that are used to
assemble these lists: one from the original list and one that is obtained by switching the
elements. It can clearly be seen that the sublists always end up with an even number of elements,
because the centre element overlaps.
FIGURE 2: Circlesort when the list size is odd.
A C implementation is presented in Listing 1. The outer loop calculates the indexes of the first
and last elements and passes them as parameters to the inner loop. The inner loop compares the
elements at those indexes and swaps them if required. The number of swaps is maintained in
variable s. The indexes are adjusted until they cross each other in the middle of the list. The
list is then split into two by combining the original start index with the adjusted end index,
and the adjusted start index with the original end index. Both pairs of indexes are again passed
to the inner loop, and the number of swaps performed during the recursions is added to variable s.
This algorithm works for lists of any size. However, lists whose size is a power of two allow
parallelization. The algorithm is very simple and can easily be memorized. It consists of only
two loops and two branches, which is only marginally more complex than other well-known simple
sorting algorithms like Bubble Sort, Simple Sort and Selection Sort. However, its worst-case
performance is much better.
#include <stdio.h>

/* Compare and Swap were assumed by the original listing; minimal
   versions are supplied here so the program compiles. */
int Compare (int* x, int* y) { return (*x > *y); }
void Swap (int* x, int* y) { int t = *x; *x = *y; *y = t; }

/* Circlesort inner loop: compares elements outside-in, swapping where
   needed, then recurses on both halves; returns the number of swaps. */
int CircleSort (int* a, int* b)
{
  int* sta = a;
  int* end = b;
  int s = 0;

  if (sta == end) return (0);

  while (sta < end) {
    if (Compare (sta, end)) {
      Swap (sta, end);
      s++;
    }
    sta++; end--;
  }

  s += CircleSort (a, end);
  s += CircleSort (sta, b);
  return (s);
}

int main (void) {
  /* array declaration and initialization */
  int myarray [] = {5, 3, 1, 4, 2};
  int n = sizeof (myarray) / sizeof (myarray [0]);

  /* Circlesort outer loop: repeat cycles until a cycle makes no swaps */
  while (CircleSort (myarray, myarray + n - 1));

  for (int i = 0; i < n; i++) printf ("%d ", myarray [i]);
  printf ("\n");
  return 0;
}
LISTING 1: C implementation of Circlesort.
5. PERFORMANCE OF CIRCLESORT
Two approaches are usually used to measure the performance of an algorithm [15]: the experimental
approach and the analytical approach. The experimental approach involves carrying out an
experiment to determine the running time and the space used by the algorithm implemented in a
program, while the analytical approach involves identifying the factors the memory space and the
running time depend on and calculating their respective contributions. The experimental approach
was adopted in this study. The algorithm was tested using a sorted array (best-case situation),
an unsorted array, a partially sorted array (average case) and an inverted array (worst-case
situation), and the results were compared with Heapsort, Shellsort, Quicksort, Modified
Diminishing Increment Sorting (MDIS) and Introsort. The results are presented in Tables 1 to 4.
5.1 Results
In the experimentation of the algorithm, the sets used were obtained by randomizing an
incrementing sequence of numbers, without any duplicates. Table 1 shows the results for a sorted
array, Table 2 for an unsorted array randomized by applying a Knuth shuffle to the sorted array,
Table 3 for a partially sorted array and Table 4 for an inverted array.
Entries are Compares / Swaps / Total Operations for each list size; algorithms are listed from
best to worst.

Algorithm   n = 100             n = 1,000               n = 10,000                  n = 100,000
MDIS        149 / 0 / 149       1,499 / 0 / 1,499       14,999 / 0 / 14,999         149,999 / 0 / 149,999
Circle      372 / 0 / 372       5,052 / 0 / 5,052       71,712 / 0 / 71,712         877,968 / 0 / 877,968
Shell       503 / 0 / 503       8,006 / 0 / 8,006       120,005 / 0 / 120,005       1,500,006 / 0 / 1,500,006
Quick       480 / 345 / 825     7,987 / 4,960 / 12,947  113,631 / 66,421 / 180,052  1,468,946 / 846,100 / 2,315,046
Intro       574 / 371 / 945     11,107 / 6,452 / 17,559 170,968 / 104,236 / 275,204 2,386,569 / 1,307,525 / 3,694,094
Heap        1,081 / 640 / 1,721 17,583 / 9,708 / 27,291 244,460 / 131,956 / 376,416 3,112,517 / 1,650,854 / 4,763,371

TABLE 1: Sorted Array.
The set was randomized by applying a Knuth shuffle to the sorted set. Entries are Compares /
Swaps / Total Operations; within each list size, algorithms are listed from best to worst.

n = 100:     Intro 581 / 399 / 980; Quick 656 / 496 / 1,152; Shell 840 / 392 / 1,232; Heap 1,025 / 588 / 1,613; Circle 2,604 / 426 / 3,030; MDIS 1,717 / 1,596 / 3,313
n = 1,000:   Quick 10,815 / 6,585 / 17,400; Intro 12,342 / 7,097 / 19,439; Shell 15,141 / 7,662 / 22,803; Heap 16,868 / 9,096 / 25,964; Circle 50,520 / 9,218 / 59,738; MDIS 168,568 / 167,330 / 335,898
n = 10,000:  Quick 156,257 / 92,747 / 249,004; Intro 180,411 / 96,470 / 276,881; Heap 235,279 / 124,114 / 359,393; Shell 254,343 / 139,442 / 393,785; Circle 1,075,680 / 187,088 / 1,262,768; MDIS 16,906,048 / 16,893,598 / 33,799,646
n = 100,000: Quick 1,933,288 / 1,061,619 / 2,994,907; Intro 2,585,629 / 1,468,727 / 4,054,356; Heap 3,019,553 / 1,574,977 / 4,594,530; Shell 4,248,005 / 2,798,437 / 7,046,442; Circle 16,681,392 / 3,436,571 / 20,117,963; MDIS 1,664,412,460 / 1,664,287,655 / 3,328,700,115

TABLE 2: Unsorted Array.
The partially sorted set was obtained by sorting half the number of elements of the unsorted set.
6. DISCUSSION
The behaviour of the algorithm has been extensively studied and the following observations were
made:
FIGURE 3: Distribution of 100 Elements After One Cycle.
After one cycle, the largest and the smallest elements have been placed in their proper positions.
Reducing the set accordingly does not lead to any significant optimization, since the number of
cycles is logarithmically bound; as a matter of fact, the number of swaps and comparisons
required actually increased. After even one cycle, the set is already partially sorted and takes
on a "saw-tooth" like shape, as shown in Figure 3.
The number of swaps drops significantly around half the number of cycles required to sort the set.
If the set contains a significant number of duplicates, the set is almost completely sorted at that
point (see Figure 4). If there are no duplicates in the set, the number of swaps per cycle stabilizes
around this point and starts dropping quickly again in the last few cycles (see Figure 5).
In the last few cycles it seems that swaps are more concentrated in the centre of the subsets.
This observed behaviour could not yet be turned into an optimization of the algorithm.
FIGURE 4: Number of swaps per cycle in sets with duplicates.
FIGURE 5: Number of swaps per cycle in sets without duplicates.
This would suggest that, at least in certain situations, the Circlesort algorithm could benefit
from further improvements, like finishing the sort using a different, better-suited algorithm
which can take advantage of the partially sorted state of the set. This, of course, would forfeit
the elegant simplicity of the algorithm.
How economically the algorithm can be implemented was investigated by comparing the number of
bytecodes it generates in 4tH, an implementation of the Forth language (see Table 5); in case of
multiple implementations or variants, the smallest was selected. It is clear that Circlesort is
the smallest of all the investigated algorithms apart from the quadratic ones: it is only slightly
larger than Insertion Sort, though significantly larger than Simple Sort.
Algorithm Size (bytecode)
Simple Sort 30
Insertion Sort 44
Circlesort 61
Shellsort 78
Heapsort 87
MDIS 87
Quicksort 90
TABLE 5: Number of bytecodes per algorithm.
When comparing the results, the number of swaps Circlesort requires is comparable to other
sorting algorithms, as long as the set is relatively small. The number of comparisons, however,
is significantly higher and rises further as the size of the set increases. On the one hand,
although the results suggest that the algorithm benefits from a partially sorted set, the effect
is considered too small to justify the conclusion that Circlesort is an "adaptive" algorithm. On
the other hand, the effect of a low number of distinct keys compared to the number of elements is
too dramatic to ignore: in this case, Circlesort performs significantly better.
Due to the characteristics of the algorithm, Circlesort performs much better on an inverted set.
However, if one element is displaced, this advantage diminishes significantly. This effect is
largely due to the number of comparisons it must make; the number of swaps required is the same
as that of the best-performing algorithm in this situation, MDIS. Circlesort is the second
fastest algorithm when a set is completely sorted and when it is inverted, being outperformed
only by MDIS. There seems to be no situation where the behaviour of Circlesort becomes
pathological, since it does not require a pivot or a special distribution of values in the set.
Since Circlesort compares elements separated by large gaps, there is no indication that
Circlesort suffers from slow-moving elements ("turtles").
From the results presented in the tables above, it can clearly be seen that for all sizes of an
already sorted list, the performance from best to worst in efficiency is: MDIS, Circlesort,
Shellsort, Quicksort, Introsort and Heapsort.
For an unsorted list obtained by randomizing the sorted set using a Knuth shuffle, the
performance in order of efficiency is as follows. For a set containing 100 elements: Introsort,
Quicksort, Shellsort, Heapsort, Circlesort and MDIS. For sets of 1,000, 10,000 and 100,000
elements: Quicksort, Introsort, then Shellsort and Heapsort (which exchange places as the size
grows), with Circlesort and MDIS last.
From these results, it is clear that Introsort and Quicksort are the most efficient for a
randomized list while Circlesort and MDIS are the worst; Heapsort and Shellsort have average
efficiency. As the size of the list increases, the number of operations required by MDIS grows
far faster than that of any other algorithm considered.
For all sizes of a partially sorted list, Introsort and Quicksort are the most efficient, in that
order, while MDIS and Circlesort are the most inefficient. Shellsort and Heapsort perform averagely.
In the case of an inverted list, for all sizes of the list, the performance from best to worst
is: MDIS, Circlesort, Quicksort, Shellsort, Introsort and Heapsort.
7. CONCLUSION
Circlesort has proven that the underlying principle of MDIS can be turned into a full-fledged
sorting algorithm, which is not only simple and elegant, but also outperforms some other known
sorting algorithms in some instances. The fact that it can easily be turned into a parallelized
version for sets with a size of a power of two, and that the behaviour of the algorithm suggests
that further optimizations are feasible, justifies in our opinion further study and research.
Since "There is no known 'best' way to sort; there are many best methods, depending on what is
to be sorted, on what machine and for what purpose" [2], Circlesort adds to the list of simple-to-
implement sorting algorithms for those concerned about simplicity. The algorithm is therefore
recommended for sorting in the best-case and worst-case scenarios because of its efficiency in
these situations. Further research will be carried out on the following questions:
i. Would a single bout of Quicksort (non-recursive) boost the algorithm?
ii. Would the algorithm benefit from a Selection Sort when the sets become small enough?
iii. There is a point where only a small percentage of the elements are unsorted. This point is
reached much faster when there are duplicates. Still, it takes several iterations to move these
elements into their proper places. How far are these elements at that point from their required
positions?
iv. Are there pure-form (non-hybrid) derivatives that perform better?
8. REFERENCES
[1] M. A. Weiss. Data Structures and Algorithm Analysis in C++, 3rd edition. Boston: Pearson
Addison-Wesley, 2006, p. 266.
[2] D. E. Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, 2nd edition.
Boston: Addison-Wesley, 1998, pp. 74, 83, 84, 93.
[3] D. L. Shell. "A High-Speed Sorting Procedure." Communications of the ACM, vol. 2, pp. 30-32,
Jul. 1959.
[4] A. A. Papernov and G. V. Stasevich. "A Method of Information Sorting in Computer Memories."
Problems of Information Transmission, vol. 1, pp. 63-75, 1965.
[5] M. O. Oyelami. "A Modified Diminishing Increment Sort for Overcoming the Search for Best
Sequence of Increment for Shellsort." Journal of Applied Sciences Research, vol. 4, pp. 760-766,
2008.
[6] O. M. Oyelami (2008, August). "Improving the performance of bubble sort using a modified
diminishing increment sorting." Scientific Research and Essay [On-line]. 4(8), pp. 740-744.
Available: http://www.academicjournals.org/journal/SRE/article-stat/2A1D65C19516
[October 31, 2016].
[7] M. O. Oyelami and I. O. Akinyemi (2011, April). "Improving the Performance of Quicksort for
Average Case Through a Modified Diminishing Increment Sorting." Journal of Computing [On-line].
3(4), pp. 193-197. Available:
https://www.scribd.com/document/54847050/Improving-the-Performance-of-Quicksort-for-
Average-Case-Through-a-Modified-Diminishing-Increment-Sorting [October 31, 2016].
[8] O. M. Oyelami (2013, November). "Bidirectional Bubble Sort Approach to Improving the
Performance of Introsort in the Worst Case Size for Large Input." International Journal of
Experimental Algorithms [On-line]. 4(2), pp. 17-24. Available:
http://www.cscjournals.org/library/ma.scriptinfo.php?mc=IJEA-35 [October 31, 2016].
[9] X. Yi-Si, X. P. Liu and X. Shao-Ping. "Efficient collision detection based on AABB trees and
sort algorithm," in Proc. 8th IEEE International Conference on Control & Automation (ICCA '10),
2010, pp. 328-332.
[10] C. G. Plaxton, B. Poonen and T. Suel. "Improved lower bounds for Shellsort," in Proc. 33rd
IEEE Symp. on Foundations of Computer Science, 1992, pp. 226-235.
[11] T. Jiang, M. Li and P. Vitányi (2000, September). "A lower bound on the average-case
complexity of Shellsort." Journal of the ACM [On-line]. 47(5), pp. 905-911. Available:
http://homepages.cwi.nl/~paulv/papers/shellsort.pdf [December 20, 2016].
[12] M. T. Goodrich. "Randomized Shellsort: a simple oblivious sorting algorithm," in Proc.
Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, 2010, pp. 1262-1277.
[13] R. Sedgewick. "Analysis of Shellsort and related algorithms," in Proc. ESA '96: The Fourth
Annual European Symposium on Algorithms, 1996, pp. 1-11.
[14] W. Dobosiewicz. "An efficient variation of bubble sort." Inf. Process. Lett., vol. 11,
pp. 5-6, Jan. 1980.
[15] S. Sahni. Data Structures, Algorithms and Applications in Java, International Edition.
Boston, Massachusetts: McGraw-Hill, 2000, p. 67.