Abstract: Sorting a list means selecting the particular permutation of its members in which they appear in increasing or decreasing order. A sorted list is a prerequisite for several optimized operations, such as searching for an element, inserting or removing an element, and merging two sorted lists in a database. As the volume of information around us grows day by day, and these data must be managed in real-life situations, efficient and cost-effective sorting algorithms are required. Although many fundamental and problem-oriented sorting algorithms exist, sorting still attracts a great deal of research, perhaps because of the difficulty of solving it efficiently and effectively despite its simple and familiar statement. Algorithms that accomplish the same task by different mechanisms necessarily differ in the time and space they require, so an algorithm is chosen according to one's needs with respect to space complexity and time complexity. Nowadays memory is comparatively cheap, so time complexity is the major concern. The approach presented here sorts a list in linear time and space using the divide-and-conquer rule, partitioning the problem into n (the input size) subproblems that are then solved recursively. The time and space required by the algorithm are optimized by reducing the height of the recursion tree, and the reduced height is too small (compared with the problem size) to matter in the evaluation. The asymptotic efficiency of this algorithm is therefore very high with respect to both time and space.
Keywords: sorting, searching, permutation, divide and conquer algorithm, asymptotic efficiency, space complexity, time complexity, recursion.
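The abstract claims linear-time sorting by partitioning into n subproblems and shrinking the recursion-tree height, but the specific algorithm is not given here. As a baseline for comparison, the classic divide-and-conquer pattern it builds on can be sketched with merge sort, which runs in O(n log n) time; this is a reference sketch, not the paper's proposed algorithm:

```python
def merge_sort(items):
    """Classic divide-and-conquer sort: split, recurse on halves, merge."""
    if len(items) <= 1:
        return items                      # base case: trivially sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])        # conquer left half
    right = merge_sort(items[mid:])       # conquer right half
    # Merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```

Here the recursion tree has height about log2(n); the paper's stated goal is to make that height much smaller relative to the problem size.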
In computer science, a sorting algorithm is an algorithm that puts the elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms (such as search and merge algorithms) that require their input data to be in sorted lists; it is also often useful for canonicalizing data and for producing human-readable output. More formally, the output must satisfy two conditions:
The output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order);
The output is a permutation (reordering) of the input.
Further, the data is often taken to be in an array, which allows random access, rather than a list, which only allows sequential access, though often algorithms can be applied with suitable modification to either type of data.
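The two formal conditions above can be expressed directly as a check on a candidate output (a minimal sketch; the function name is illustrative):

```python
from collections import Counter

def is_valid_sort(output, original):
    """Check the two conditions a sorting algorithm's output must satisfy:
    (1) nondecreasing order, (2) a permutation of the input."""
    nondecreasing = all(output[i] <= output[i + 1]
                        for i in range(len(output) - 1))
    is_permutation = Counter(output) == Counter(original)
    return nondecreasing and is_permutation

print(is_valid_sort([1, 2, 2, 3], [2, 1, 3, 2]))  # True
print(is_valid_sort([1, 3, 2], [1, 2, 3]))        # False: not nondecreasing
```

Using multisets (Counter) rather than sets makes the permutation check respect duplicate elements.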
Effect of use of recycled materials on indirect tensile strength of asphalt c...eSAT Journals
Abstract
Depletion of natural resources and aggregate quarries for the road construction is a serious problem to procure materials. Hence
recycling or reuse of material is beneficial. On emphasizing development in sustainable construction in the present era, recycling of
asphalt pavements is one of the effective and proven rehabilitation processes. For the laboratory investigations reclaimed asphalt
pavement (RAP) from NH-4 and crumb rubber modified binder (CRMB-55) was used. Foundry waste was used as a replacement to
conventional filler. Laboratory tests were conducted on asphalt concrete mixes with 30, 40, 50, and 60 percent replacement with RAP.
These test results were compared with conventional mixes and asphalt concrete mixes with complete binder extracted RAP
aggregates. Mix design was carried out by Marshall Method. The Marshall Tests indicated highest stability values for asphalt
concrete (AC) mixes with 60% RAP. The optimum binder content (OBC) decreased with increased in RAP in AC mixes. The Indirect
Tensile Strength (ITS) for AC mixes with RAP also was found to be higher when compared to conventional AC mixes at 300C.
Keywords: Reclaimed asphalt pavement, Foundry waste, Recycling, Marshall Stability, Indirect tensile strength.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...Amil Baba Dawood bangali
Contact with Dawood Bhai Just call on +92322-6382012 and we'll help you. We'll solve all your problems within 12 to 24 hours and with 101% guarantee and with astrology systematic. If you want to take any personal or professional advice then also you can call us on +92322-6382012 , ONLINE LOVE PROBLEM & Other all types of Daily Life Problem's.Then CALL or WHATSAPP us on +92322-6382012 and Get all these problems solutions here by Amil Baba DAWOOD BANGALI
#vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore#blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #blackmagicforlove #blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #Amilbabainuk #amilbabainspain #amilbabaindubai #Amilbabainnorway #amilbabainkrachi #amilbabainlahore #amilbabaingujranwalan #amilbabainislamabad
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
IJRET: International Journal of Research in Engineering and Technology  eISSN: 2319-1163 | pISSN: 2321-7308
Volume: 03 Issue: 11 | Nov-2014, Available @ http://www.ijret.org

A UNIQUE SORTING ALGORITHM WITH LINEAR TIME & SPACE COMPLEXITY

Sanjib Palui¹, Somsubhra Gupta²
¹PLP, CVPR Unit, Indian Statistical Institute, WB, India
²HOD, Information Technology, JISCE, WB, India
Abstract
Sorting a list means selecting the particular permutation of its members in which the final arrangement is in increasing or decreasing order. A sorted list is a prerequisite for several optimized operations, such as searching for an element in a list, locating or removing an element, and merging two sorted lists in a database. As the volume of information around us grows day by day and this data must be managed in real-life situations, efficient and cost-effective sorting algorithms are required. There are several fundamental and problem-oriented sorting algorithms, yet sorting still attracts a great deal of research, perhaps because of the difficulty of solving it efficiently and effectively despite its simple and familiar statement.

Algorithms of equal efficiency that do the same work by different mechanisms may still differ in the time and space they require. For that reason, an algorithm is chosen according to one's needs with respect to space complexity and time complexity. Nowadays memory is available at a comparatively cheap cost, so time complexity is the major concern for an algorithm. The approach presented here sorts a list in linear time and space using the divide-and-conquer rule: a problem is partitioned into n (input size) sub-problems, which are then solved recursively. The time and space required by the algorithm are optimized by reducing the height of the recursion tree, and the reduced height is too small (compared to the problem size) to matter. Hence the asymptotic efficiency of this algorithm is very high with respect to both time and space.

Keywords: sorting, searching, permutation, divide and conquer algorithm, asymptotic efficiency, space complexity, time complexity, recursion
--------------------------------------------------------------------***----------------------------------------------------------------------
1. INTRODUCTION
An algorithm [1] [2] [3] [6] [7] is a finite set of instructions that, if followed, accomplishes a particular task. An algorithm must satisfy characteristics such as having input and output, definiteness, finiteness, and effectiveness.
Sorting [2] [5] [8] a list means producing the particular permutation of a given sequence in which the elements of the sequence are arranged in increasing/decreasing order. Sorting algorithms used in computer science are often classified by:
- Computational complexity [5] [9] [10] (worst, average and best behavior) of element comparisons in terms of the size of the list.
- Computational complexity of swaps (for "in place" algorithms), sometimes characterized in terms of the performance the algorithms yield and the amount of time they take.
- Requirements of memory [11] [12] and other computer resources.
- Recursion [13]: some algorithms are either recursive or non-recursive, while others may be both.
- Stability: stable sorting algorithms maintain the relative order of records with equal keys (values).
- Whether or not they are a comparison sort. A comparison sort examines the data only by comparing two elements with a comparison operator.
- General method: insertion, exchange, selection, merging, etc.
- Adaptability: whether or not the pre-sortedness of the input affects the running time. Algorithms that take this into account are known to be adaptive.
Nowadays there are many effective sorting algorithms. Although many researchers are working on sorting, the number of new, innovative and cost-effective contributions is, unlike in other fields of research, very small.

We have designed and applied a sorting algorithm that achieves linear time complexity. In this paper this new algorithm is proposed. Compared to existing algorithms it gives better results, and since it has linear time and space complexity we have named it "A Unique Sorting Algorithm with Linear Time & Space Complexity".

Our work is organized as follows: previous related works are given in Section 2, the algorithm of the proposed work in Section 3, and the analysis of the algorithm in Section 4; finally, the conclusion is given in Section 5, followed by the references.
2. PREVIOUS RELATED WORKS
Some well-known sorting algorithms are bubble sort, selection sort, insertion sort, merge sort, quick sort, heap sort, radix sort, cocktail sort, shell sort, etc. All these algorithms can be classified according to their average-case and worst-case time complexity (asymptotic complexity: big-theta notation Θ and big-oh notation O, respectively). The time complexities of some algorithms are given in Table 1, together with whether or not they are stable. In Table 1, n is the input size of the list to be sorted, d is the number of digits in the largest number among the inputs, and k is the number of possible digits/words (for example, k = 10 for decimal).

Some comparative studies [11] [14] [15] [16] have been carried out in this field, and the situations to which the algorithms of Table 1 are better suited are clearly identified there. Depending on the application or data structure, whether a modified version of an existing algorithm has lower complexity than the original, or which of these algorithms is best fitted to a particular circumstance, has been discussed in several proposed approaches [17] [18] [19] [20].
Table -1: Information about some algorithms

Name of the Algorithm   Average Case    Worst Case      Stable?
Bubble sort             Θ(n²)           O(n²)           Yes
Selection sort          Θ(n²)           O(n²)           No
Insertion sort          Θ(n²)           O(n²)           Yes
Merge sort              Θ(n log₂ n)     O(n log₂ n)     Yes
Quick sort              Θ(n log₂ n)     O(n²)           No
Bucket sort             Θ(d(n+k))       O(n²)           Yes
Heap sort               Θ(n log₂ n)     O(n log₂ n)     No
Depending on the inputs and some predefined conditions, some new algorithms [18] [21] [22] have been proposed to achieve better complexity. Some algorithms are designed for linear complexity [23] [24], but they can only be executed on system platforms having the special characteristics these algorithms demand. Poll sort, simple pancake sort and bead sort are examples of this type of sorting algorithm. The pancake sorting algorithm is not stable but has linear time complexity. The bead sort algorithm requires special hardware to execute. The poll sort algorithm takes linear time, but it is an analog algorithm for sorting a sequence of items requiring O(n) stack space; the sorting is stable, but n (the number of elements in the input list) parallel processors are required.

There is an algorithm called Randomized Select [24] [25] for sorting. The expected running time of this algorithm is Θ(n), a linear asymptotic running time where n is the input size of the problem. This algorithm works like randomized quick-sort [26] (where the pivot element is selected randomly from the list). The two main constraints of this algorithm are:
- All the elements in the input sub-problem are distinct.
- Partitioning is done based on a random selection of an element.

The complexities of algorithms depend strongly on input size, cache performance, etc.; detailed descriptions of algorithm complexities are given in some notable works [28] [29] [30].
3. ALGORITHM OF PROPOSED WORK
The pseudo-code of our work, "A Unique Sorting Algorithm with Linear Time & Space Complexity", is given below.

Sort (A, lb, ub)
Here A is a 1-D input list of decimal integers with n elements in memory, where n = ub - lb + 1, ub = upper bound and lb = lower bound of the list.

Begin
 1. if (lb < ub) then
 2.     find min and max
 3.     if (min != max) then
 4.         set n = ub - lb + 1
 5.         create n number of empty Lists
 6.         set div = (max - min)/n + 1
 7.         for i = lb to ub by 1 do
 8.             j = (A[i] - min)/div
 9.             add A[i] to the jth List
10.         end for
11.         set k = lb
12.         for i = 0 to n by 1 do
13.             set l = k
14.             set size = number of elements in ith List
15.             for j = 0 to size by 1 do
16.                 A[k] = jth element of ith List
17.                 set k = k + 1
18.             end for
19.             if ((l < k-1) && (!((l == lb) && (k == ub))))
20.                 call Sort(A, l, k-1)
21.             end if
22.         end for
23.     end if
24. end if
End

Algorithm 1.
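As a concrete illustration, Algorithm 1 can be sketched in Python. This is a minimal sketch, not the authors' original code; the function name unique_sort and the use of Python lists for the n index lists are our own choices. The guard of line 19 against recursing on the whole range is omitted here, since when min != max the values min and max always fall into different buckets, so no single bucket can contain the entire range.

```python
def unique_sort(A, lb, ub):
    """In-place ascending sort of A[lb..ub], following Algorithm 1."""
    if lb >= ub:
        return
    lo = min(A[lb:ub + 1])            # line 2: find min and max
    hi = max(A[lb:ub + 1])
    if lo == hi:                      # line 3: all elements equal, nothing to do
        return
    n = ub - lb + 1
    buckets = [[] for _ in range(n)]  # line 5: n initially empty lists
    div = (hi - lo) // n + 1          # line 6: guarantees 0 <= j < n below
    for i in range(lb, ub + 1):
        j = (A[i] - lo) // div        # line 8: each element finds its bucket
        buckets[j].append(A[i])       # stable: insertion order is preserved
    k = lb
    for bucket in buckets:            # lines 12-22: write buckets back into A
        l = k
        for x in bucket:
            A[k] = x
            k += 1
        if l < k - 1:                 # bucket holds >= 2 elements: recurse
            unique_sort(A, l, k - 1)

data = [170, 45, 75, 90, 2, 802, 24, 66]
unique_sort(data, 0, len(data) - 1)
print(data)  # [2, 24, 45, 66, 75, 90, 170, 802]
```

Because each element is appended to its bucket in input order and copied back in that order, equal keys keep their relative positions, matching the stability claim of Section 3.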
min and max in line number 2 of Algorithm 1 are the minimum and maximum numbers of the list, respectively. For every call of the Sort(x, y, z) function we have designed one variable, div, which plays the major role in the pseudo-code above. With the help of the div variable, every element of the input list determines its own index, i.e. the position it will take in the output list [line number 8 of Algorithm 1]. All elements of the input list with the same index (as generated in line number 8 of Algorithm 1) are then solved as sub-problems [27]; elements with the same index are treated as one sub-problem. For an output in increasing order:
- Every sub-problem (except the left-most), taken as a single unit, has all smaller elements of the input list to its left.
- Every sub-problem (except the right-most), taken as a single unit, has all larger elements of the input list to its right.
- Every sub-problem can have 0 to n-1 elements.
- If the number of elements in a sub-problem is one, it takes unit time to sort.
- Ideally, a list with n elements is partitioned into n sub-problems, each of which has one element.
- This takes linear time complexity, O(n).
- The relative positions of two elements having the same value are not changed, so this algorithm is stable.
The other strategies followed to execute this algorithm properly and efficiently are:
- If the input list is a combination of positive and negative integers, two lists are formed: one for the negative numbers and another for the positive numbers. All elements of the negative list are first made positive. After the algorithmic operations have sorted them, the signs of the elements are changed back and the list is reversed. The same operations are performed on the positive list. Finally, the two lists are concatenated.
- If the numbers are real, this algorithm can be applied easily: the problem is divided into sub-problems according to the absolute values of the elements, and if the absolute values of all elements are the same, partitioning is done based on their precision values.
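The mixed-sign strategy can be sketched as follows. This is an illustrative wrapper of our own (the name sort_signed is hypothetical), and Python's sorted() stands in for Algorithm 1, which as described operates on non-negative integers:

```python
def sort_signed(values):
    # Split into magnitudes of the negatives and the non-negatives;
    # sorted() stands in for Algorithm 1 on each non-negative list.
    negatives = sorted(-v for v in values if v < 0)   # magnitudes, ascending
    positives = sorted(v for v in values if v >= 0)
    # Restore the signs and reverse, so the most negative value comes first,
    # then concatenate the two sorted parts.
    return [-v for v in reversed(negatives)] + positives

print(sort_signed([3, -1, -7, 4, 0, -2]))  # [-7, -2, -1, 0, 3, 4]
```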
4. ANALYSIS OF THE ALGORITHM
If the input numbers are uniformly distributed, almost no recursion happens; otherwise, after the first partition of the array, two or more almost uniformly distributed sub-arrays are produced, and the complexity depends linearly on n. After a partition of the array, some input elements are placed in their final positions, just as in quick sort one element is fixed in its exact position per partition; here the number of fixed elements per partition ranges from 1 to n, where n is the number of elements to be sorted.
4.1 Time Complexity
The asymptotic time complexities [3] [4] of this algorithm in the different cases are described here.

4.1.1 Best Case
If the elements are uniformly distributed, then ideally the problem is divided into n sub-problems, so no recursive call is executed, because almost every sub-problem has one element. So the required time is

T(n) = c + T(1) + T(1) + ... + T(1)   (n times)
     ≡ c + O(n), where c is a constant time
     ≡ O(n), as n is very large
4.1.2 Average Case
If the number of elements n is very high, in the first function call the divided sub-problems will be almost uniformly distributed. Let the recursive call happen m times, where m < n, and let the sub-problems have n1, n2, ..., nm elements respectively. So

n1 + n2 + ... + nm + p = n    ---- Eq. (1)

where p is the number of elements already placed in the input list as sorted elements. So the time complexity is

T(n) = c·n + T(n1) + T(n2) + ... + T(nm), where c is a constant time
     = c·n + c·n1 + c·n2 + ... + c·nm
       [from the best case: as the sub-problems are uniformly distributed over
       n1, n2, ..., nm elements, there is little chance for every sub-problem
       to give the average-case time complexity]
     = c·n + c·(n1 + n2 + ... + nm)
     ≤ c·n + c·n    [from Eq. (1)]
     = 2cn
     ≡ O(n)
4.1.3 Worst Case
The algorithm exhibits its worst-case time complexity when:
- The input elements are randomly distributed.
- All elements except the largest one of the input list have values less than the value of div as generated in line number 6 of Algorithm 1, and this happens again and again for every sub-problem.
- The input series is one of all possible permutations of the elements of the series

  a_1 = a,  a_i = i·a_(i-1) + c  for all i = 2 to n    ---- Eq. (2)

  where a, the starting element of the series, has a positive value and c >= 0.
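Under our reading of the recurrence of Eq. (2), such an adversarial series can be generated as follows (the helper name worst_case_series is our own):

```python
def worst_case_series(n, a=1, c=0):
    # Eq. (2): a_1 = a, a_i = i * a_(i-1) + c for i = 2 .. n
    series = [a]
    for i in range(2, n + 1):
        series.append(i * series[-1] + c)
    return series

print(worst_case_series(6))  # [1, 2, 6, 24, 120, 720]
```

For a = 1 and c = 0 this is the factorial sequence: the gap between the largest element and the rest grows so fast that div is dominated by the largest element and almost all other elements land in the same partition, which is exactly the bad case described above.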
4.1.3.1 Case-I
According to the proposed algorithm, in that situation the time complexity is

T(n) = c·n + T(n-1)
     ≡ O(n²)

This is possible in theory but hardly in practice, because we are talking about asymptotic time complexity, so the number of input elements is very high. Every real-life database collects and stores similar kinds of data, and the chance that even some of the elements of the sorted output series satisfy Eq. (2) is very small. So for every real-life problem this algorithm gives the average-case time complexity.
4.1.3.2 Case-II
To handle that worst-case situation, we have added some more instructions to the original algorithm (Algorithm 1), between line number 3 and line number 4:
- The second-largest number is found along with the largest number.
- A check is made against the series of Eq. (2).
- If it matches, then max (the variable of Algorithm 1) is set to the second-largest number instead of the largest number.
After this it is seen that the problem is partitioned into at least three sub-problems. The modification is empirically designed in such a way that the worst-case time complexity of the algorithm for inputs satisfying Eq. (2) becomes O(n), as recorded in Table 2.
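One possible reading of this guard, sketched in Python. The helper name effective_max and the exact form of the series check are our assumptions; the full sort inside the check is used here only to make the idea explicit, whereas the paper implies the check can be combined with the min/max scan:

```python
def effective_max(chunk):
    # Case-II guard (sketch): if the values follow Eq. (2), return the
    # second-largest value to use as `max`, so at least three buckets form.
    s = sorted(chunk)               # for clarity only; not part of Algorithm 1
    if len(s) < 3:
        return s[-1]
    c = s[1] - 2 * s[0]             # Eq. (2) fixes c from the first two terms
    if c >= 0 and all(s[i - 1] == i * s[i - 2] + c
                      for i in range(3, len(s) + 1)):
        return s[-2]                # series matched: second-largest wins
    return s[-1]                    # no match: the ordinary maximum

print(effective_max([1, 2, 6, 24, 120]))  # 24 (the series matches Eq. (2))
print(effective_max([3, 9, 1, 7, 5]))     # 9  (no match: ordinary maximum)
```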
4.2 Space Complexity
The space complexities [4] [8] in the different cases depend entirely on the number of times the subroutine (described in Algorithm 1) is called recursively, and on the input size of the sub-problem for which the subroutine is called.

When the input size is n, this algorithm needs additional memory for the n lists, which are initially empty and store the elements with the same index while the inputs are being processed (Algorithm 1, line numbers 5 and 9). The total number of elements across the n lists is n. So the memory required for every subroutine call is

n (for input elements) + n (for index lists) + c = 2n + c

where c is the memory for the other variables. So the asymptotic space complexity is also linearly dependent on the input size, as this algorithm is implemented to reduce the depth of the recursion tree.

The complexity and stability details of the presented algorithm are given in Table 2.
Table -2: Information about the presented algorithm

Case           Time Complexity                Space Complexity          Stable?
Best Case      O(n)                           Same as time complexity   Yes
Average Case   O(n)                           Same as time complexity   Yes
Worst Case     O(n²); O(n) only for Eq. (2)   Same as time complexity   Yes
Some experimental results for this algorithm, with the input sizes and required times, are given in Table 3, compared with the results of some other well-known algorithms under the same computing environment, paradigm and inputs.

Table -3: Experimental results (required time in milliseconds)

Input Size   Bubble Sort   Selection Sort   Insertion Sort   Merge Sort   Proposed Approach
15000        1248          442              328              7            7
30000        4863          1746             1325             22           13
60000        19904         7056             6883             71           25
90000        43550         15750            14921            150          27
120000       76099         28917            25132            265          36
150000       122416        44925            38081            394          45
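Absolute timings like those in Table 3 depend heavily on hardware, language and input distribution. A minimal harness of the kind one could use to reproduce such measurements is sketched below (our own helper, timing Python's built-in sort as a stand-in; any in-place sort function with the same interface can be plugged in):

```python
import random
import time

def time_sort_ms(sort_fn, data):
    # Time one in-place sort of a copy of `data`, in milliseconds.
    work = list(data)
    start = time.perf_counter()
    sort_fn(work)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    assert work == sorted(data)     # sanity check: the result is sorted
    return elapsed_ms

random.seed(42)                     # fixed seed for repeatable inputs
data = [random.randint(0, 10**6) for _ in range(15000)]
print(f"built-in sort on 15000 ints: {time_sort_ms(list.sort, data):.2f} ms")
```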
5. CONCLUSIONS
The presented algorithm has been implemented successfully through a repeatable cycle of design, execution and analysis. The advantages of this algorithm are its speed, its lower memory requirement compared with existing algorithms, and its stability. The selection of a sorting algorithm is application- and situation-dependent, but this algorithm works well in every field. For real-life sorting problems, the time complexity and space complexity of this algorithm are linear.

There is broad future scope for experimenting with the proposed algorithm on some not-yet-covered real-life test suites to find out its shortcomings (if any).
REFERENCES
[1]. D.E. Knuth, Fundamental Algorithms, The Art of Computer Programming: Vol. 1, Addison-Wesley, 1968. Third Edition, 1997.
[2]. D.E. Knuth, Sorting and Searching, The Art of Computer Programming: Vol. 3, Addison-Wesley, 1973. Second Edition, 1998.
[3]. D.E. Knuth, Seminumerical Algorithms, The Art of Computer Programming: Vol. 2, Addison-Wesley, 1969. Third Edition, 1997.
[4]. D.E. Knuth, Big omicron and big omega and big theta. SIGACT News, 8(2):18-23, 1976.
[5]. Cormen T., Leiserson C., Rivest R., and Stein C., Introduction to Algorithms, McGraw Hill, 2001.
[6]. A. V. Aho, J. E. Hopcroft and J. D. Ullman, The Design and Analysis of Computer Algorithms. Addison-Wesley, 1974.
[7]. A. V. Aho, J. E. Hopcroft and J. D. Ullman, Data Structures and Algorithms. Addison-Wesley, 1983.
[8]. A. Aggarwal and J. Scott Vitter. The input/output complexity of sorting and related problems. Communications of the ACM, 31(9):1116-1127, 1988.
[9]. S. Baase and A. V. Gelder. Computer Algorithms: Introduction to Design and Analysis. Addison-Wesley, Third edition, 2000.
[10]. Lafore, Robert. Arrays, Big O Notation and Simple Sorting. Data Structures and Algorithms in Java (2nd Edition). Sams, ISBN 978-0-672-32453-6.
[11]. A. Tridgell, Efficient Algorithms for Sorting and
Synchronization, Ph.D. Thesis, Dept. of Computer Science,
The Australian National University, 1999.
[12]. K. D. Cooper and L. Xu. An efficient static analysis
algorithm to detect redundant memory operations. In
Workshop on Memory Systems Performance (MSP ’02),
Berlin, Germany, June 2002.
[13]. M. Akra and L. Bazzi. On the solution of linear
recurrence equations. Computational Optimization and
Applications, 10(2):195-210, 1998.
[14]. S. Jadoon, S. F. Solehria and M. Qayum, (2011)
“Optimized Selection Sort Algorithm is faster than
Insertion Sort Algorithm: a Comparative Study”
International Journal of Electrical & Computer Sciences,
IJECS-IJENS, Vol: 11 No: 02.
[15]. Y. Yang, P. Yu, Y. Gan, (2011) “Experimental Study
on the Five Sort Algorithms”, International Conference on
Mechanic Automation and Control Engineering (MACE).
[16]. V. Estivill-Castro and D. Wood. A survey of adaptive
sorting algorithms. ACM Computing Surveys, 24:441–476,
1992.
[17]. W. Min (2010) “Analysis on 2-Element Insertion Sort
Algorithm”, International Conference on Computer Design
and Applications (ICCDA).
[18]. E. Kapur, P. Kumar and S. Gupta-“Proposal Of A Two
Way Sorting Algorithm And Performance Comparison With
Existing Algorithms”- International Journal of Computer
Science, Engineering and Applications (IJCSEA) Vol.2,
No.3, June 2012
[19]. M. Peczarski. New results in minimum-comparison
sorting. Algorithmica,40:133–145, 2004
[20]. J.-L. Lambert. Sorting the sums (xi + yj) in O(n^2)
comparisons. Theoretical Computer Science, 103:137-141,
1992.
[21]. S. Z. Iqbal, H. Gull, A. W. Muzaffar, (2009) “A New
Friends Sort Algorithm”,IEEE Second International
Conference on Computer Science and Information
Technology.
[22]. J.L. Bentley and R. Sedgewick, “Fast Algorithms for
Sorting and Searching Strings”, ACM-SIAM SODA, pp.
360-369, 1997.
[23]. Box R. and Lacey S., A Fast Sort, Computer Journal of
Byte Magazine, vol. 16, no. 4, pp. 315-315, 1991.
[24]. C. A. R. Hoare. Algorithm 63 (PARTITION) and
Algorithm 65 (FIND). Communications of the ACM, 4(7),
pp. 321-322, 1961.
[25]. R. W. Floyd and R.L. Rivest. Expected time bounds
for selection. Communications of the ACM, 18(3):pp. 165-
172, 1975.
[26]. M. D. McIlroy. A killer adversary for quicksort.
Software -Practice and Experience, 29(4):341-344, 1999.
[27]. J. L. Bentley, D. Haken and J. B. Saxe. A general
method for solving divide-and-conquer recurrences.
SIGACT News, 12(3):36-44, 1980.
[28]. L. Trevisan, Lecture Notes on Computational
Complexity, Computer Science Division, U.C. Berkeley,
2002.
[29]. J. Hartmanis and R. E. Stearns, On the Computational
Complexity of Algorithms, 1963.
[30]. A. LaMarca and R. Ladner. The influence of caches on
the performance of sorting. J. Algorithms, 31:66–104, 1999.
BIOGRAPHIES
Sanjib Palui completed his bachelor's
degree in Information Technology and his
master's degree in Software Engineering
in 2012 and 2014, respectively. He then
joined the Indian Statistical Institute as a
Project Linked Person in the Computer Vision & Pattern
Recognition (CVPR) unit. He works in the fields of data
structures & algorithms, image processing, and optical
character recognition.
Dr. Somsubhra Gupta is presently an
Assistant Professor in the Department of
Information Technology, JIS College of
Engineering (an autonomous
institution). He graduated from the
University of Calcutta and completed his Master's at the
Indian Institute of Technology, Kharagpur. He received his
Ph.D. from the University of Kalyani for the thesis entitled
“Multiobjective Decision Making in Inexact Environment
using Genetic Algorithm: Theory and Applications”. He
teaches algorithms and allied domains, and his research area
is machine intelligence. He has around 56 research papers,
including book chapters, in national/international journals
and proceedings, with over 40 citations. He is Principal
Investigator / Project Coordinator of several research
projects (e.g., under the AICTE RPS scheme). He was the
Convener of the International Conference on Computation
and Communication Advancement (IC3A-2013). He has
served on the technical programme committees of a number
of conferences, delivered invited talks, is a reviewer for
INS (Elsevier Science), and attended the NAFSA 2013
conference of international educators at St. Louis, Missouri,
USA.