The Bin Packing Problem is one of the most important optimization problems. Because of its NP-hard
nature, several approximation algorithms have been presented for it in recent years. It has been proven that,
unless P=NP, the best algorithm for the Bin Packing Problem has an approximation ratio of 3/2 and a time
order of O(n). In this paper, first, a 3/2-approximation algorithm is presented; then a modification to the FFD
algorithm is proposed to reduce its time order to linear. Finally, these suggested approximation algorithms
are compared with some other approximation algorithms. The experimental results show that the suggested
algorithms perform efficiently.
In summary, the main goal of the research is to present methods which not only enjoy the best theoretical
guarantees, but also perform considerably efficiently in practice.
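For context, the classical First Fit Decreasing (FFD) heuristic that the modification above starts from can be sketched as follows (a minimal illustrative version, not the paper's modified algorithm; names are ours):

```python
def first_fit_decreasing(items, capacity):
    """Greedy bin packing: sort items in decreasing size, then place
    each item into the first open bin with enough residual capacity,
    opening a new bin when none fits."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

# Example: total size 25 with capacity 10, so 3 bins is optimal
packed = first_fit_decreasing([7, 5, 4, 3, 2, 2, 2], capacity=10)
print(len(packed))  # 3
```

FFD is known to use at most 11/9·OPT + 6/9 bins, and the naive implementation above is quadratic in the number of items, which is why reducing the time order to linear is of interest.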
A novel work for bin packing problem by ant colony optimization (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
The best known deterministic polynomial-time algorithm for primality testing at present is due to
Agrawal, Kayal, and Saxena. This algorithm has a time complexity of O(log^{15/2}(n)). Although the algorithm is
polynomial, its reliance on the congruence of large polynomials results in enormous computational requirements.
In this paper, we propose a parallelization technique for this algorithm based on message-passing
parallelism together with four workload-distribution strategies. We perform a series of experiments on an
implementation of this algorithm on a high-performance computing system consisting of 15 nodes, each with
4 CPU cores. The experiments indicate that our proposed parallelization technique introduces a significant
speedup over existing implementations. Furthermore, the dynamic workload-distribution strategy performs
better than the others. Overall, the experiments show that the parallelization obtains up to a 36 times speedup.
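The AKS test itself is too intricate to reproduce here; for contrast, the naive deterministic baseline it improves upon, trial division, fits in a few lines (illustrative only):

```python
def is_prime_trial(n: int) -> bool:
    """Deterministic primality by trial division up to sqrt(n).

    Always correct, but it takes roughly sqrt(n) steps, i.e. it is
    exponential in the bit-length of n, whereas AKS runs in time
    polynomial in log n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print([p for p in range(2, 30) if is_prime_trial(p)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```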
Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. Big O describes an upper bound on growth and is most commonly applied to the worst-case scenario; it can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm.
For further information
https://github.com/ashim888/dataStructureAndAlgorithm
References:
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/asymptotic-notation
http://web.mit.edu/16.070/www/lecture/big_o.pdf
https://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/
https://justin.abrah.ms/computer-science/big-o-notation-explained.html
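The growth rates Big O describes are easy to observe empirically; a small sketch (our illustration, not taken from the references above) counts comparisons in an O(n) linear search versus an O(log n) binary search:

```python
def linear_search_steps(arr, target):
    """Count comparisons made by linear search: O(n) in the worst case."""
    steps = 0
    for x in arr:
        steps += 1
        if x == target:
            break
    return steps

def binary_search_steps(arr, target):
    """Count comparisons made by binary search on sorted input: O(log n)."""
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            break
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))
print(linear_search_steps(data, 999_999))  # ~n steps
print(binary_search_steps(data, 999_999))  # ~log2(n) steps
```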
Environmental awareness was given the utmost importance in ancient India.
The conservation of flora and fauna and the prevention of pollution are depicted briefly,
along with the punishments decided by the administration during the period of Chanakya.
Protect the Mother or perish; pollution prevention is the best solution.
Rapidly changing marketplaces, intense competition, the stress of constantly having to do more with less, and the aftermath of mergers and acquisitions test the resilience of organizations.
A resilient workforce has superior performance, higher productivity and creativity, better health, and more financial success.
Organizational drivers of resilience include: managing workload, offering access to training and development, giving employees more control over their work, developing effective managers, and fostering work-life integration.
Some ideas about how to write a business plan.
The second chapter: information & analytics
See the first chapter: introduction (http://www.slideshare.net/oscarz9/how-to-write-a-business-plan-01)
Created by Pengyuan Zhao
Like & Share, Follow me
Twitter: @zpy2789
Contact: zpy2789@hotmail.com
A location based movie recommender system (ijfcstjournal)
Available recommender systems mostly provide recommendations based on the users' preferences by
utilizing traditional methods such as collaborative filtering, which relies only on the similarities between
users and items. However, collaborative filtering might lead to poor recommendations because it does not
use other useful available data, such as users' locations, and hence the accuracy of the recommendations
could be very low and inefficient. This is especially evident in systems where locations strongly affect users'
preferences, such as movie recommender systems. In this paper a new location-based movie recommender
system based on collaborative filtering is introduced for enhancing the accuracy and quality of
recommendations. In this approach, users' locations are utilized and taken into consideration in the entire
processing of the recommendations and peer selection. The potential of the proposed approach to provide
novel and better-quality recommendations has been demonstrated through experiments on real datasets.
BIN PACKING PROBLEM: A LINEAR CONSTANT-SPACE 3/2-APPROXIMATION ALGORITHM (ijcsa)
Since the Bin Packing Problem (BPP) is one of the main NP-hard problems, a lot of approximation
algorithms have been suggested for it. It has been proven that the best algorithm for BPP has an
approximation ratio of 3/2 and a time order of O(n), unless P=NP. In the current paper, a linear
approximation algorithm is presented. The suggested algorithm not only has the best possible theoretical
factors (approximation ratio, space order, and time order), but also outperforms the other approximation
algorithms according to the experimental results; therefore, we are able to draw the conclusion that this
algorithm is the best approximation algorithm presented for the problem so far.
Parallel sorting algorithms order a set of elements using multiple processors in order to enhance the performance of sequential sorting algorithms. In general, the performance of sorting algorithms is evaluated in terms of algorithm growth rate with respect to the input size. In this paper, the running time, parallel speedup, and parallel efficiency of parallel bubble sort are evaluated and measured. The Message Passing Interface (MPI) is used for implementing the parallel version of bubble sort, and the IMAN1 supercomputer is used to conduct the experiments. The evaluation results show that parallel bubble sort has better running time as the number of processors increases. On the other hand, regarding parallel efficiency, the parallel bubble sort algorithm is more efficient when applied over a small number of processors.
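The usual way to parallelize bubble sort over MPI ranks is odd-even transposition sort, since every compare-exchange within a phase is independent. A single-process sketch of that phase structure (illustrative only; the MPI decomposition used on IMAN1 is not reproduced here):

```python
def odd_even_transposition_sort(a):
    """Sort in n phases; within each phase all compare-exchanges touch
    disjoint pairs, which is exactly what MPI ranks exploit in parallel."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2  # even phase: pairs (0,1),(2,3)...; odd phase: (1,2),(3,4)...
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 3]))  # [0, 1, 2, 3, 4, 5, 8]
```

n phases always suffice to sort n elements, so with n processors each phase costs O(1) compare-exchanges, giving the O(n) parallel time that a distributed bubble sort targets.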
THE NEW HYBRID COAW METHOD FOR SOLVING MULTI-OBJECTIVE PROBLEMS (ijfcstjournal)
In this article, the hybrid COAW algorithm, which combines the Cuckoo Optimization Algorithm with the
simple additive weighting method, is presented to solve multi-objective problems. The cuckoo algorithm is
an efficient and structured method for solving nonlinear continuous problems. The Pareto frontiers created
by the proposed COAW algorithm are exact and have good dispersion. The method finds Pareto frontiers at
high speed and identifies their beginning and end points properly. In order to validate the proposed
algorithm, several experimental problems were analyzed. The results indicate the effectiveness of the
COAW algorithm for solving multi-objective problems.
ADA Unit-1 Algorithmic Foundations Analysis, Design, and Efficiency.pdf (RGPV De Bunkers)
Title: Algorithmic Foundations: Analysis, Design, and Efficiency
Description:
This PDF document explores the fundamental concepts of algorithms in the subject "Analysis & Design of Algorithm." Delve into the intricate world of algorithmic problem-solving as we cover various topics, including algorithms, designing algorithms, analyzing algorithms, asymptotic notations, heap and heap sort, introduction to the divide and conquer technique, and analysis, design, and comparison of various algorithms based on this technique.
Discover the essence of algorithmic efficiency and learn to evaluate the performance of algorithms using asymptotic notations, such as Big O, Omega, and Theta. Understand the principles of designing algorithms using the divide and conquer approach, which involves breaking complex problems into manageable subproblems and combining their solutions to solve the original problem.
Explore prominent sorting algorithms like merge sort and quick sort, which showcase the power of divide and conquer in tackling real-world challenges. Witness the elegance of Strassen's matrix multiplication, a divide and conquer-based method that optimizes matrix multiplication for large datasets.
This comprehensive PDF is a valuable resource for computer science enthusiasts, students, and professionals seeking to enhance their algorithmic knowledge and design efficient solutions for computational problems. Immerse yourself in the world of algorithms, unravel their intricacies, and master the art of crafting algorithms with optimal performance.
A Comparison between FPPSO and B&B Algorithm for Solving Integer Programming ... (Editor IJCATR)
The Branch and Bound (B&B) technique is commonly used for intelligent search in finding a set of integer solutions within a space of interest. The corresponding binary tree structure provides natural parallelism, allowing concurrent evaluation of sub-problems using parallel computing technology. The Flower Pollination Algorithm is a recently developed method in the field of computational intelligence. In this paper, an improved version of the flower pollination meta-heuristic algorithm, FPPSO, is presented for solving integer programming problems. The proposed algorithm combines the standard flower pollination algorithm (FP) with the particle swarm optimization (PSO) algorithm to improve the searching accuracy. Numerical results show that FPPSO is able to obtain optimal results in comparison to traditional methods (branch and bound) and other harmony search algorithms. Moreover, the benefit of the proposed algorithm is its ability to obtain the optimal solution with less computation, which saves time in comparison with the branch and bound algorithm. Keywords: branch and bound; flower pollination algorithm; meta-heuristics; optimization; particle swarm optimization; integer programming.
A NEW ALGORITHM FOR SOLVING FULLY FUZZY BI-LEVEL QUADRATIC PROGRAMMING PROBLEMS (orajjournal)
This paper is concerned with a new method to find the fuzzy optimal solution of fully fuzzy bi-level non-linear (quadratic) programming (FFBLQP) problems, in which all the coefficients and decision variables of both the objective functions and the constraints are triangular fuzzy numbers (TFNs). The new method is based on decomposing the given problem into a bi-level problem with three crisp quadratic objective functions and bounded-variable constraints. In order to obtain a fuzzy optimal solution of FFBLQP problems, the concept of a tolerance membership function is used to develop a fuzzy max-min decision model that generates a satisfactory fuzzy solution, in which the upper-level decision maker (ULDM) specifies his/her objective functions and decisions with possible tolerances described by membership functions of fuzzy set theory. Then, the lower-level decision maker (LLDM) uses this preference information from the ULDM and solves his/her problem subject to the ULDM's restrictions. Finally, the decomposed method is illustrated by a numerical example.
ENHANCING ENGLISH WRITING SKILLS THROUGH INTERNET-PLUS TOOLS IN THE PERSPECTI... (ijfcstjournal)
This investigation delves into incorporating a hybridized memetic strategy within the framework of English
composition pedagogy, leveraging Internet Plus resources. The study aims to provide an in-depth analysis
of how this method influences students’ writing competence, their perceptions of writing, and their
enthusiasm for English acquisition. Employing an explanatory research design that combines qualitative
and quantitative methods, the study collects data through surveys, interviews, and observations of students’
writing performance before and after the intervention. Findings demonstrate a beneficial impact of
integrating the memetic approach alongside Internet Plus tools on the writing aptitude of English as a
Foreign Language (EFL) learners. Students reported increased engagement with writing, attributing it to
the use of Internet Plus tools. They also expressed that the memetic approach facilitated a deeper
understanding of cultural and social contexts in writing. Furthermore, the findings highlight a significant
improvement in students’ writing skills following the intervention. This study provides significant insights
into the practical implementation of the memetic approach within English writing education, highlighting
the beneficial contribution of Internet Plus tools in enriching students' learning journeys.
A SURVEY TO REAL-TIME MESSAGE-ROUTING NETWORK SYSTEM WITH KLA MODELLING (ijfcstjournal)
Message routing over a network is one of the most fundamental concepts in communication, which requires
simultaneous transmission of messages from a source to a destination. In terms of Real-Time Routing, it
refers to the addition of a timing constraint in which messages should be received within a specified time
delay. This study involves Scheduling, Algorithm Design and Graph Theory which are essential parts of
the Computer Science (CS) discipline. Our goal is to investigate an innovative and efficient way to present
these concepts in the context of CS Education. In this paper, we will explore the fundamental modelling of
routing real-time messages on networks. We study whether it is possible to have an optimal on-line
algorithm for the Arbitrary Directed Graph network topology. In addition, we will examine the message
routing’s algorithmic complexity by breaking down the complex mathematical proofs into concrete, visual
examples. Next, we explore the Unidirectional Ring topology in finding the transmission’s
“makespan”. Lastly, we propose the same network modelling through the technique of Kinesthetic Learning
Activity (KLA). We will analyse the data collected and present the results in a case study to evaluate the
effectiveness of the KLA approach compared to the traditional teaching method.
A COMPARATIVE ANALYSIS ON SOFTWARE ARCHITECTURE STYLES (ijfcstjournal)
Software architecture is the structural solution that achieves the overall technical and operational
requirements for software development. Software engineers apply software architectures in their
software system developments; however, they often struggle with the basic benchmarks for selecting
software architecture styles, possible components, integration methods (connectors), and the exact
application of each style.
The objective of this research work was a comparative analysis of software architecture styles by their
weaknesses and benefits, to aid selection by the programmer at design time. Finally, in this study,
the researcher has identified architectural styles, weaknesses, strengths, and application areas, along
with the component, connector, and interface of each selected architectural style.
SYSTEM ANALYSIS AND DESIGN FOR A BUSINESS DEVELOPMENT MANAGEMENT SYSTEM BASED... (ijfcstjournal)
A design of a sales system for professional services requires a comprehensive understanding of the
dynamics of sales cycles and of how key knowledge for completing sales is managed. This research describes
a design model of a business development (sales) system for professional service firms based on the Saudi
Arabian commercial market, which takes into account the new advances in technology while preserving
unique or cultural practices that are an important part of the Saudi Arabian commercial market. The
design model has combined a number of key technologies, such as cloud computing and mobility, as an
integral part of the proposed system. An adaptive development process has also been used in implementing
the proposed design model.
AN ALGORITHM FOR SOLVING LINEAR OPTIMIZATION PROBLEMS SUBJECTED TO THE INTERS... (ijfcstjournal)
Frank t-norms are a parametric family of continuous Archimedean t-norms whose members are also strict
functions. Very often, this family of t-norms is also called the family of fundamental t-norms because of the
role it plays in several applications. In this paper, optimization of a linear objective function with fuzzy
relational inequality constraints is investigated. The feasible region is formed as the intersection of two
fuzzy inequality systems in which the Frank family of t-norms is considered as the fuzzy composition. First, the
resolution of the feasible solutions set is studied where the two fuzzy inequality systems are defined with
max-Frank composition. Second, some related basic and theoretical properties are derived. Then, a
necessary and sufficient condition and three other necessary conditions are presented to conceptualize the
feasibility of the problem. Subsequently, it is shown that a lower bound is always attainable for the optimal
objective value. Also, it is proved that the optimal solution of the problem always results from the
unique maximum solution and a minimal solution of the feasible region. Finally, an algorithm is presented
to solve the problem and an example is described to illustrate the algorithm. Additionally, a method is
proposed to generate random feasible max-Frank fuzzy relational inequalities. By this method, we can
easily generate a feasible test problem and employ our algorithm to it.
LBRP: A RESILIENT ENERGY HARVESTING NOISE AWARE ROUTING PROTOCOL FOR UNDER WA... (ijfcstjournal)
Underwater sensor networks are among the most challenging and fascinating research arenas, and they have
drawn plenty of researchers into this field of study. In several underwater sensor applications, the nodes
move with the water, and their energy is affected by this. Thus, the mobility of each sensor node through
the water environment, driven by the water flow, must be measured for sensor-based protocol formation.
Researchers have developed many routing protocols; however, those have lost their appeal with time. It is
the demand of the age to supply an energy-efficient, scalable, and robust routing protocol for underwater
actuator networks. In this work, the authors propose a routing protocol named Level-Based Routing
Protocol (LBRP), aiming to offer robust, scalable, and energy-efficient routing. LBRP also guarantees the
most effective use of total energy consumption and ensures packet transmission, which translates into
additional reliability compared to other routing protocols. In this work, the authors have used the level of
the forwarding node, its residual energy, and the distance from the forwarding node to the sending node as
criteria in the multicasting technique comparisons. Throughout this work, the authors obtained a
recognition result of about 86.35% on average in node multicasting performance. Simulations were carried
out in both noisy and quiet environments, which endorses the better performance of the proposed protocol.
STRUCTURAL DYNAMICS AND EVOLUTION OF CAPSULE ENDOSCOPY (PILL CAMERA) TECHNOLO... (ijfcstjournal)
This research paper examines and re-evaluates the technological innovation, theory, structural dynamics,
and evolution of Pill Camera (Capsule Endoscopy) technology in redirecting the manner of small bowel
(intestine) examination in humans. The Pill Camera (Endoscopy Capsule), made of a sealed biocompatible
material to withstand acid, enzymes, and other chemicals in the stomach, is a technology that helps medical
practitioners, especially general physicians and gastroenterologists, to examine and re-examine the
intestine for possible bleeding or infection. Before the advent of the Pill Camera, colonoscopy was the
usual method, but research showed that some parts of the bowel cannot be reached by the traditional
method alone, hence the need for the Pill Camera. Countless deaths from stomach diseases such as polyps,
inflammatory bowel disease (Crohn's disease), cancers, ulcers, anaemia, and tumours of the small intestine,
which would ordinarily have been detected by sophisticated technology like the Pill Camera, have become
the norm in developing nations. This paper therefore examines and re-evaluates the Pill Camera's
innovation, theory, structural dynamics, and evolution, and aims to create awareness among both medical
practitioners and the public.
AN OPTIMIZED HYBRID APPROACH FOR PATH FINDING (ijfcstjournal)
A path finding algorithm addresses the problem of finding the shortest path from source to destination
while avoiding obstacles. Various search algorithms exist, namely A*, Dijkstra's, and ant colony
optimization. Unlike most path finding algorithms, which require destination co-ordinates to compute a
path, the proposed algorithm comprises a new method which finds a path using backtracking without
requiring destination co-ordinates. Moreover, in existing path finding algorithms, the number of iterations
required to find a path is large. Hence, to overcome this, an algorithm is proposed which reduces the
number of iterations required to traverse the path. The proposed algorithm is a hybrid of backtracking and
a new technique (a modified 8-neighbor approach). The proposed algorithm can become an essential part
of location-based, network, and gaming applications: grid traversal, navigation, mobile robots, and
Artificial Intelligence.
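The backtracking idea, searching outward until the goal cell itself is recognized rather than being given destination co-ordinates, can be sketched over an 8-neighbour grid (our illustration under assumed conventions: '#' marks obstacles, 'G' marks the goal; this is not the authors' optimized hybrid, and plain backtracking does not guarantee a shortest path):

```python
def backtrack_path(grid, start):
    """Depth-first backtracking over an 8-neighbour grid.

    The goal is discovered by its cell marker 'G', so no destination
    co-ordinates are supplied. '#' cells are obstacles. Returns the
    list of visited cells on the found path, or None."""
    rows, cols = len(grid), len(grid[0])
    path, seen = [], set()

    def dfs(r, c):
        if not (0 <= r < rows and 0 <= c < cols):
            return False
        if grid[r][c] == '#' or (r, c) in seen:
            return False
        seen.add((r, c))
        path.append((r, c))
        if grid[r][c] == 'G':
            return True
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0) and dfs(r + dr, c + dc):
                    return True
        path.pop()  # backtrack: this cell leads nowhere
        return False

    return path if dfs(*start) else None

grid = ["..#",
        ".#.",
        "..G"]
print(backtrack_path(grid, (0, 0)))
```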
EAGRO CROP MARKETING FOR FARMING COMMUNITY (ijfcstjournal)
The major occupation in India is agriculture, and the people involved in agriculture largely belong to the
poorer classes. The people of the farming community are unaware of the new techniques and agro-machines
which would lift the field of agriculture to greater heights. Though the farmers work hard, they are cheated
by agents in today's market. This serves as an opportunity to solve the problems that farmers face in the
current world. The eAgro crop marketing site will serve as a better way for farmers to sell their products
within the country, requiring only modest knowledge of using a website. It will provide farmers with
information about the current market rates of agro-products, their sale history, and the profit earned on a
sale. The site will also help farmers learn about market information and view the Government's agricultural
schemes for farmers.
EDGE-TENACITY IN CYCLES AND COMPLETE GRAPHS (ijfcstjournal)
It is well known that the tenacity is a proper measure for studying vulnerability and reliability in graphs.
Here, a modified edge-tenacity of a graph is introduced based on the classical definition of tenacity.
Properties and bounds for this measure are introduced; meanwhile edge-tenacity is calculated for cycle
graphs and also for complete graphs.
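For reference, the classical tenacity that the modified edge variant builds on is usually written as follows (our paraphrase of the standard definition, where $S$ ranges over vertex cut sets, $\tau(G-S)$ is the order of a largest component of $G-S$, and $\omega(G-S)$ is the number of components; the paper's modified measure may differ in detail):

```latex
T(G) \;=\; \min_{\substack{S \subseteq V(G) \\ \omega(G-S) > 1}}
      \frac{|S| + \tau(G-S)}{\omega(G-S)}
```

The edge analogue replaces the vertex cut $S$ with an edge set $F \subseteq E(G)$ and measures $\tau$ and $\omega$ on $G-F$.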
COMPARATIVE STUDY OF DIFFERENT ALGORITHMS TO SOLVE N QUEENS PROBLEM (ijfcstjournal)
This paper provides a brief description of the Genetic Algorithm (GA), the Simulated Annealing (SA)
algorithm, the Backtracking (BT) algorithm, and the Brute Force (BF) search algorithm, and explains how
the proposed GA, the proposed SA algorithm using GA, the BT algorithm, and the BF search algorithm can
be employed in finding the best solution to the N Queens Problem; it also makes a comparison between
these four algorithms. It is entirely a review-based work. The four algorithms were written and
implemented. From the results, it was found that the proposed Genetic Algorithm performed better than the
proposed Simulated Annealing algorithm using GA, the Backtracking algorithm, and the Brute Force
search algorithm, and it also provided a better fitness value (solution) than the other three, for different
values of N. It was also noticed that the proposed GA took more time to provide results than the proposed
SA using GA.
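Of the four methods compared, the backtracking baseline is compact enough to sketch (a generic textbook version, not the paper's implementation):

```python
def n_queens_solutions(n):
    """Count solutions to the N Queens problem by backtracking,
    placing one queen per row and pruning attacked columns and
    diagonals before recursing."""
    count = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal count
        if row == n:
            count += 1
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue  # square is attacked; prune this branch
            cols.add(col)
            diag1.add(row - col)
            diag2.add(row + col)
            place(row + 1)
            cols.discard(col)
            diag1.discard(row - col)
            diag2.discard(row + col)

    place(0)
    return count

print(n_queens_solutions(8))  # the classic 8x8 board has 92 solutions
```

Brute force, by contrast, enumerates all placements and only then checks validity, which is why its running time blows up far sooner as N grows.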
PSTECEQL: A NOVEL EVENT QUERY LANGUAGE FOR VANET’S UNCERTAIN EVENT STREAMS (ijfcstjournal)
In recent years, complex event processing technology has been used to process the VANET’s temporal
and spatial event streams. However, we usually cannot get accurate data because of the sensing-accuracy
limitations of the system’s devices; we can only obtain uncertain data from the complex and constrained
environment of the VANET. Because the VANET’s event streams consist of uncertain data, they are also
uncertain. How to effectively express and process these uncertain event streams has become the core issue
for the VANET system. To solve this problem, we propose a novel complex event query language,
PSTeCEQL (probabilistic spatio-temporal constraint event query language). Firstly, we give the definition
of the possible-world model of the VANET’s uncertain event streams. Secondly, we propose the event query
language PSTeCEQL and give the syntax and the operational semantics of the language. Finally, we
illustrate the validity of PSTeCEQL with an example.
CLUSTBIGFIM-FREQUENT ITEMSET MINING OF BIG DATA USING PRE-PROCESSING BASED ON... (ijfcstjournal)
Nowadays an enormous amount of data is being generated through the Internet of Things (IoT) as
technologies advance and people use them in day-to-day activities; this data is termed Big Data, with its
own characteristics and challenges. Frequent itemset mining algorithms aim to disclose frequent itemsets
from a transactional database, but as the dataset size increases, it cannot be handled by traditional frequent
itemset mining. The MapReduce programming model solves the problem of large datasets, but it has a
large communication cost which reduces execution efficiency. This work proposes a new pre-processed
k-means technique applied to the BigFIM algorithm. ClustBigFIM uses a hybrid approach: clustering with
the k-means algorithm to generate clusters from huge datasets, and Apriori and Eclat to mine frequent
itemsets from the generated clusters using the MapReduce programming model. Results show that the
execution efficiency of the ClustBigFIM algorithm is increased by applying the k-means clustering
algorithm before the BigFIM algorithm as a pre-processing technique.
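Apriori, one of the two miners ClustBigFIM runs on the generated clusters, works level-wise: an itemset can only be frequent if all of its subsets are. A toy single-machine sketch (ours; far from the MapReduce implementation):

```python
def frequent_itemsets(transactions, min_support):
    """Level-wise (Apriori-style) mining: grow candidate itemsets by
    one item per round, keeping only those with enough support."""
    frequent = {}
    size = 1
    current = [frozenset([i]) for i in {i for t in transactions for i in t}]
    while current:
        # support = number of transactions containing the candidate
        counts = {c: sum(c <= t for t in transactions) for c in current}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # next-size candidates are unions of surviving itemsets
        keys = list(survivors)
        current = list({a | b for a in keys for b in keys if len(a | b) == size + 1})
        size += 1
    return frequent

tx = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
found = frequent_itemsets(tx, min_support=3)
# all singletons and pairs survive; {'a','b','c'} has support 2 and is pruned
print(sorted(tuple(sorted(s)) for s in found))
```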
A MUTATION TESTING ANALYSIS AND REGRESSION TESTING (ijfcstjournal)
Software testing is conducted to provide the client with information about the quality of the product under
test. Software testing can also provide an objective, independent view of the software, allowing the business
to appreciate and understand the risks of software implementation. In this paper we focus on two main
kinds of software testing: mutation testing and regression testing. Mutation testing is a structural testing
method, i.e. we use the structure of the code to guide the test process. A mutation is a small change in a
program. Such changes are applied to model low-level defects that arise in the process of coding systems;
ideally, mutations should model low-level defect creation. Mutation testing is a process in which code is
modified and the mutated code is then tested against test suites. The mutations applied to source code are
designed to imitate common programming errors. A good unit test typically detects the program mutations
and fails accordingly. Mutation testing is used on many different platforms, including Java, C++, C#, and
Ruby. Regression testing is a type of software testing that seeks to uncover new software bugs, or
regressions, in existing functional and non-functional areas of a system after changes, such as
enhancements, patches, or configuration changes, have been made to them. When defects are found during
testing, they get fixed and that part of the software works as needed. There may, however, be cases where
the fixes have introduced or uncovered a different defect in the software. Regression testing is the way to
detect these unexpected bugs and fix them. The main focus of regression testing is to verify that changes to
the software or program have not introduced any adverse side effects and that the software still meets its
requirements. Regression tests are run whenever changes are made to the software, for example because of
modified functions.
GREEN WSN- OPTIMIZATION OF ENERGY USE THROUGH REDUCTION IN COMMUNICATION WORK...ijfcstjournal
Advances in micro fabrication and communication techniques have led to unimaginable proliferation of
WSN applications. Research is focussed on reduction of setup operational energy costs. Bulk of operational
energy costs are linked to communication activities of WSN. Any progress towards energy efficiency has a
potential of huge savings globally. Therefore, every energy efficient step is an endeavour to cut costs and
‘Go Green’. In this paper, we have proposed a framework to reduce communication workload through: Innetwork compression and multiple query synthesis at the base-station and modification of query syntax
through introduction of Static Variables. These approaches are general approaches which can be used in
any WSN irrespective of application.
A NEW MODEL FOR SOFTWARE COSTESTIMATION USING HARMONY SEARCHijfcstjournal
Accurate and realistic estimation is always considered to be a great challenge in software industry.
Software Cost Estimation (SCE) is the standard application used to manage software projects. Determining
the amount of estimation in the initial stages of the project depends on planning other activities of the
project. In fact, the estimation is confronted with a number of uncertainties and barriers’, yet assessing the
previous projects is essential to solve this problem. Several models have been developed for the analysis of
software projects. But the classical reference method is the COCOMO model, there are other methods
which are also applied such as Function Point (FP), Line of Code(LOC); meanwhile, the expert`s opinions
matter in this regard. In recent years, the growth and the combination of meta-heuristic algorithms with
high accuracy have brought about a great achievement in software engineering. Meta-heuristic algorithms
which can analyze data from multiple dimensions and identify the optimum solution between them are
analytical tools for the analysis of data. In this paper, we have used the Harmony Search (HS)algorithm for
SCE. The proposed model which is a collection of 60 standard projects from Dataset NASA60 has been
assessed.The experimental results show that HS algorithm is a good way for determining the weight
similarity measures factors of software effort, and reducing the error of MRE.
AGENT ENABLED MINING OF DISTRIBUTED PROTEIN DATA BANKSijfcstjournal
Mining biological data is an emergent area at the intersection between bioinformatics and data mining
(DM). The intelligent agent based model is a popular approach in constructing Distributed Data Mining
(DDM) systems to address scalable mining over large scale distributed data. The nature of associations
between different amino acids in proteins has also been a subject of great anxiety. There is a strong need to
develop new models and exploit and analyze the available distributed biological data sources. In this study,
we have designed and implemented a multi-agent system (MAS) called Agent enriched Quantitative
Association Rules Mining for Amino Acids in distributed Protein Data Banks (AeQARM-AAPDB). Such
globally strong association rules enhance understanding of protein composition and are desirable for
synthesis of artificial proteins. A real protein data bank is used to validate the system.
International Journal on Foundations of Computer Science & Technology (IJFCST)ijfcstjournal
International Journal on Foundations of Computer Science & Technology (IJFCST) is a Bi-monthly peer-reviewed and refereed open access journal that publishes articles which contribute new results in all areas of the Foundations of Computer Science & Technology. Over the last decade, there has been an explosion in the field of computer science to solve various problems from mathematics to engineering. This journal aims to provide a platform for exchanging ideas in new emerging trends that needs more focus and exposure and will attempt to publish proposals that strengthen our goals. Topics of interest include, but are not limited to the following:
Because the technology is used largely in the last decades; cybercrimes have become a significant
international issue as a result of the huge damage that it causes to the business and even to the ordinary
users of technology. The main aims of this paper is to shed light on digital crimes and gives overview about
what a person who is related to computer science has to know about this new type of crimes. The paper has
three sections: Introduction to Digital Crime which gives fundamental information about digital crimes,
Digital Crime Investigation which presents different investigation models and the third section is about
Cybercrime Law.
DISTRIBUTION OF MAXIMAL CLIQUE SIZE UNDER THE WATTS-STROGATZ MODEL OF EVOLUTI...ijfcstjournal
In this paper, we analyze the evolution of a small-world network and its subsequent transformation to a
random network using the idea of link rewiring under the well-known Watts-Strogatz model for complex
networks. Every link u-v in the regular network is considered for rewiring with a certain probability and if
chosen for rewiring, the link u-v is removed from the network and the node u is connected to a randomly
chosen node w (other than nodes u and v). Our objective in this paper is to analyze the distribution of the
maximal clique size per node by varying the probability of link rewiring and the degree per node (number
of links incident on a node) in the initial regular network. For a given probability of rewiring and initial
number of links per node, we observe the distribution of the maximal clique per node to follow a Poisson
distribution. We also observe the maximal clique size per node in the small-world network to be very close
to that of the average value and close to that of the maximal clique size in a regular network. There is no
appreciable decrease in the maximal clique size per node when the network transforms from a regular
network to a small-world network. On the other hand, when the network transforms from a small-world
network to a random network, the average maximal clique size value decreases significantly
International Journal in Foundations of Computer Science & Technology (IJFCST), Vol.5, No.4, July 2015
DOI:10.5121/ijfcst.2015.5401
BIN PACKING PROBLEM: TWO APPROXIMATION ALGORITHMS

Abdolahad Noori Zehmakan
Department of Computer Science, Sharif University of Technology, Tehran, Iran
ABSTRACT
The Bin Packing Problem is one of the most important optimization problems. In recent years, due to its NP-hard nature, several approximation algorithms have been presented. It is proved that the best algorithm for the Bin Packing Problem has the approximation ratio of 3/2 and the time order of O(n), unless P = NP. In this paper, first, a 3/2-approximation algorithm is presented; then a modification to the FFD algorithm is proposed to decrease its time order to linear. Finally, these suggested approximation algorithms are compared with some other approximation algorithms. The experimental results show that the suggested algorithms perform efficiently.

In summary, the main goal of the research is presenting methods which not only enjoy the best theoretical criteria, but also perform considerably efficiently in practice.
KEYWORDS
Bin Packing Problem, approximation algorithm, approximation ratio, optimization problems, FFD (First-Fit Decreasing)
1. INTRODUCTION
The Bin Packing Problem has several applications, including filling containers, loading trucks with weight capacity constraints, creating file backups on removable media, and technology mapping in field-programmable gate array (FPGA) semiconductor chip design. Unfortunately, this problem is NP-hard; therefore, many approximation algorithms [1,2,3,4,5] have been suggested.
In computer science and operations research, approximation algorithms are used to find approximate solutions to optimization problems. Approximation algorithms are often associated with NP-hard problems. They are also increasingly used for problems where exact polynomial-time algorithms are known but too expensive due to the input size. The quality and ability of an approximation algorithm depend on its approximation ratio and time order. For some approximation algorithms, it is possible to prove certain properties about the approximation of the optimal result. A ρ-approximation algorithm A is defined to be an algorithm for which it has been proven that the value of the approximate solution A(x) on an instance x will not be more (or less, depending on the situation) than a factor ρ times the value, OPT, of an optimum solution.

In the classical one-dimensional Bin Packing Problem, a list of items L = {a1, . . ., an}, each with a size s(ai) ∈ (0, 1], is given, and we are asked to pack them into the minimum number of unit-capacity bins.
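To make the statement concrete: an instance is just a list of sizes in (0, 1], and a feasible solution is any partition of it into unit-capacity bins, judged by how few bins it uses. A minimal feasibility checker (the function name and the float tolerance are ours, for illustration) could look like this in Python:

```python
def is_valid_packing(items, bins):
    """A packing is feasible if it uses every item exactly once and
    no bin's total size exceeds the unit capacity; the objective of
    the Bin Packing Problem is to minimize len(bins)."""
    uses_all = sorted(x for b in bins for x in b) == sorted(items)
    fits = all(sum(b) <= 1.0 + 1e-9 for b in bins)  # tolerance for float sums
    return uses_all and fits
```

Every algorithm discussed below produces such a partition (or just its size), and its quality is measured against the optimal number of bins.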
Many variations of this problem have been proposed, such as 2D and 3D bin packing [6,7,8,9,10], packing with item fragmentation [11], fragile objects [12,13], extendable bins [14], packing by cost [3] and variable-size bin packing [15]. In this paper, the original, off-line version of the problem is considered, due to its applications and importance.
Simchi-Levi in [16] proved that the FF (First-Fit) and BF (Best-Fit) algorithms, two of the foremost approximation algorithms for the Bin Packing Problem, have an absolute worst-case ratio of 7/4. He also proved that the FFD and BFD algorithms have an absolute worst-case ratio of 3/2. Zhang and Cai in [17] provided a linear-time, constant-space off-line approximation algorithm with an absolute approximation ratio of 3/2. Their algorithm depends on two kinds of bins, active and extra, and follows a simple but exact procedure. In 2003, Rudolf and Florian in [18] presented an approximation algorithm for the BPP with a linear running time and an absolute approximation factor of 3/2. As mentioned, it is proven that the best algorithm for the Bin Packing Problem has the approximation ratio of 3/2 and the time order of O(n), unless P = NP [16].
In [20], Martel defined the asymptotic approximation ratio instead of the approximation ratio and proved that his proposed algorithm has a 4/3 asymptotic approximation ratio. Furthermore, in [20] the method of Martel was expanded and a 5/4 asymptotic approximation algorithm was suggested.
In this paper, two new approximation algorithms are presented. The first algorithm works based on a kind of sorting and, after classifying the items into 4 ranges, tries to choose the best matching between them. The second algorithm is a time-improved version of FFD. In this algorithm, we try to decrease FFD's time order while maintaining the instructive qualities of FFD and its performance.
Finally, the two suggested algorithms are compared with two approximation algorithms [17,18],
and FFD. Experimental results show the two suggested algorithms perform much better than the
others.
The remainder of this paper is organized as follows. In Section 2, the two suggested algorithms are presented. Furthermore, it is proved that the approximation factor of the first algorithm is 3/2. Then, in Section 3, the experimental results and computational analysis are discussed. Finally, in Section 4, conclusions are drawn and some methods for enhancing the previous algorithms are suggested.
2. THE PROPOSED ALGORITHMS
In this section, the two proposed algorithms, A1 and A2, are discussed. Algorithm A1 utilizes a ranging technique and classifies inputs into 4 ranges. It will be proved that this algorithm's approximation ratio is 3/2. Furthermore, a new linear version of the FFD algorithm is presented.
2.1. The Proposed Algorithm A1
The algorithm tries to create output bins which are at least 2/3 full. It is proved that in this condition the approximation ratio of the algorithm is 3/2.

As mentioned, in this algorithm the inputs are classified into 4 ranges, (0-1/3), (1/3-0.5), (0.5-2/3) and (2/3-1), called S, M1, M2 and L, respectively.

In the first step, L items are put in separate output bins; then M1 and M2 are sorted. We try to match any item in M1 with the biggest possible item in M2. Obviously, after this step, some items will remain in M1 and M2. We match the remaining M1 items with each other and add ⌈|M1|/2⌉ to Bin_Count (the number of used bins). In the next step, we try to match S items with the remaining M2 items. Finally, S items are matched with each other.
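Since parts of the description above are garbled in this version, the following Python sketch encodes one consistent reading of A1: the range boundaries, the pairing of each M1 item with the biggest fitting M2 item, pairing leftover M1 items two by two, and topping up the remaining M2 bins with S items. The tie-breaking details (e.g., taking S items smallest-first) are our assumptions, not the paper's.

```python
from bisect import bisect_right

def algorithm_a1(items):
    """Sketch of A1: classify items into S=(0,1/3], M1=(1/3,0.5],
    M2=(0.5,2/3], L=(2/3,1] and pack range by range."""
    S  = sorted(x for x in items if x <= 1/3)
    M1 = sorted(x for x in items if 1/3 < x <= 0.5)
    M2 = sorted(x for x in items if 0.5 < x <= 2/3)
    L  =       [x for x in items if x > 2/3]

    bins = [[x] for x in L]                      # each L item in its own bin

    # match each M1 item with the biggest M2 item it fits with
    rest_m1 = []
    for a in M1:
        i = bisect_right(M2, 1.0 - a) - 1        # biggest M2 with a + m2 <= 1
        if i >= 0:
            bins.append([a, M2.pop(i)])
        else:
            rest_m1.append(a)

    # leftover M1 items two by two (any two fit, since each is <= 0.5)
    while rest_m1:
        pair = [rest_m1.pop()]
        if rest_m1:
            pair.append(rest_m1.pop())
        bins.append(pair)

    # each leftover M2 item opens a bin, topped up with S items while they fit
    for b in M2:
        cur = [b]
        while S and sum(cur) + S[0] <= 1.0:
            cur.append(S.pop(0))
        bins.append(cur)

    # remaining S items fill bins greedily
    while S:
        cur = [S.pop(0)]
        while S and sum(cur) + S[0] <= 1.0:
            cur.append(S.pop(0))
        bins.append(cur)

    return bins
```

On a small instance such as [0.8, 0.4, 0.52, 0.65, 0.3, 0.25], this sketch packs the L item alone, pairs 0.4 with 0.52, tops up 0.65 with 0.25, and leaves 0.3 in its own bin.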
Definition 1: N is the number of bins in the OPT solution, and N* is the number of bins in the proposed algorithm.

Lemma 1: If each output bin is at least 2/3 full, the approximation ratio is at most 3/2.

Proof: Consider the worst condition, in which all output bins are completely full in the OPT solution. Suppose that W is the sum of the input items. In this condition:

N ≥ W and N* ≤ (3/2)W ⇒ N*/N ≤ 3/2 ∎
Theorem 1: The proposed algorithm A1 is a 3/2-approximation algorithm.

Proof: Based on the algorithm, in the first step all L items are put in separate bins, and obviously these output bins are at least 2/3 full. After that, some M1 items are matched with some M2 items. Definitely, in this step the output bins are also at least 2/3 full, since an M1 item is at least 1/3 and an M2 item is at least 1.5/3; consequently, their sum is at least 2/3.

In the next step, the remaining M1 items are matched with each other two by two and put in separate bins. These bins are at least 2/3 full, since an M1 item is at least 1/3. After that, the rest of the M2 items are matched with the S items. Now there are two cases:
Case 1: W_S2 / |S2| > 2/3

Case 2: W_S2 / |S2| ≤ 2/3

W_S2: the sum of all S items which remain in this step.
W_S: the sum of all S items.
|S2|: the number of all S items which remain in this step.
We claim that all output bins are more than 2/3 full in this step. According to the algorithm, at first we match some S items with the remaining M2 items. Obviously, the output bins closed in this step are more than 2/3 full, because an S item is no more than 1/3 and we close a bin when it does not have enough space for an S item. After that, two configurations are possible:

C1: If there are just some S items left, we put all of them into separate bins; therefore, the number of output bins is |S2|. Consequently, the average fullness of the output bins equals W_S2 / |S2|, which is more than 2/3 based on the Case 1 assumption.

C2: If there are only some M2 items left, the output bins in this step are more than 1/2 full, because an M2 item is at least 1/2.
In Case 2, the bins that contain some S items are, as in Case 1, at least 2/3 full. Therefore, we only consider the bins which contain only one M2 item. We claim that in the OPT solution these M2 items are also assigned separate bins, because:

On one hand, they cannot be matched with the L items or with the other M2 items, because a bin does not have enough space for an M2 item and an L item, or for two M2 items. On the other hand, if an M2 item (the primary item) is matched with an M1 item in the OPT solution, then in the suggested algorithm either it is matched with an M1 item, or its complement (meaning the M1 item matched with it in the OPT solution) is matched with another M2 item (the second item). The second M2 item is bigger than the primary M2 item, since the items are sorted; therefore, the primary item can be put in every bin in which the second one has been put (in this condition, the algorithm has performed better than the OPT solution up to this point).

Based on the mentioned reasons and discussions, for any output bin in the proposed algorithm which is less than 2/3 full, there is a bin in the OPT solution whose used capacity is equal to or less than its own. Furthermore, all other bins are more than 2/3 full. In conclusion, based on Lemma 1, the approximation ratio of the suggested algorithm is 3/2. ∎
2.2. The Proposed Algorithm A2
As mentioned, the second proposed algorithm is based on the Firs-Fit Decreasing algorithm. In
FFD, the items are packed in order of non-decreasing size, and next item is always packed into
the first bin in which it fits; that is, we first open bin1 and we only start bin k+1 when the current
item does not fit into any of the bins 1, … , <.
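The FFD rule just described can be sketched directly. This straightforward implementation scans the open bins in order for each item, so it runs in O(n²) in the worst case; faster variants keep a search structure over bin capacities:

```python
def first_fit_decreasing(items, capacity=1.0):
    """First-Fit Decreasing: consider items in non-increasing order and
    put each one into the first open bin with enough free space."""
    free = []                             # remaining free space per open bin
    for size in sorted(items, reverse=True):
        for i, f in enumerate(free):
            if size <= f + 1e-9:          # small tolerance for float sums
                free[i] = f - size
                break
        else:
            free.append(capacity - size)  # no open bin fits: open a new one
    return len(free)                      # number of bins used
```

For example, on the items [0.5, 0.5, 0.4, 0.4, 0.3, 0.3, 0.3, 0.3] FFD uses 4 bins, although 3 suffice, illustrating its 3/2 worst-case ratio mentioned earlier.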
In the algorithm A2, we consider 10 classes of bins and 10 ranges of items, and in any step we check at most one bin in each class. The order of choosing items and checking the bin classes is chosen carefully. A pseudocode of the algorithm A2 is shown.
Obviously, the running time of the algorithm A2 is O(n) (where n is the number of input items), since for making a decision about each item the algorithm spends at most 10 time units checking the 10 classes of bins.

We can also make the algorithm more efficient by introducing a Scale Parameter r, which specifies the number of ranges and bin classes in the algorithm. This parameter can be chosen based on the number of inputs. For instance, if the number of inputs is very large, it is reasonable to choose a larger value of r instead of r = 10.
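The pseudocode referred to above does not survive in this version, so the following is only an illustrative sketch of the idea, not the paper's exact procedure: bins are filed into r classes by their remaining free space, one active bin is kept per class, and each item inspects at most r bins, giving O(r·n) time. The scan order and the collision rule (retiring one of two bins that land in the same class) are our assumptions.

```python
def a2_pack(items, r=10):
    """Illustrative sketch of A2: one active bin per residual-capacity
    class; each item checks at most r bins, so the run is O(r*n)."""
    active = [None] * r      # free space of the single open bin in each class
    closed = 0               # bins retired and never reopened

    def class_of(free):
        # class c holds bins with free space in [c/r, (c+1)/r)
        return min(int(free * r), r - 1)

    def file_bin(free):
        # store a bin's residual capacity, retiring one bin on a collision
        nonlocal closed
        c = class_of(free)
        if active[c] is None:
            active[c] = free
        else:
            closed += 1                      # keep the emptier bin
            active[c] = max(active[c], free)

    for s in items:
        # only classes with enough free space can possibly fit s
        for c in range(r - 1, class_of(s) - 1, -1):
            free = active[c]
            if free is not None and s <= free + 1e-9:
                active[c] = None
                file_bin(free - s)
                break
        else:
            file_bin(1.0 - s)                # no active bin fits: new bin

    return closed + sum(f is not None for f in active)
```

The Scale Parameter discussed above corresponds to r here: passing a larger r trades a slightly longer per-item scan for finer capacity classes.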
3. COMPUTATIONAL RESULTS
In this section, at first the computational results of the two suggested algorithms and three other algorithms are presented, and it is shown that the proposed algorithms perform considerably more efficiently. Furthermore, we compare the algorithm A1 with the algorithm A2 from an application point of view, considering their utilization in different fields and conditions.

The two proposed algorithms are compared with two other approximation algorithms [18, 19], which are the only algorithms that have the best possible approximation ratio. This comparison has been drawn based on all the standard instances for the BPP from OR-LIBRARY [21]. We define Ratio as the proportion of the proposed algorithm's solution to the OPT solution. Obviously, Ratio has a direct relationship with an algorithm's approximation ratio. Consequently, Ratio is utilized as a factor for measuring the approximation algorithms' performances.
As mentioned, the standard instances in OR-LIBRARY are used for the simulations. Each set of instances contains 20 instances of the Bin Packing Problem. The two proposed algorithms have been compared with Guochuan's algorithm [17] and Berghammer's algorithm [18] on the 8 sets of instances. The results of these comparisons for bp1, bp2, bp3, bp4, bp5, bp6, bp7 and bp8 are shown in Fig. 1, Fig. 2, Fig. 3, Fig. 4, Fig. 5, Fig. 6, Fig. 7 and Fig. 8, respectively.
Figure 1. The ratios of the algorithms for the set problems of instance bp1

Figure 2. The ratios of the algorithms for the set problems of instance bp2

Figure 3. The ratios of the algorithms for the set problems of instance bp3

Figure 4. The ratios of the algorithms for the set problems of instance bp4

Figure 5. The ratios of the algorithms for the set problems of instance bp5

Figure 6. The ratios of the algorithms for the set problems of instance bp6

Figure 7. The ratios of the algorithms for the set problems of instance bp7

Figure 8. The ratios of the algorithms for the set problems of instance bp8

The diagrams show that the two suggested algorithms perform much better than the two other algorithms. As mentioned, the two other algorithms are the only approximation algorithms with the best possible approximation factor. Furthermore, the algorithm A1's performance is more acceptable than the algorithm A2's. Another interesting point in the experimental results is the similarity between the performances of Guochuan's algorithm and Berghammer's algorithm.

The results are measured for 20 instances in each class; for ease of reading, the points corresponding to an algorithm are joined by a line.

In Fig. 9, the average of the simulation results is shown for the four mentioned algorithms over all sets of instances. This diagram shows that the proposed algorithm A1 performs most efficiently in all instances. After that, the suggested algorithm A2 has much better performance. Therefore, the two suggested algorithms are completely superior to the two other ones in practice.

Figure 9. The average of ratios for the 4 algorithms based on all the instances

In Fig. 10, the experimental results of the two suggested algorithms and the FFD algorithm are shown, based on all the sets of instances.

Figure 10. The average of ratios for the two suggested algorithms and FFD based on all the instances

The results show that the two suggested algorithms perform much better than the FFD algorithm in bp5, bp6, bp7, and bp8, but the FFD algorithm's performance is more acceptable in bp1, bp2, bp3 and bp4. It seems their performances are, on average, very similar. We claim that the two suggested algorithms are more effective and efficient than FFD. The algorithm A1 and FFD have similar time orders, but FFD is an on-line space algorithm (meaning that it saves all bins during the algorithm), while the algorithm A1 uses much less space. Furthermore, the algorithm A2 is also superior to FFD because it is a linear-time algorithm, while the running time of FFD is O(n log n) even in the worst case.

We drew the conclusion that the algorithms A1 and A2 not only enjoy the best possible theoretical criteria, but also execute better than the other ones in practice. A natural question which comes up
is that "Which algorithm should be used in practice, A1 or A2?". The answer is that it depends. In
the following paragraphs we try to clarify this point.
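For reference, First Fit Decreasing (FFD), the baseline algorithm in the comparison above, can be sketched as follows. This is a generic textbook sketch, not the implementation used in the experiments; the initial sort is what gives FFD its O(n log n) running time, and keeping every opened bin available for the first-fit scan is why it is an on-line space algorithm.

```python
def first_fit_decreasing(items, capacity):
    """Pack items into bins of the given capacity with First Fit Decreasing.

    Items are sorted in non-increasing order, then each item is placed
    into the first open bin that can still hold it; a new bin is opened
    only when no existing bin fits. The sort dominates, O(n log n).
    """
    bins = []  # each bin is the list of item sizes placed in it
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])  # no open bin fits: open a new one
    return bins

# Integer sizes avoid floating-point rounding issues in the capacity check.
packing = first_fit_decreasing([5, 7, 5, 2, 4, 2, 5, 1, 6], 10)
print(len(packing))  # -> 4 bins
```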
Firstly, if the important factor is accuracy, Algorithm A1 is the better one, since it shows
better performance based on the aforementioned outputs; but if the significant criterion is speed,
Algorithm A2 will be the choice, as it is a linear-time algorithm. Another point which can be
taken into consideration is that Algorithm A1 is a constant-space algorithm while Algorithm A2
is not. Therefore, if space order is a noteworthy factor, we should use Algorithm A1.
Needless to say, if the input items are almost sorted, Algorithm A1 performs a lot better; but if
the number of input items is significantly high, or the items are distributed homogeneously,
Algorithm A2 will be the option, since Algorithm A1 needs to sort the items while Algorithm A2
is much more flexible and is able to use the Scaling Factor. The aforementioned computational
results confirm this claim, because the number of items in the instances increases from bp1 to bp8.
If the number of S (small) items is considerable, Algorithm A1 performs more efficiently. On the
other hand, if the number of L (large) items is high, the second algorithm is the right choice.
Moreover, the case in which nearly all items fall into the ranges M1 and M2 (i.e., are medium)
forces the user to utilize Algorithm A2.
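The selection rules above can be collected into a small heuristic chooser. The paper's exact boundaries for the S, M1, M2, and L ranges are not restated in this section, so the cut-offs used below (thirds of the bin capacity) and the function name are illustrative assumptions only.

```python
def recommend_algorithm(items, capacity, sorted_input=False):
    """Heuristic chooser between A1 and A2, following the rules in the text.

    ASSUMPTION: the S/M/L cut-offs (thirds of the capacity) are
    illustrative; the paper's actual range boundaries may differ.
    """
    n = len(items)
    small = sum(1 for x in items if x <= capacity / 3)
    large = sum(1 for x in items if x > 2 * capacity / 3)
    medium = n - small - large

    if medium > n / 2:   # mostly medium (M1/M2) items -> A2
        return "A2"
    if large > n / 2:    # mostly large items -> A2
        return "A2"
    if small > n / 2:    # mostly small items -> A1
        return "A1"
    if sorted_input:     # almost-sorted input favours A1
        return "A1"
    return "A2"          # very large or homogeneous inputs favour A2
```

For instance, an input dominated by small sizes such as `[1, 2, 1, 2, 9]` with capacity 10 yields "A1", while an all-medium input such as `[5, 5, 5, 6]` yields "A2".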
For instance, in packing trucks and ships, when the goods are small we use the first algorithm,
but when they are large relative to the capacity unit of the ship or truck, the choice is the
second one. Furthermore, in assigning tasks to machines in the machine scheduling problem, if the
durations of the different tasks are approximately equal to each other, the second algorithm
executes better.
Consider the problem of placing computer files with specified sizes into memory blocks of fixed
size, or, for example, recording all of a computer's music, where the lengths of the pieces to be
recorded are the weights and the bin capacity is the amount of time that can be stored on an audio
disc (say 80 minutes). If we want to save the information for a long time, it is better to use the
first algorithm to amplify the accuracy, but if we want to rewrite the information several times,
using the second one is a rational solution. If all items are similar in size, for instance all of
them are songs, Algorithm A1 probably works acceptably.
Table 1 tries to summarize the aforementioned discussions regarding the application of the
algorithms A1 and A2 in different situations.
Table 1. Choosing between Algorithms 1 and 2 based on different factors and conditions.

Factor/Condition                    Algorithm 1    Algorithm 2
Accuracy                            Yes
Speed                                              Yes
Space                               Yes
Sorted Items                        Yes
High Number of Items                               Yes
Homogenous Distribution of Items                   Yes
Majority of S Items                 Yes
Majority of L Items                                Yes
Majority of M Items                                Yes
3. CONCLUSIONS
Two approximation algorithms, A1 and A2, were proposed in this paper. It was proved that the
approximation ratio of A1 is 3/2, the best achievable ratio unless P = NP. After that, we observed
the results of the experimental simulations and analyzed them. Based on these results, we can
claim that the two proposed algorithms are, both in theory and in practice, the best approximation
algorithms presented for the Bin Packing Problem so far.

In future research, further work on the Scaling Factor r can enhance the algorithm A2 even more.