Cloud computing has become increasingly commercial and familiar. Cloud data centers face huge challenges in maintaining QoS and keeping performance high. The placement of virtual machines among physical machines is significant in optimizing Cloud performance, and bin-packing-based algorithms are the most widely used approach to virtual machine placement (VMP). This paper presents a rigorous survey and comparison of bin-packing-based VMP methods for the Cloud computing environment. Various methods are discussed, and the VM placement factors in each method are analyzed to understand its advantages and drawbacks. The scope for future research and studies is also highlighted.
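The bin-packing view of VMP described above can be illustrated with a minimal first-fit-decreasing sketch. This is a generic heuristic, not any specific surveyed method; the single-dimension CPU demand model and all names are illustrative assumptions.

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """Place VMs (one-dimensional demand units, an illustrative simplification)
    onto as few identical hosts as possible using first-fit decreasing."""
    hosts = []        # residual capacity of each opened host
    placement = {}    # vm name -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if demand <= free:          # first host with enough room
                hosts[i] -= demand
                placement[vm] = i
                break
        else:                           # no open host fits: open a new one
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)
```

Sorting demands in decreasing order before packing is what distinguishes FFD from plain first-fit and typically yields tighter packings, i.e. fewer active physical machines.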
This document discusses scheduling algorithms for batches of MapReduce jobs in heterogeneous cloud environments with budget and deadline constraints. It proposes two optimization problems: 1) Given a fixed budget B, how to efficiently schedule tasks to minimize workflow completion time without exceeding the budget. 2) Given a fixed deadline D, how to efficiently schedule tasks to minimize monetary cost without missing the deadline. It presents an optimal dynamic programming algorithm for the first problem that runs in O(κB²) time, and two faster greedy algorithms. It also briefly discusses reducing the second problem to a knapsack problem. The goal is to help cloud service providers deploy MapReduce cost-effectively given user constraints.
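The flavor of the budget-constrained dynamic program can be sketched as follows. This is a simplified illustration, assuming independent tasks each with a list of (time, cost) machine options; it is not the paper's exact workflow formulation or its O(κB²) algorithm.

```python
def min_time_under_budget(tasks, budget):
    """tasks: list of option lists [(time, cost), ...], one list per task.
    Returns the minimum total time when every task picks one option and
    total cost stays within the (integer) budget."""
    INF = float("inf")
    dp = [0.0] * (budget + 1)          # no tasks scheduled yet: zero time
    for options in tasks:
        new = [INF] * (budget + 1)
        for b in range(budget + 1):    # dp over remaining budget
            for t, c in options:
                if c <= b and dp[b - c] + t < new[b]:
                    new[b] = dp[b - c] + t
        dp = new
    return dp[budget]
```

The table is indexed by budget, which is why the running time of such schemes grows with B, mirroring the pseudo-polynomial O(κB²) bound quoted above.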
The document proposes a new convolutional block called EffNet that aims to improve computational efficiency of convolutional neural networks while maintaining accuracy. EffNet separates the 3x3 convolution into two 1x3 and 3x1 convolutions, applies max pooling after the first convolution, and uses a less aggressive bottleneck than prior works to reduce data compression. Experiments on small image datasets show EffNet can replace convolutional layers in efficient networks without significant loss of accuracy compared to baseline and prior methods like MobileNet and ShuffleNet.
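The savings from separating a 3x3 convolution can be seen with a simple multiply-accumulate count. This is an illustrative simplification: it treats the block as a spatial separation of a dense convolution with pooling in between, whereas the real EffNet block also uses depthwise and pointwise convolutions.

```python
def macs_standard(h, w, c_in, c_out):
    """Dense 3x3 convolution with 'same' padding: 9 MACs per output element."""
    return h * w * 9 * c_in * c_out

def macs_separated(h, w, c_in, c_out):
    """1x3 convolution, then a pool halving the height, then a 3x1 convolution
    on the smaller feature map (simplified dense version of the EffNet idea)."""
    first = h * w * 3 * c_in * c_out            # 1x3 kernel at full resolution
    second = (h // 2) * w * 3 * c_out * c_out   # 3x1 kernel on the pooled map
    return first + second
```

Because the second spatial convolution runs on a half-size feature map, the separated form needs roughly half the MACs of the dense 3x3 in this setting.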
This document summarizes a research paper on DAME, an environment that supports SPMD (Single Program Multiple Data) programming on heterogeneous networks of workstations. DAME provides dynamic data redistribution to improve efficiency when resources vary over time. It uses a virtual mesh topology and maps data proportionally to node capabilities. DAME monitors resource usage and transparently migrates data from overloaded to underloaded nodes without requiring changes to the SPMD program code. Experimental results show DAME achieves satisfactory efficiency on irregular and changing platforms through dynamic workload redistribution.
Competitive Demand Response Trading in Electricity Markets: Aggregator and En...Nur Mohammad
Competitive Demand Response Trading Mechanism using Bi-level Optimization Framework.
Keywords: Demand-side management, electricity pricing, auction-based demand response mechanism design, bi-level optimization, game theory, power system network economics.
Goal: The research project aims to identify the day-ahead operational behavior of market participants on both the supply and demand sides of a transactive market mechanism. We want to provide a coordinated DR exchange (DRX) mechanism and integrate it into a market-clearing model to suppress locational marginal price (LMP) spikes and operating cost.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD...ijgca
This document discusses modeling cloud computing data centers as queuing systems to analyze performance factors. It begins with background on cloud computing and queuing theory. It then models a cloud data center as an [(M/G/1) : (∞/GD)] queuing system with single task arrivals and infinite task buffer capacity. Key performance factors analyzed include the mean number of tasks in the system. Analytical results are obtained by solving the model to estimate the response time distribution and other metrics. The modeling approach allows determining the relationship between performance and the number of servers and buffer size.
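For an M/G/1 queue, the mean number of tasks in the system follows in closed form from the Pollaczek–Khinchine formula; a small sketch (parameter names are illustrative):

```python
def mg1_mean_in_system(lam, mean_s, scv):
    """Mean number of tasks in an M/G/1 system (Pollaczek-Khinchine).
    lam: arrival rate; mean_s: mean service time; scv: squared coefficient
    of variation of service time (0 = deterministic, 1 = exponential)."""
    rho = lam * mean_s
    assert rho < 1, "system must be stable (utilization below 1)"
    lq = rho * rho * (1 + scv) / (2 * (1 - rho))  # mean number waiting in queue
    return rho + lq                               # plus the task in service
```

With scv = 1 this reduces to the familiar M/M/1 result ρ/(1−ρ), a useful sanity check on the more general M/G/1 analysis the paper performs.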
A new efficient fpga design of residue to-binary converterVLSICS Design
In this paper, we introduce a new 6n-bit dynamic-range moduli set {2^(2n), 2^(2n) + 1, 2^(2n) − 1} and then present its associated novel reverse converters. First, we simplify the Chinese Remainder Theorem in order to obtain an efficient reverse converter which is completely memoryless and adder-based. Next, we present a low-complexity implementation that does not require the explicit use of the modulo operation in the conversion process, and we demonstrate theoretically that it outperforms equivalent state-of-the-art reverse converters. We also implemented the proposed converter and the best equivalent state-of-the-art reverse converters on a Xilinx Spartan 6 FPGA. The experimental results confirmed the theoretical evaluation: the FPGA synthesis results indicate that, on average, our proposal is about 52.35% better in conversion time and 43.94% better in hardware resources.
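The residue-to-binary (reverse) conversion the paper optimizes can be checked in software with a textbook Chinese Remainder Theorem round trip. This generic CRT sketch is for reference only; the paper's contribution is precisely avoiding this general, modulo-heavy form in hardware.

```python
from math import prod

def to_binary(residues, moduli):
    """Reconstruct X from its residues via the Chinese Remainder Theorem.
    Moduli must be pairwise coprime, as {2**(2*n), 2**(2*n)+1, 2**(2*n)-1} are."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse (Py 3.8+)
    return x % M
```

For n = 2 the set is {16, 17, 15}, giving a 4080-value dynamic range; any X below that round-trips through its residues.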
Assessing no sql databases for telecom applicationsJoão Gabriel Lima
This document discusses migrating a telecom application from a relational database to a NoSQL database to take advantage of cloud computing. It presents the data model used for call management, including entities for customers, accounts, services, and plans. It then describes migrating the process of selecting a subscriber's services to Cassandra, a distributed NoSQL database, and compares performance to PostgreSQL in different workloads. The document aims to assess how NoSQL databases can adapt telecom business models to cloud environments.
THRESHOLD BASED VM PLACEMENT TECHNIQUE FOR LOAD BALANCED RESOURCE PROVISIONIN...IJCNCJournal
Load imbalance is a multi-variable, multi-constraint problem that degrades the performance and efficiency of computing resources. Load-balancing techniques address the two undesirable situations of overloading and underloading. Cloud computing relies on scheduling and load balancing in a virtualized, resource-sharing infrastructure, and both must be handled well to achieve optimal resource sharing. Hence, efficient resource reservation is required to guarantee load optimization in the cloud. This work presents an integrated resource-reservation and load-balancing algorithm for effective cloud provisioning. The strategy develops a Priority-based Resource Scheduling Model to obtain resource reservation, combined with threshold-based load balancing, to improve efficiency in the cloud framework. Utilization of virtual machines is then improved through suitable workload adjustment, dynamically picking a job from the submitted jobs using the Priority-based Resource Scheduling Model. Experimental evaluations show that the proposed scheme gives better results, reducing execution time with minimum resource cost and improved resource utilization under dynamic resource provisioning conditions.
Memory and I/O optimized rectilinear Steiner minimum tree routing for VLSI IJECEIAES
As device sizes scale down at a rapid pace, interconnect delay plays a major part in the performance of IC chips; minimizing delay and wire length is therefore the most desired objective. FLUTE (Fast Look-Up Table) provides fast and accurate RSMT (Rectilinear Steiner Minimum Tree) construction, with an optimization technique that reduces time complexity for both smaller- and larger-degree nets. However, for larger-degree nets this technique induces memory overhead, as it does not consider the memory requirement when constructing the RSMT. Since memory is scarce and expensive, it is desirable to utilize it more efficiently, which in turn reduces I/O time (i.e., the number of I/O disk accesses). The proposed work presents a Memory-Optimized RSMT (MORSMT) construction to address the memory overhead for larger-degree nets; a depth-first search and divide-and-conquer approach is adopted to build the memory-optimized tree. Experiments are conducted to evaluate the proposed approach against the existing model on varied benchmarks in terms of computation time, memory overhead, and wire length. The experimental results show that the proposed model is scalable and efficient.
Intelligent Workload Management in Virtualized Cloud EnvironmentIJTET Journal
Abstract— Cloud computing is a rising high-performance computing environment with a huge-scale, heterogeneous collection of autonomous systems and an elastic computational architecture. To improve its overall performance under a deadline constraint, a task scheduling model is established for reducing the power consumption of cloud computing and improving the profit of service providers. For this scheduling model, a solving technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, mutation operators, and the method of sorting Pareto solutions. The model is implemented on the open-source cloud computing simulation platform CloudSim; compared to existing scheduling algorithms, the results show that the proposed algorithm can obtain an enhanced solution, balancing the load across multiple objectives.
Scheduling Divisible Jobs to Optimize the Computation and Energy Costsinventionjournals
ABSTRACT: The important challenge in a cloud computing environment is to design a scheduling strategy to handle jobs and to process them in a heterogeneous environment with shared data centers. In this paper, we investigate a new analytical framework that enables an existing private cloud data center to schedule jobs while minimizing the overall computation and energy cost together. Our model is based on the Divisible Load Theory (DLT) model to derive closed-form solutions for the load fractions to be assigned to each machine, considering computation and energy cost. Our analysis also attempts to schedule jobs in such a way that the cloud provider gains maximum benefit for its service while meeting the Quality of Service (QoS) requirements of users' jobs. Finally, we quantify the performance of the strategies via rigorous simulation studies.
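The simplest closed-form DLT solution (ignoring the energy term of the paper's model) splits the load so that all machines finish at the same time, i.e. each machine's fraction is proportional to its speed. A sketch under that assumption:

```python
def load_fractions(unit_times):
    """Equal-finish-time split of a divisible load.
    unit_times[i] = time for machine i to process one unit of load;
    machine i's fraction is proportional to its speed 1/unit_times[i]."""
    speeds = [1.0 / w for w in unit_times]
    total = sum(speeds)
    return [s / total for s in speeds]
```

A machine twice as slow receives half the fraction, so every machine completes its share simultaneously and no capacity idles; the paper's framework extends this kind of balance condition with energy cost and provider profit.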
Improvement at Network Planning using Heuristic Algorithm to Minimize Cost of...Yayah Zakaria
Wireless Mesh Networks (WMN) consist of wireless stations that are connected with each other in a semi-static configuration. Depending on the configuration of a WMN, different paths between nodes offer different levels of efficiency. One area of research with regard to WMNs is cost minimization: a Modified Binary Particle Swarm Optimization (MBPSO) approach was used to optimize cost. However, minimized cost does not guarantee network performance. This paper thus modifies the minimization function to take into consideration the distance between the different nodes, so as to enable better performance while maintaining cost balance. The results were positive, with the PDR showing an approximate increase of 17.83% while the E2E delay saw an approximate decrease of 8.33%.
3 - A critical review on the usual DCT Implementations (presented in a Malays...Youness Lahdili
1) The document reviews and compares various implementations of the Discrete Cosine Transform (DCT), which is widely used in video and image compression standards.
2) It finds that the Loeffler algorithm achieves the theoretical minimum of 11 multiplications for an 8-point DCT, while the Arai algorithm requires only 5 multiplications and 29 additions.
3) The document concludes that while the DCT has been very successful, other transforms like the Discrete Wavelet Transform used in the Daala standard may provide alternatives worth further research to reduce blocking artifacts at high compression.
Whitepaper nebucom intelligent application broking and provisioning in a hybr...Nebucom
The document discusses intelligent application broking and provisioning in hybrid cloud environments. It compares the performance of virtual machines (VMs) and containers on various benchmarks. Containers show negligible overhead while VMs show significant overhead, especially for disk I/O. The document also describes a platform developed to intelligently broker and provision applications across cloud platforms and virtualization technologies like VMs and containers. The platform has a modular architecture with layers for brokering, provisioning and framework abstraction.
A bi objective minimum cost-time network flow problemAmirhadi Zakeri
The Minimum Cost-Time Network Flow (MCTNF) problem deals with shipping the available supply through the directed network to satisfy demand at minimal total cost and minimal total time.
EXTENDED FAST SEARCH CLUSTERING ALGORITHM: WIDELY DENSITY CLUSTERS,...csandit
CFSFDP (clustering by fast search and find of density peaks) is a recently developed density-based clustering algorithm. Compared to DBSCAN, it needs fewer parameters and is computationally cheap due to its non-iterative nature. Alex et al. have demonstrated its power in many applications. However, CFSFDP performs poorly when there is more than one density peak for a single cluster, a situation we call "no density peaks". In this paper, inspired by the idea of the hierarchical clustering algorithm CHAMELEON, we propose an extension of CFSFDP, E_CFSFDP, to suit more applications. In particular, we use the original CFSFDP to generate initial clusters first, then merge the sub-clusters in a second phase. We have applied the algorithm to several data sets that exhibit "no density peaks". Experimental results show that our approach outperforms the original one because it relaxes the strict assumption on data sets.
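The two quantities CFSFDP builds on, a point's local density ρ and its distance δ to the nearest point of higher density, can be sketched directly (cutoff kernel, brute-force distances; a minimal illustration, not the paper's E_CFSFDP extension):

```python
def density_peaks(points, dc):
    """Per point: rho = number of neighbors within cutoff dc, and
    delta = distance to the nearest point with strictly higher density."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    n = len(points)
    rho = [sum(1 for j in range(n) if j != i and dist(points[i], points[j]) < dc)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist(points[i], points[j]) for j in range(n) if rho[j] > rho[i]]
        if higher:
            delta.append(min(higher))
        else:  # point(s) of maximal density: conventionally assigned the max distance
            delta.append(max(dist(points[i], points[j]) for j in range(n) if j != i))
    return rho, delta
```

Cluster centers are the points with large ρ and large δ simultaneously; the "no density peaks" failure mode above arises when one true cluster contains several such points.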
This document evaluates the performance of a hybrid differential evolution-genetic algorithm (DE-GA) approach for load balancing in cloud computing. It first provides background on cloud computing and load balancing. It then describes the DE-GA approach, which uses differential evolution initially and switches to genetic algorithm if needed. The results show that the hybrid DE-GA approach improves performance over differential evolution and genetic algorithm alone, reducing makespan, average response time, and improving resource utilization. The study demonstrates the benefits of the hybrid evolutionary algorithm for an important problem in cloud computing.
Flexible channel allocation using best Secondary user detection algorithmijsrd.com
This document proposes a flexible channel allocation algorithm for cooperative cognitive radio networks using secondary user detection. It introduces Flexible Channel Cooperation (FLEC) which allows secondary users to optimize their use of resources including channels and time slots from primary users. The document develops efficient resource allocation algorithms for FLEC, including a distributed bargaining algorithm and centralized heuristic algorithm. It evaluates the performance of FLEC and shows it provides throughput improvements of 20-60% over conventional identical channel cooperation. A centralized heuristic algorithm achieves near-optimal performance with only 5% loss compared to the optimal centralized algorithm, providing a good tradeoff between performance and complexity.
A PROFIT MAXIMIZATION SCHEME WITH GUARANTEED QUALITY OF SERVICE IN CLOUD COMP...Nexgen Technology
logical effort based dual mode logic gates by mallikaMallika Naidu
Logic optimization and timing estimation are basic tasks for digital circuit designers. Dual Mode Logic (DML) allows operation in two modes, static and dynamic. DML gates can be switched between these modes, featuring very low power dissipation in the static mode and high speed of operation in the dynamic mode, the latter achieved at the expense of increased power dissipation. We introduce the logical effort (LE) methodology for this CMOS-based family. The proposed methodology allows path length, delay, and power optimization for a number of stages under load; logical effort is a transistor-sizing optimization methodology that reduces delay and power over any number of stages with any static logic gate. The proposed optimization is shown for dual mode logic gates with logical effort using the Digital Schematic Tool (DSCH).
This document proposes an algorithm to recommend the most suitable sparse storage format for implementing sparse matrix vector multiplication (SpMV) on a GPU. The algorithm uses k-means clustering on metrics of an input sparse matrix like row length to analyze the matrix pattern. Based on this analysis and predefined heuristics about the performance of different SpMV algorithms, the algorithm recommends a storage format like CSR, ELL, Hybrid, or Aligned COO that is well-suited for the matrix pattern. The performance of the recommended storage format is demonstrated on a conjugate gradient solver application to show the effectiveness of the algorithm at choosing a high performing approach.
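Among the candidate formats mentioned above, CSR is the common baseline; a minimal SpMV over the CSR triplet (values, column indices, row pointers) shows what each format must accelerate:

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix A stored in CSR format.
    Row r's nonzeros occupy values[row_ptr[r]:row_ptr[r+1]]."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y
```

On a GPU the variance of row lengths (the kind of metric the recommendation algorithm clusters on) determines whether this row-parallel CSR loop balances well or whether ELL, Hybrid, or a COO variant is the better fit.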
Robust Watermarking Using Hybrid Transform of DCT, Haar and Walsh and SVDIJERD Editor
The document summarizes a research paper that proposes a novel approach for digital image watermarking using a hybrid transform and singular value decomposition (SVD). The hybrid transform is generated by taking the Kronecker product of existing orthogonal transforms like discrete cosine transform (DCT), Haar, and Walsh. The watermark is embedded by modifying the singular values of the host image after applying the hybrid transform and SVD. Experimental results show the proposed approach improves robustness against compression by 59%, noise addition by 70%, and resizing by 32-56% compared to other hybrid wavelet transform techniques. The performance is evaluated under various attacks like compression, cropping, noise addition, resizing, and histogram equalization.
IRJET- A Statistical Approach Towards Energy Saving in Cloud ComputingIRJET Journal
This document proposes a statistical approach to save energy in cloud computing through predictive monitoring and optimization techniques. It discusses using Gaussian process regression to predict infrastructure workload and then applying convex optimization to determine the optimal subset of physical machines needed. Virtual machines would be migrated to this subset and idle physical machines could then be powered off to reduce energy consumption while maintaining system performance. An evaluation using 29 days of Google trace data showed the potential for significant power savings without affecting quality of service.
A Host Selection Algorithm for Dynamic Container Consolidation in Cloud Data ...IRJET Journal
This document proposes a novel host selection algorithm called Energy-Efficient Particle Swarm Optimization (EE-PSO) for dynamic container consolidation in cloud data centers. The goal of the algorithm is to reduce energy consumption while maintaining quality of service levels. It was tested using the ContainerCloudSim toolkit on real-world workloads and was found to outperform existing algorithms in terms of energy savings, quality of service guarantees, number of new virtual machines created, and number of container migrations.
TASK SCHEDULING USING AMALGAMATION OF MET HEURISTICS SWARM OPTIMIZATION ALGOR...Journal For Research
Cloud Computing is the latest networking technology and a popular archetype for hosting applications and delivering services over the network. The foremost technology of cloud computing is virtualization, which enables building applications, dynamically sharing resources, and providing diverse services to cloud users. With virtualization, a service provider can guarantee Quality of Service to the user while achieving higher server consolidation and energy efficiency. One of the most important challenges in the cloud computing environment is the VM placement and task scheduling problem. This paper focuses on Metaheuristic Swarm Optimisation Algorithms (MSOA) to deal with the problem of VM placement and task scheduling in a cloud environment. The MSOA is a simple parallel algorithm that can be applied in different ways to resolve task scheduling problems. The proposed algorithm is an amalgamation of the SO algorithm and the Cuckoo Search (CS) algorithm, called MSOACS, and is evaluated using the CloudSim simulator. The results prove the reduction in makespan and the increase in utilization ratio of the proposed MSOACS algorithm compared with SOA algorithms and Randomised Allocation (RA).
Power consumption in cloud centers is increasing rapidly due to the popularity of Cloud Computing. High power consumption not only leads to high operational cost, it also leads to high carbon emissions, which is not environment-friendly. Cloud centers with thousands of physical machines/servers are becoming commonplace. In many instances some physical machines host very few active virtual machines; migrating these virtual machines so that lightly loaded physical machines can be shut down, which in turn reduces consumed power, has been extensively studied in the literature. However, recent studies have demonstrated that migration of virtual machines is usually associated with excessive cost and delay. Hence, a new technique was recently proposed that balances load in cloud centers by migrating the extra tasks of overloaded virtual machines. This task migration technique has not been properly studied in the literature for its effectiveness w.r.t. server consolidation. In this work, the virtual machine task migration technique is extended to address the server consolidation issue. Empirical results reveal excellent effectiveness of the proposed technique in reducing the power consumed in cloud centers.
Survey on virtual machine placement techniques in cloud computing environmentijccsa
In a traditional data center, a number of services run on dedicated physical servers, and most of the time these data centers are not used to their full resource capacity. Virtualization allows the movement of a VM from one host to another, which is called virtual machine migration, so data centers can consolidate their services onto fewer physical servers than originally required. Virtual machine placement is part of VM migration: mapping virtual machines to physical machines is called VM placement. In other words, VM placement is the process of selecting the appropriate host for a given VM; for efficient utilization of physical resources, a VM should be placed on a suitable host. Many virtual machine placement algorithms for cloud computing environments have been proposed by different researchers, and most pursue some goal, either saving energy by shutting down some servers or maximizing resource utilization. Four steps are involved in the VM migration process: first, select a PM which is overloaded or underloaded; next, select one or more VMs; then select the PM where the selected VMs can be placed; and last, transfer the VMs. Selecting a suitable host is one of the challenging tasks in the migration process, because a wrong choice of host can increase the number of migrations, resource wastage, and energy consumption. This paper focuses only on the third step, selecting a suitable PM to host the VM, and shows an analysis of different existing virtual machine placement algorithms with their anomalies.
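The third step surveyed above, choosing a destination PM for a VM, can be sketched as a best-fit selection with an overload threshold. This is one generic policy under illustrative assumptions (one CPU dimension, a fixed upper utilization threshold), not any specific algorithm from the survey:

```python
def select_host(hosts, vm_cpu, upper_threshold=0.8):
    """Best-fit host selection: among hosts whose utilization after placing
    the VM stays at or below the threshold, pick the one left with the least
    spare CPU (tightest fit). hosts: name -> (used_cpu, capacity)."""
    best, best_spare = None, None
    for name, (used, capacity) in hosts.items():
        util_after = (used + vm_cpu) / capacity
        if util_after <= upper_threshold:
            spare = capacity - used - vm_cpu
            if best_spare is None or spare < best_spare:
                best, best_spare = name, spare
    return best  # None: no host can take the VM without breaching the threshold
```

The threshold guards against the overload anomaly described above, while the tightest-fit rule keeps remaining hosts as empty as possible so they stay candidates for shutdown.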
A Study on Replication and Failover Cluster to Maximize System UptimeYogeshIJTSRD
This document summarizes a study on using replication and failover clusters to maximize system uptime for cloud services. It discusses challenges in ensuring high availability of cloud services from a provider perspective. The study aims to present a high availability solution using load balancing, elasticity, replication, and disaster recovery configuration. It reviews related literature on digital media distribution platforms, content delivery networks, auto-scaling strategies, and database replication impact. It also covers methodologies like CloudFront, state machine replication, neural networks, Markov decision processes, and sliding window protocols. The scope is to build a scalable, fault-tolerant environment with disaster recovery and ensure continuous availability. The conclusion is that data replication and failover clusters are necessary to plan data
Memory and I/O optimized rectilinear Steiner minimum tree routing for VLSI IJECEIAES
As device sizes scale down at a rapid pace, interconnect delay plays a major part in the performance of IC chips; minimizing delay and wire length is therefore the most desired objective. FLUTE (Fast Look-Up Table) presented a fast and accurate RSMT (Rectilinear Steiner Minimum Tree) construction for both smaller- and higher-degree nets, with an optimization technique that reduces the time complexity of RSMT construction. However, for larger-degree nets this technique incurs memory overhead, as it does not consider the memory requirement of constructing the RSMT. Since memory availability is limited and expensive, it is desirable to use memory more efficiently, which in turn reduces I/O time (i.e., the number of I/O disk accesses). The proposed work presents a Memory Optimized RSMT (MORSMT) construction to address the memory overhead for larger-degree nets, adopting depth-first search and a divide-and-conquer approach to build a memory-optimized tree. Experiments are conducted to evaluate the proposed approach against the existing model on varied benchmarks in terms of computation time, memory overhead, and wire length. The experimental results show that the proposed model is scalable and efficient.
Intelligent Workload Management in Virtualized Cloud EnvironmentIJTET Journal
Abstract— Cloud computing is a rising high-performance computing environment with a huge-scale, heterogeneous collection of autonomous systems and an elastic computational design. To improve the overall performance of the cloud environment under deadline constraints, a task scheduling model is established for reducing system power consumption and execution time and for improving the profit of service providers. For this scheduling model, a solving technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, mutation operators, and the arrangement of Pareto solutions. The model is built on the open-source cloud computing simulation platform CloudSim; compared with existing scheduling algorithms, the results show that the proposed algorithm obtains an improved solution, balancing the load across multiple objectives.
Scheduling Divisible Jobs to Optimize the Computation and Energy Costsinventionjournals
ABSTRACT: An important challenge in the cloud computing environment is to design a scheduling strategy to handle jobs and process them in a heterogeneous environment with shared data centers. In this paper, we investigate a new analytical framework that enables an existing private cloud data center to schedule jobs while minimizing the overall computation and energy cost together. Our model is based on the Divisible Load Theory (DLT) model and derives closed-form solutions for the load fractions to be assigned to each machine, considering computation and energy costs. Our analysis also attempts to schedule jobs in such a way that the cloud provider gains maximum benefit for the service while meeting the Quality of Service (QoS) requirements of users' jobs. Finally, we quantify the performance of the strategies via rigorous simulation studies.
Improvement at Network Planning using Heuristic Algorithm to Minimize Cost of...Yayah Zakaria
Wireless Mesh Networks (WMN) consist of wireless stations connected with each other in a semi-static configuration. Depending on the configuration of a WMN, different paths between nodes offer different levels of efficiency. One area of research with regard to WMNs is cost minimization. A Modified Binary Particle Swarm Optimization (MBPSO) approach was used to optimize cost. However, minimized cost does not guarantee network performance. This paper thus modifies the minimization function to take into consideration the distance between the different nodes, so as to enable better performance while maintaining cost balance. The results were positive, with the PDR showing an approximate increase of 17.83% and the E2E delay an approximate decrease of 8.33%.
3 - A critical review on the usual DCT Implementations (presented in a Malays...Youness Lahdili
1) The document reviews and compares various implementations of the Discrete Cosine Transform (DCT), which is widely used in video and image compression standards.
2) It finds that the Loeffler algorithm achieves the theoretical minimum of 11 multiplications for an 8-point DCT, while the Arai algorithm requires only 5 multiplications and 29 additions.
3) The document concludes that while the DCT has been very successful, other transforms like the Discrete Wavelet Transform used in the Daala standard may provide alternatives worth further research to reduce blocking artifacts at high compression.
Whitepaper nebucom intelligent application broking and provisioning in a hybr...Nebucom
The document discusses intelligent application broking and provisioning in hybrid cloud environments. It compares the performance of virtual machines (VMs) and containers on various benchmarks. Containers show negligible overhead while VMs show significant overhead, especially for disk I/O. The document also describes a platform developed to intelligently broker and provision applications across cloud platforms and virtualization technologies like VMs and containers. The platform has a modular architecture with layers for brokering, provisioning and framework abstraction.
A bi objective minimum cost-time network flow problemAmirhadi Zakeri
The Minimum Cost-Time Network Flow (MCTNF) problem deals with shipping the available supply through the directed network to satisfy demand at minimal total cost and minimal total time.
EXTENDED FAST SEARCH CLUSTERING ALGORITHM: WIDELY DENSITY CLUSTERS,...csandit
CFSFDP (clustering by fast search and find of density peaks) is a recently developed density-based clustering algorithm. Compared to DBSCAN, it needs fewer parameters and is computationally cheap because it does not iterate. Alex et al. have demonstrated its power in many applications. However, CFSFDP performs poorly when there is more than one density peak for one cluster, a situation we name "no density peaks". In this paper, inspired by the idea of the hierarchical clustering algorithm CHAMELEON, we propose an extension of CFSFDP, E_CFSFDP, to suit more applications. In particular, we use the original CFSFDP to generate initial clusters first, then merge the sub-clusters in a second phase. We have applied the algorithm to several data sets, some of which have "no density peaks". Experimental results show that our approach outperforms the original one because it breaks through the strict requirements CFSFDP places on data sets.
This document evaluates the performance of a hybrid differential evolution-genetic algorithm (DE-GA) approach for load balancing in cloud computing. It first provides background on cloud computing and load balancing. It then describes the DE-GA approach, which uses differential evolution initially and switches to genetic algorithm if needed. The results show that the hybrid DE-GA approach improves performance over differential evolution and genetic algorithm alone, reducing makespan, average response time, and improving resource utilization. The study demonstrates the benefits of the hybrid evolutionary algorithm for an important problem in cloud computing.
Flexible channel allocation using best Secondary user detection algorithmijsrd.com
This document proposes a flexible channel allocation algorithm for cooperative cognitive radio networks using secondary user detection. It introduces Flexible Channel Cooperation (FLEC) which allows secondary users to optimize their use of resources including channels and time slots from primary users. The document develops efficient resource allocation algorithms for FLEC, including a distributed bargaining algorithm and centralized heuristic algorithm. It evaluates the performance of FLEC and shows it provides throughput improvements of 20-60% over conventional identical channel cooperation. A centralized heuristic algorithm achieves near-optimal performance with only 5% loss compared to the optimal centralized algorithm, providing a good tradeoff between performance and complexity.
A PROFIT MAXIMIZATION SCHEME WITH GUARANTEED QUALITY OF SERVICE IN CLOUD COMP...Nexgen Technology
logical effort based dual mode logic gates by mallikaMallika Naidu
Logic optimization and timing estimation are basic tasks for digital circuit designers. Dual Mode Logic (DML) allows operation in two modes, static and dynamic. DML gates can be switched between these modes: they feature very low power dissipation in the static mode and high-speed operation in the dynamic mode, the latter achieved at the expense of increased power dissipation. We introduce the logical effort (LE) methodology for this CMOS-based family. The proposed methodology allows path-length, delay, and power optimization for a number of stages under load. Logical effort is a transistor-sizing optimization methodology that reduces delay and power over a number of stages for any static logic gate. The proposed optimization is demonstrated for dual mode logic gates with logical effort using the Digital Schematic tool (DSCH).
This document proposes an algorithm to recommend the most suitable sparse storage format for implementing sparse matrix vector multiplication (SpMV) on a GPU. The algorithm uses k-means clustering on metrics of an input sparse matrix like row length to analyze the matrix pattern. Based on this analysis and predefined heuristics about the performance of different SpMV algorithms, the algorithm recommends a storage format like CSR, ELL, Hybrid, or Aligned COO that is well-suited for the matrix pattern. The performance of the recommended storage format is demonstrated on a conjugate gradient solver application to show the effectiveness of the algorithm at choosing a high performing approach.
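The recommendation idea above can be illustrated with a toy heuristic on row-length statistics. This is not the paper's k-means pipeline; the thresholds and format rules below are illustrative assumptions only:

```python
import statistics

def recommend_format(row_lengths):
    """Toy heuristic (not the paper's exact rules): pick a sparse
    storage format from row-length statistics of the matrix."""
    mean = statistics.mean(row_lengths)
    stdev = statistics.pstdev(row_lengths)
    if stdev < 0.1 * max(mean, 1):
        return "ELL"   # near-uniform rows pad cheaply into ELL
    if stdev > mean:
        return "CSR"   # highly irregular rows favor plain CSR
    return "HYB"       # mixed: ELL for the bulk, COO for outliers

print(recommend_format([8, 8, 8, 8]))       # uniform rows
print(recommend_format([1, 2, 200, 3, 1]))  # one very dense row
```

The real selector would cluster several matrix metrics rather than threshold a single statistic, but the shape of the decision, mapping a sparsity pattern to a format, is the same.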
Robust Watermarking Using Hybrid Transform of DCT, Haar and Walsh and SVDIJERD Editor
The document summarizes a research paper that proposes a novel approach for digital image watermarking using a hybrid transform and singular value decomposition (SVD). The hybrid transform is generated by taking the Kronecker product of existing orthogonal transforms like discrete cosine transform (DCT), Haar, and Walsh. The watermark is embedded by modifying the singular values of the host image after applying the hybrid transform and SVD. Experimental results show the proposed approach improves robustness against compression by 59%, noise addition by 70%, and resizing by 32-56% compared to other hybrid wavelet transform techniques. The performance is evaluated under various attacks like compression, cropping, noise addition, resizing, and histogram equalization.
IRJET- A Statistical Approach Towards Energy Saving in Cloud ComputingIRJET Journal
This document proposes a statistical approach to save energy in cloud computing through predictive monitoring and optimization techniques. It discusses using Gaussian process regression to predict infrastructure workload and then applying convex optimization to determine the optimal subset of physical machines needed. Virtual machines would be migrated to this subset and idle physical machines could then be powered off to reduce energy consumption while maintaining system performance. An evaluation using 29 days of Google trace data showed the potential for significant power savings without affecting quality of service.
A Host Selection Algorithm for Dynamic Container Consolidation in Cloud Data ...IRJET Journal
This document proposes a novel host selection algorithm called Energy-Efficient Particle Swarm Optimization (EE-PSO) for dynamic container consolidation in cloud data centers. The goal of the algorithm is to reduce energy consumption while maintaining quality of service levels. It was tested using the ContainerCloudSim toolkit on real-world workloads and was found to outperform existing algorithms in terms of energy savings, quality of service guarantees, number of new virtual machines created, and number of container migrations.
TASK SCHEDULING USING AMALGAMATION OF MET HEURISTICS SWARM OPTIMIZATION ALGOR...Journal For Research
Cloud computing is the latest networking technology and a popular archetype for hosting applications and delivering services over the network. The foremost technology of cloud computing is virtualization, which enables building applications, dynamically sharing resources, and providing diverse services to cloud users. With virtualization, a service provider can guarantee Quality of Service to the user while achieving higher server utilization and energy efficiency. One of the most important challenges in the cloud computing environment is the VM placement and task scheduling problem. This paper focuses on Metaheuristic Swarm Optimisation Algorithms (MSOA) for the problem of VM placement and task scheduling in the cloud environment. MSOA is a simple parallel algorithm that can be applied in different ways to solve task scheduling problems. The proposed algorithm is an amalgamation of the SO algorithm and the Cuckoo Search (CS) algorithm, called MSOACS. The proposed algorithm is evaluated using the CloudSim simulator. The results show that the proposed MSOACS algorithm reduces makespan and increases the utilization ratio compared with SOA algorithms and Randomised Allocation (RA).
Power consumption in cloud centers is increasing rapidly due to the popularity of Cloud Computing. High power consumption not only leads to high operational cost, it also leads to high carbon emissions, which are not environment friendly. Cloud centers with thousands of Physical Machines/Servers are becoming commonplace. In many instances some Physical Machines host very few active Virtual Machines; migrating those Virtual Machines so that lightly loaded Physical Machines can be shut down, which in turn reduces consumed power, has been extensively studied in the literature. However, recent studies have demonstrated that migration of Virtual Machines is usually associated with excessive cost and delay. Hence, a new technique was recently proposed that balances load in cloud centers by migrating the extra tasks of overloaded Virtual Machines. The effectiveness of this task migration technique with respect to Server Consolidation has not been properly studied in the literature. In this work, the Virtual Machine task migration technique is extended to address the Server Consolidation issue. Empirical results reveal the excellent effectiveness of the proposed technique in reducing the power consumed in Cloud Centers.
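The consolidation idea behind this abstract can be sketched as a toy pass that drains lightly loaded machines onto others with headroom. This is not the paper's algorithm; the thresholds, names, and single load dimension are hypothetical:

```python
def consolidate(pm_loads, low=0.3, high=0.8):
    """Toy consolidation pass (illustrative only): drain PMs whose
    load is below `low` by shifting their load onto PMs with
    headroom up to `high`; fully drained PMs can be shut down."""
    loads = dict(pm_loads)
    donors = [pm for pm in loads if loads[pm] < low]
    for src in donors:
        # Prefer the most-loaded targets so donors empty quickly.
        for dst in sorted(loads, key=loads.get, reverse=True):
            if dst == src or dst in donors:
                continue
            moved = min(loads[src], max(high - loads[dst], 0.0))
            loads[dst] += moved
            loads[src] -= moved
            if loads[src] == 0.0:
                break
    shut_down = [pm for pm in donors if loads[pm] == 0.0]
    return loads, shut_down

loads, off = consolidate({"pm1": 0.2, "pm2": 0.5, "pm3": 0.6})
print(off)  # pm1's load fits on pm3, so pm1 can be powered off
```

A real scheme would move discrete tasks rather than a continuous load fraction, and would weigh migration cost and delay, which is exactly the trade-off the abstract highlights.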
Survey on virtual machine placement techniques in cloud computing environmentijccsa
In a traditional data center, a number of services run on dedicated physical servers, and most of the time these data centers do not use their full resource capacity. Virtualization allows the movement of a VM from one host to another, called virtual machine migration, so data centers can consolidate their services onto fewer physical servers than originally required. Virtual machine placement is part of VM migration: mapping virtual machines to physical machines is called VM placement. In other words, VM placement is the process of selecting an appropriate host for a given VM. For efficient utilization of physical resources, a VM should be placed on a suitable host. Many virtual machine placement algorithms for the cloud computing environment have been proposed by different researchers. Most VM placement algorithms try to achieve some goal, either saving energy by shutting down some servers or maximizing resource utilization. Four steps are involved in the VM migration process: first, select a PM that is overloaded or underloaded; next, select one or more VMs; then select the PM where the selected VMs can be placed; and finally transfer the VMs. Selecting a suitable host is one of the most challenging tasks in the migration process, because a wrong selection of host can increase the number of migrations, resource wastage, and energy consumption. This paper focuses only on the third step, selecting a suitable PM to host the VM, and presents an analysis of different existing virtual machine placement algorithms with their anomalies.
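The third step described above, selecting a PM for a chosen VM, is commonly cast as bin packing, the theme of this review. A minimal first-fit-decreasing sketch (hypothetical names; a single CPU dimension; not any specific surveyed algorithm):

```python
def first_fit_decreasing(vms, hosts):
    """Place each VM (by decreasing CPU demand) on the first host
    with enough spare CPU capacity. Returns {vm_id: host_id}."""
    placement = {}
    # Track remaining capacity per host.
    free = {h_id: cap for h_id, cap in hosts.items()}
    for vm_id, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for h_id in free:
            if free[h_id] >= demand:
                free[h_id] -= demand
                placement[vm_id] = h_id
                break
        else:
            placement[vm_id] = None  # no host can accept this VM
    return placement

# Example: three VMs onto two hosts with 100 CPU units each.
hosts = {"pm1": 100, "pm2": 100}
vms = {"vm1": 70, "vm2": 50, "vm3": 30}
print(first_fit_decreasing(vms, hosts))
```

Production placement algorithms extend this skeleton with multiple resource dimensions (RAM, bandwidth) and objectives such as power, which is where the surveyed methods differ.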
A Study on Replication and Failover Cluster to Maximize System UptimeYogeshIJTSRD
This document summarizes a study on using replication and failover clusters to maximize system uptime for cloud services. It discusses challenges in ensuring high availability of cloud services from a provider perspective. The study aims to present a high availability solution using load balancing, elasticity, replication, and disaster recovery configuration. It reviews related literature on digital media distribution platforms, content delivery networks, auto-scaling strategies, and database replication impact. It also covers methodologies like CloudFront, state machine replication, neural networks, Markov decision processes, and sliding window protocols. The scope is to build a scalable, fault-tolerant environment with disaster recovery and ensure continuous availability. The conclusion is that data replication and failover clusters are necessary to plan data
Load Balancing in Cloud Computing Through Virtual Machine PlacementIRJET Journal
This document discusses load balancing in cloud computing through virtual machine placement. It proposes using a binary search tree approach to map virtual machines to host machines in a way that optimizes resource utilization, minimizes resource allocation time, and reduces violations of service level agreements. The approach is analyzed using the CloudSim simulator and compared to other placement strategies. The document provides background on resource allocation, types of virtual machine placement algorithms, and related work on power-aware and energy-efficient placement strategies.
AUTO RESOURCE MANAGEMENT TO ENHANCE RELIABILITY AND ENERGY CONSUMPTION IN HET...IJCNCJournal
This document summarizes an article from the International Journal of Computer Networks & Communications that proposes an Auto Resource Management (ARM) scheme to improve reliability and reduce energy consumption in heterogeneous cloud computing environments. The ARM scheme includes three components: 1) static and dynamic thresholds to detect host over/underutilization, 2) a virtual machine selection policy, and 3) a method to select placement hosts for migrated VMs. It also proposes a Short Prediction Resource Utilization method to improve decision making by considering predicted future utilization along with current utilization. The scheme is tested on a cloud simulator using real workload trace data, and results show it can enhance decision making, reduce energy consumption and SLA violations.
Affinity based virtual machine migration (AVM) approach for effective placeme...IRJET Journal
This document discusses an affinity-based virtual machine migration (AVM) approach to optimize VM placement in cloud environments. The AVM approach uses a genetic algorithm to group VMs based on affinity metrics like traffic rate and communication cost, with the goal of reducing communication cost. It introduces non-randomness to the initial population to reduce search space and time. The algorithm considers network topology and aims to minimize the number of networking devices between VMs to reduce communication cost and traffic rate. Simulation results show the AVM approach reduces communication cost by 15% and execution time compared to existing genetic algorithms.
Efficient fault tolerant cost optimized approach for scientific workflow via ...IAESIJAI
Cloud computing is one of the most dispersed and effective computing models, offering tremendous opportunity to address scientific problems with large-scale characteristics. Despite being such a dynamic computing paradigm, it faces several difficulties and falls short of meeting the necessary quality of service (QoS) standards. For sustainable cloud computing workflows, QoS is essential and needs to be addressed. Recent studies have looked at quantitative fault-tolerant programming to reduce the number of copies while still achieving the reliability requirement of a process on the heterogeneous infrastructure-as-a-service (IaaS) cloud. In this study, we create an optimal replication technique (ORT) with a fault-tolerance and cost-driven mechanism, known as the optimal replication technique with fault tolerance and cost minimization (ORT-FTC). ORT-FTC employs an iterative method that chooses the virtual machine and the copies with the shortest makespan for specific tasks. ORT-FTC is evaluated on generated test cases using scientific workflows such as CyberShake, the laser interferometer gravitational-wave observatory (LIGO), Montage, and SIPHT. Additionally, ORT-FTC is shown to be slightly improved over the current model in all cases.
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
This document summarizes research on techniques for virtual machine (VM) scheduling and management to improve energy efficiency in cloud computing. It discusses how VM scheduling algorithms aim to optimally map VMs to physical servers while minimizing costs and power consumption. Precopy and postcopy live migration techniques are described for managing VMs. The document surveys various algorithms for VM scheduling, including ones based on data transfer time, linear programming, and combinatorial optimization. It also discusses factors that affect VM migration efficiency such as hypervisor options and network configuration. Overall, the document provides an overview of energy-efficient approaches for VM scheduling and management in cloud computing.
VIRTUAL MACHINE SCHEDULING IN CLOUD COMPUTING ENVIRONMENTijmpict
Cloud computing is an upcoming technology in distributed computing that facilitates a pay-per-use model according to each user's demand and need. The cloud incorporates a set of virtual machines that comprise both storage and computational facilities. The fundamental goal of cloud computing is to offer effective access to remote and geographically distributed resources. The cloud is growing every day and faces numerous problems, such as scheduling. Scheduling refers to a collection of policies that regulate the order in which tasks are executed by a computer system. A good scheduler derives its scheduling plan from the type of work and the changing environment. This paper demonstrates a generalized precedence algorithm for effective execution of work and compares it with Round Robin and FCFS scheduling. The algorithm is tested within the CloudSim toolkit, and the outcome shows that it performs well compared with some customary scheduling algorithms.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
Load Balancing in Cloud Computing Environment: A Comparative Study of Service...Eswar Publications
Load balancing is a computer networking method for distributing workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by dedicated software or hardware, such as a multilayer switch or a Domain Name System server. In this paper, the existing static algorithms used for simple cloud load balancing are identified, and a hybrid algorithm for future development is suggested.
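As a concrete instance of the static algorithms this comparison covers, classic round-robin assignment can be written in a few lines (the request and server names are hypothetical):

```python
from itertools import cycle

def round_robin_assign(requests, servers):
    """Classic static round-robin: hand out requests to servers
    in fixed rotation, ignoring current load."""
    rotation = cycle(servers)
    return {req: next(rotation) for req in requests}

mapping = round_robin_assign(["r1", "r2", "r3", "r4", "r5"], ["s1", "s2"])
print(mapping)
```

Being static, the policy never consults server load, which is precisely the weakness hybrid and dynamic algorithms try to fix.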
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...ijccsa
The fast development of knowledge and communication has established a new computational style known as cloud computing. One of the main issues for cloud infrastructure providers is to minimize costs and maximize profitability, and energy management in cloud data centers is very important to achieving this goal. Energy consumption can be reduced either by releasing idle nodes or by reducing virtual machine migrations. For the latter, one of the challenges is selecting the placement of the migrated virtual machines on appropriate nodes. In this paper, an approach to reduce energy consumption in cloud data centers is proposed. The approach adapts the harmony search algorithm to migrate virtual machines, performing placement by sorting nodes and virtual machines in descending order of priority, where priority is calculated from the workload. The proposed approach is simulated, and the evaluation results show a reduction in virtual machine migrations, increased efficiency, and reduced energy consumption.
KEYWORDS
Energy Consumption, Virtual Machine Placement, Harmony Search Algorithm, Server Consolidation
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTINGijdpsjournal
Cloud computing has become an ideal computing paradigm for scientific and commercial applications. The increased availability of cloud models and allied development models creates an easier cloud computing environment. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling (DVFS) based task allocation strategy can minimize overall energy consumption and meet the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which degrades performance. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the energy and time consumed during computation, reconfiguration, and communication. Our proposed model confirms its effectiveness in implementation, scalability, power consumption, and execution time with respect to other existing approaches.
Similar to Bin packing algorithms for virtual machine placement in cloud computing: a review (20)
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Advanced control scheme of doubly fed induction generator for wind turbine us...
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par...
Wide application of the proportional-integral-differential (PID) regulator in industry requires constant improvement of the methods for adjusting its parameters. The paper deals with the optimization of PID-regulator parameters using neural network methods. A methodology for choosing the architecture (structure) of the neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, and the form and type of activation function. Algorithms for neural network training based on minimizing the mismatch between the regulated value and the target value are developed. The method of back propagation of gradients is proposed to select the optimal training rate of the neurons of the neural network. The neural network optimizer, which is a superstructure over the linear PID controller, improves the regulation accuracy, reducing the error from 0.23 to 0.09 and thus reducing the power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
An improved modulation technique suitable for a three level flying capacitor ...
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed
simplified modulation technique paves the way for more straightforward and
efficient control of multilevel inverters, enabling their widespread adoption and
integration into modern power electronic systems. Through the amalgamation of
sinusoidal pulse width modulation (SPWM) with a high-frequency square wave
pulse, this controlling technique attains energy equilibrium across the coupling
capacitor. The modulation scheme incorporates a simplified switching pattern
and a decreased count of voltage references, thereby simplifying the control
algorithm.
A review on features and methods of potential fishing zone
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features such as sea surface temperature (SST) and sea surface height (SSH), along with the classification methods used to classify the data. This study underscores the importance of examining potential fishing zones using advanced analytical techniques. It thoroughly explores the methodologies employed by researchers, covering both past and current approaches. The examination centers on data characteristics and the application of classification algorithms to identify potential fishing zones. Furthermore, the prediction of potential fishing zones relies significantly on the effectiveness of classification algorithms. Previous research has assessed the performance of models such as support vector machines (SVM), naïve Bayes, and artificial neural networks (ANN); in one reported result, SVM classified test data for fisheries with 97.6% accuracy versus 94.2% for naïve Bayes. Considering recent works in this area, several recommendations for future work are presented to further improve the performance of potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f...
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make small devices that boost productivity, with the goal of optimizing device density. Scientists are reducing connection delays to improve circuit performance. This led to three-dimensional integrated circuit (3D IC) concepts, which stack active devices and create vertical connections to diminish latency and shorten interconnects. Electrical noise coupling is a major worry with 3D integrated circuits. Researchers have developed and tested through-silicon vias (TSVs) and substrates to decrease electrical coupling. This study illustrates a novel noise-coupling reduction method using several electrical coupling models. A 22% drop in coupling from the wave-carrying (aggressor) TSVs to the victim TSVs introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Climate change's impact on the planet has forced the United Nations and governments to promote green energies and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces a hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram, which sets the priorities and requirements of the system, is presented. The proposed approach allows setups to advance their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study that represents a dairy milk farm support the theoretical work and highlight its benefits to existing plants. The short return on investment of the proposed approach supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Bibliometric analysis highlighting the role of women in addressing climate ch...
Fossil fuel consumption increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The analysis's findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results considered contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. This study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and give recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition state from grid-connected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo...
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
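To make the NARX structure concrete: the next output is predicted from lagged outputs (the autoregressive part) and lagged exogenous inputs through a nonlinear map. The sketch below uses made-up coefficients and a simple quadratic nonlinearity purely for illustration; it is not the identified model from the article:

```python
# Minimal NARX-style one-step predictor: the next terminal-voltage
# estimate depends on lagged voltage (autoregressive part) and lagged
# current (exogenous input), plus a nonlinear term. Coefficients and
# the quadratic nonlinearity are illustrative assumptions only.
def narx_step(v_lags, i_lags):
    linear = 0.9 * v_lags[-1] + 0.05 * v_lags[-2] - 0.02 * i_lags[-1]
    nonlinear = -0.001 * i_lags[-1] ** 2   # simple nonlinearity in current
    return linear + nonlinear

def simulate(v0, current, n_lags=2):
    v = [v0] * n_lags                      # seed the lag buffer
    for k in range(n_lags, len(current)):
        v.append(narx_step(v[-n_lags:], current[k - n_lags:k]))
    return v

volts = simulate(v0=4.0, current=[1.0, 1.2, 0.8, 1.5, 2.0, 0.5])
print([round(x, 3) for x in volts])
```

In practice the nonlinear map is fitted from measured data (e.g. by a neural network or polynomial regression) rather than fixed by hand as here.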
Smart grid deployment: from a bibliometric analysis to a survey
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...
One of the problems associated with power systems is the islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). Comparing all criteria together, the multi-criteria decision-making analysis yields overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal processing-based methods, and 20.8% for computational intelligence-based methods. Thus, it can be seen from the total weights
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
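In AHP, the weights reported above are typically derived as the principal eigenvector of a pairwise comparison matrix. A minimal sketch using power iteration follows; the 3x3 matrix is a hypothetical example, not the paper's actual judgments:

```python
# AHP priority weights via power iteration on a pairwise comparison
# matrix. The 3x3 matrix below is a hypothetical example, not the
# islanding-detection judgments from the paper.
def ahp_weights(matrix, iters=100):
    n = len(matrix)
    w = [1.0 / n] * n                      # start from uniform weights
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]             # normalize so weights sum to 1
    return w

# "Criterion A is 3x as important as B and 5x as important as C", etc.;
# the lower triangle holds the reciprocal judgments.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])
```

Tools such as Expert Choice automate exactly this eigenvector computation, plus a consistency check on the judgments.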
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar irradiances of 500 and 1,000 W/m², the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
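The perturb and observe (P&O) method mentioned above can be sketched in a few lines: keep stepping the operating voltage in the direction that last increased power, and reverse when power drops. The quadratic P-V curve below (peak assumed at 30 V) is a stand-in for a real panel model, not the paper's system:

```python
# Perturb-and-observe MPPT sketch. The P-V curve is a hypothetical
# quadratic with its maximum power point at 30 V; a real panel model
# would replace pv_power().
def pv_power(v):
    return max(0.0, 100.0 - 0.5 * (v - 30.0) ** 2)

def perturb_and_observe(v0, step=1.0, iterations=60):
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:                 # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe(v0=20.0)
print(round(v_mpp, 1))
```

The fixed step size causes the steady-state oscillation around the maximum power point that the modified P&O and FLC techniques aim to reduce.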
Adaptive synchronous sliding control for a robot manipulator based on neural ...
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed, the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
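Debug traces from internal FPGA signals tend to change slowly, which is why compressing before transmission pays off. A generic delta-plus-run-length sketch illustrates the idea; this is not the codec used by the lab module in the paper:

```python
# Illustrative compression for slowly varying debug traces:
# delta-encode the samples, then run-length-encode the deltas.
# Generic sketch, not the adaptive codec described in the paper.
def delta_rle(samples):
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    encoded, run = [], 1
    for prev, cur in zip(deltas, deltas[1:]):
        if cur == prev:
            run += 1                       # extend the current run
        else:
            encoded.append((prev, run))    # emit (delta value, run length)
            run = 1
    encoded.append((deltas[-1], run))
    return encoded

signal = [0, 1, 2, 3, 4, 4, 4, 4, 3, 2, 1, 0]   # ramp up, hold, ramp down
packed = delta_rle(signal)
ratio = len(signal) / len(packed)
print(packed, round(ratio, 2))
```

Ramps and plateaus collapse to single (delta, count) pairs, so sample-count-style ratios like the paper's 2.90 average are plausible for signals of this shape.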
Detecting and resolving feature envy through automated machine learning and m...
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...
Rapidly and remotely monitoring the solar cell system status parameters, solar irradiance, temperature, and humidity, is a critical issue in enhancing system efficiency. Hence, in the present article an improved smart prototype of an internet of things (IoT) technique based on an embedded system built around the NodeMCU ESP8266 (ESP-12E) was implemented experimentally. Three different regions in Egypt, the cities of Luxor, Cairo, and El-Beheira, were chosen to study their solar irradiance profile, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live on Ubidots via the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m² respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system make it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly recommend Luxor and Cairo, rather than El-Beheira, as suitable places to build a solar cell system station.
An efficient security framework for intrusion detection and prevention in int...
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Deep learning for smart grid intrusion detection: a hybrid CNN-LSTM-based model (ijaia)
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and long short-term memory (LSTM) algorithms. We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Comparative analysis between traditional aquaponics and reconstructed aquapon... (bijceesjournal)
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Design and optimization of ion propulsion drone (bjmsejournal)
Electric propulsion technology has been widely used in many kinds of vehicles in recent years, and aircraft are no exception. Technically, UAVs are electrically propelled but tend to produce a significant amount of noise and vibration. Ion propulsion technology for drones is a potential solution to this problem. Ion propulsion has been proven feasible in the earth's atmosphere. The study presented in this article covers the design of EHD thrusters and a power supply for an ion propulsion drone, along with performance optimization of the high-voltage power supply for endurance in the earth's atmosphere.
Introduction - e-waste: definition - sources of e-waste - hazardous substances in e-waste - effects of e-waste on environment and human health - need for e-waste management - e-waste handling rules - waste minimization techniques for managing e-waste - recycling of e-waste - disposal treatment methods of e-waste - mechanism of extraction of precious metals from leaching solution - global scenario of e-waste - e-waste in India - case studies.
Variable frequency drive. VFDs are widely used in industrial applications for... (PIMR BHOPAL)
A variable frequency drive (VFD) is an electronic device used to control the speed and torque of an electric motor by varying the frequency and voltage of its power supply. VFDs are widely used in industrial applications for motor control, providing significant energy savings and precise motor operation.
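Since a VFD varies voltage and frequency together, a concrete example helps: the simplest strategy is constant V/f (scalar) control, which scales voltage with the commanded frequency to keep stator flux roughly constant. The sketch below is a generic illustration with assumed ratings (400 V, 50 Hz motor, 8 V low-speed boost), not a description of any particular drive:

```python
# Constant V/f (scalar) control, the simplest VFD voltage law: scale
# the motor voltage with the commanded frequency to hold stator flux
# roughly constant. Ratings are illustrative assumptions (400 V, 50 Hz).
def vf_setpoint(f_cmd, v_rated=400.0, f_rated=50.0, v_boost=8.0):
    if f_cmd >= f_rated:
        return v_rated                 # above rated speed: voltage is capped
    # linear V/f ramp with a small boost to overcome stator resistance
    return v_boost + (v_rated - v_boost) * f_cmd / f_rated

print(vf_setpoint(25.0))               # half speed -> roughly half voltage
```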
Gas agency management system project report.pdf (Kamal Acharya)
The project entitled "Gas Agency" is done to make the manual process easier by making it a computerized system for billing and maintaining stock. Gas agencies receive order requests through phone calls or in person from their customers and deliver the gas cylinders to their addresses based on demand and the previous delivery date. This process is made computerized, and the customer's name, address, and stock details are stored in a database. Based on this, billing for a customer is made simple and easier, since a customer's order for gas can be accepted only after a certain period has passed since the previous delivery. This can be calculated and billed easily through the system. There are two types of delivery: domestic-purpose delivery and commercial-purpose delivery. The bill rate and capacity differ for both. This can be easily maintained and charged accordingly.
Applications of artificial Intelligence in Mechanical Engineering.pdf (Atif Razi)
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Bin packing algorithms for virtual machine placement in cloud computing: a review
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 9, No. 1, February 2019, pp. 512-524
ISSN: 2088-8708, DOI: 10.11591/ijece.v9i1.pp512-524
Bin packing algorithms for virtual machine placement in
cloud computing: a review
Kumaraswamy S¹ and Mydhili K Nair²
¹Department of Computer Science and Engineering, Global Academy of Technology, Bengaluru 560098, India
²Department of Information Science and Engineering, Ramaiah Institute of Technology, Bengaluru 560054, India
Article Info
Article history:
Received Dec 31, 2017
Revised Jul 25, 2018
Accepted Jul 29, 2018
Keywords:
Cloud computing
Virtual machine placement
Bin packing
ABSTRACT
Cloud computing has become more commercial and familiar. Cloud data centers face huge challenges in maintaining QoS and keeping Cloud performance high. The placement of virtual machines among physical machines in the Cloud is significant in optimizing Cloud performance. Bin packing based algorithms are the most widely used approach to achieve virtual machine placement (VMP). This paper presents a rigorous survey and comparison of bin packing based VMP methods for the Cloud computing environment. Various methods are discussed, and the VM placement factors in each method are analyzed to understand the advantages and drawbacks of each method. The scope of future research and studies is also highlighted.
Copyright c 2019 Institute of Advanced Engineering and Science.
All rights reserved.
Corresponding Author:
Kumaraswamy S,
Dept. of Computer Science and Engineering,
Global Academy of Technology,
Rajarajeshwari Nagar, Bengaluru - 560098 India.
+919886363412
Email: skswamy99@gmail.com
1. INTRODUCTION
1.1. Background
In recent years, Cloud computing architectures have received increasing attention due to their great
promise in enabling the distributed computing paradigm [1]. Cloud computing provides a pool of computing
resources for Cloud users' applications to use [2]. These resources can be rented to users or customers by
Cloud Service Providers (CSPs) on a pay-as-you-go model, like a public utility such as gas, water and
electricity [3, 4].
Virtualization is an enabling technology for cloud computing. It provides flexibility to the cloud: it
minimizes energy cost, maximizes CPU utilization, and maximizes the usage of memory and disk.
Multiple VMs on a PM may degrade the performance of the PM or overutilize it. To overcome
these issues, the virtualization layer provides the mobility to place or move a VM to other PMs within the
cloud. The plan for placing VMs among PMs, which plays an important role in optimizing cloud performance,
is called Virtual Machine Placement (VMP); it finds the optimal placement/mapping of VMs onto PMs under
various objectives and constraints.
In principle, the VMP problem is related to the classical Bin Packing (BP) problem, where the aim is
to pack a set of items/objects of different sizes into a minimum number of unit-capacity bins.
It is proved in the literature that both BP and VMP are NP-hard problems [5]. If there are m PMs and n VMs,
then there are m^n possible placement solutions; if m and n are very big numbers, the solution space is huge.
One can easily say that VMP is as hard as BP. The differences between classical BP and BP-based VMP are
detailed in Table 1.
Since the VMP problem is NP-hard, exact algorithms take too much time to obtain solutions; hence
this problem can be effectively approximated using approximation algorithms such as greedy algorithms and
genetic algorithms. These provide a solution which, though very good in most cases, may not be optimal but
near-optimal.
Packing algorithms like First Fit, Best Fit, Worst Fit, First Fit Decreasing (FFD), Best Fit Decreasing
(BFD), Worst Fit Decreasing (WFD), greedy algorithms, etc., are used to place VMs optimally. Also, authors in
Refs. [6, 7, 8, 9] present a few heuristics like First-Fit (FF), Best-Fit (BF), and Worst-Fit (WF) for load
balancing, some of which could be tuned to serve the purpose of VMP.
In FF, the VM is placed in the first PM found with available capacity. In BF, as the name suggests,
the fit is best when the leftover space is least after placing the VM in the PM. In WF, the leftover space is
largest compared to the previous methods when the VM is packed in the PM. In the First Fit Decreasing
(FFD), Best Fit Decreasing (BFD), and Worst Fit Decreasing (WFD) heuristics, the VMs are sorted in
decreasing order before packing; once sorted, the VMs are placed according to the original heuristics.
Table 1. Differences between Classical BP and BP-based VMP schemes

Classical BP | BP-based VMP
Item size is fixed once placed | Item size can vary even after placement due to SLAs
Mono-objective optimization, e.g., minimize the number of bins | Multi-objective optimization, e.g., minimize the number of bins and maximize VM performance
Single-dimensional resource packing, e.g., CPU or memory | Multi-dimensional resource packing, e.g., CPU and memory
The VMP problem can also be considered a multi-dimensional or multi-capacity problem within
a PM [10]. In the Multi-Capacity Bin Packing (MCBP) problem, once an item is allocated to a bin, its
resources like CPU, memory, bandwidth, etc., cannot be allocated to or made available to other items in the bin.
In the multi-dimensional problem, capacity is shared and not dedicated to a single item. This leads to
conflict among items for dedicated resources. Figure 1 shows the comparison between multi-dimensional
and multi-capacity bin packing.
Figure 1. multi-dimensional vs. multi-capacity bin packing [10]
1.2. Problem Description
There have been considerable contributions in the literature regarding the BP problem and its application
to the VMP issue. Many approximation techniques to solve the BP problem for the VMP issue have been presented;
however, a comparative study of all these techniques is still lacking in the literature. Comparing all the major
techniques for the VMP issue through BP problem modeling would aid in understanding the relative merits and
design limitations of these techniques, and would also aid in framing future problems in this area. The
main focus of this paper is to provide a comprehensive survey and simulation study of the various approximation
techniques, specifically BP solution techniques, used for the VMP issue; also, to provide the merits and
limitations of these techniques, and to identify future research problems.
1.3. Contributions
In this survey paper, the important considerations and challenges, such as resource utilization and
reliability, in efficiently designing a BP based solution for the VMP issue are outlined. The three important
approximation solutions for the BP problem, the First Fit, Best Fit and Best Fit Decreasing approaches, are
described, as are the recently extended techniques for these approximation solutions presented in the literature.
Heuristics for the VMP issue usually do not guarantee performance bounds; nevertheless, they provide empirically
demonstrated performance merits, and the different heuristic techniques used for the VMP issue are also outlined.
A relative simulation-oriented performance comparison of the different approximation solutions for the BP
problem based VMP issue is presented, and the corresponding merits and limitations are outlined. Finally, future
directions for the VMP issue are highlighted.
This paper is organized as follows: we start with a discussion of the considerations and challenges of
VMP (Section 2), followed by the BP-based problem definition in Section 3. Section 4 presents BP based VMP.
Challenges of BP algorithms are presented in Section 5. Section 6 presents a performance evaluation of VMP
schemes. This is followed by future studies (Section 7) and the conclusion (Section 8).
2. CONSIDERATIONS AND CHALLENGES OF VMP
Several placement problems exist with regard to determining where data needs to be stored and
where the jobs in VMs can be executed. Here we discuss the considerations and challenges of VMP.
2.1. Considerations of VMP
There are several factors involved in deciding as to when and where to place or reallocate VMs for
performing computations in Cloud computing environment. The main factors are as follows.
(a) Performance: To ensure that the utilization of physical infrastructure is improved, data centers (DC)
need to employ virtualization that can facilitate a large number of applications running simultaneously [11].
(b) Cost: Recent trends in the Cloud market show that the use of dynamic pricing schemes is
increasing [12] so as to reduce investment in the DC. Therefore, the internal cost of VMP also needs to be
considered.
(c) Locality: If accessibility and usability for users are to be considered, VMs should be located
close to users.
(d) Reliability and continuous availability: The reliability and availability of the different DCs, and
their expected usage frequency, must be taken into account.
2.2. Challenges of VMP
The variety of scenarios in which applications are to be deployed requires consideration of the relevant
parameters, which results in the following challenges when placing VMs.
(a) Firstly, the non-existence of a generic model for representing the various resource scheduling scenarios.
(b) Secondly, the parameterization of the model for larger problem sizes.
(c) Thirdly, the VMP problem is typically mapped to the multiple-knapsack problem, belonging to the
category of NP-hard problems [13]. Thus, the tradeoff between execution time and quality of solution is
an important issue to be tackled, given the size of real-life DCs.
3. BIN PACKING PROBLEM
Let S = {S1, S2, ..., Sm} be the set of bins, all of the same size V. Let a1, a2, ..., an be the sizes of
the n items that need to be packed into the bins. The task is to discover an integer number of bins B and a
B-partition S1 ∪ S2 ∪ ... ∪ SB of the set {1, 2, ..., n} such that Σ_{i∈Sk} ai ≤ V for all k = 1, 2, ..., B.
The optimal solution is the one with minimal B. The Integer Linear Programming formulation of the Bin
Packing problem is shown in Equation (1).

min B = Σ_{i=1}^{n} yi    (1)

subject to
B ≥ 1,
Σ_{j=1}^{n} aj xij ≤ V yi,  ∀i ∈ {1, 2, ..., n},
Σ_{i=1}^{n} xij = 1,  ∀j ∈ {1, 2, ..., n},
yi ∈ {0, 1},  ∀i ∈ {1, 2, ..., n},
xij ∈ {0, 1},  ∀i ∈ {1, 2, ..., n}, ∀j ∈ {1, 2, ..., n}.

Here, yi = 1 if bin i is used and yi = 0 otherwise, and xij = 1 if item j is put into bin i and
xij = 0 otherwise.
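To make the formulation concrete, the following minimal sketch (not from the paper; the item sizes and capacity are made up for illustration) finds the optimal B for a tiny instance by brute force over every assignment of items to bins, rather than by solving the ILP. This is feasible only for very small n, which is exactly why the approximation algorithms surveyed next are needed.

```python
from itertools import product

def min_bins_exhaustive(sizes, V):
    """Exact minimum number of bins for small instances, by trying
    every assignment of the n items to at most n candidate bins."""
    n = len(sizes)
    best = n  # n bins always suffice (one item per bin)
    # assignment[j] = index of the bin that item j is placed in
    for assignment in product(range(n), repeat=n):
        loads = [0] * n
        for j, i in enumerate(assignment):
            loads[i] += sizes[j]
        if all(load <= V for load in loads):  # every bin within capacity V
            used = sum(1 for load in loads if load > 0)
            best = min(best, used)
    return best

# Toy instance: items of sizes 5, 4, 3, 3 into bins of capacity 8 -> optimum is 2
print(min_bins_exhaustive([5, 4, 3, 3], 8))  # -> 2
```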
4. BP-BASED VMP
We view the problem of placing VMs in data centers (DCs) as similar to a BP problem. We
visualize the classic BP problem [5] in the context of VMP in the following way: VMs (objects) of different
sizes are placed in hosts (bins) so as to ensure that a minimum number of hosts is employed for placing all
VMs. In other words, the problem statement can be stated as follows: with x PMs made available with resource
capacities in terms of memory, CPU and network bandwidth, we require y VMs to be placed such that the
total resource requirement of the VMs placed on a PM does not exceed its capacity. Several BP-based VMP
algorithms used in finding optimal VM placements are discussed next.
4.1. First Fit (FF) approach
The First Fit bin packing algorithm is described in Algorithm 1. The items are scanned in any order,
and an attempt is made to place each item in the available bins sequentially. If it does not fit into any existing
bin, a new bin is created to hold the item. This algorithm achieves an approximation ratio of 2.
Algorithm 1 First Fit Algorithm
for i = 1 to n do
    placed ← false
    for j = 1 to m do
        if Item i with size ai fits in Bin Sj then
            Place the item ai in Bin Sj.
            placed ← true
            break
        end if
    end for
    if placed = false then
        Open a new bin and place the item in it.
        m = m + 1
    end if
end for
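As a minimal runnable sketch of Algorithm 1 (the item sizes and bin capacity below are made up for illustration):

```python
def first_fit(sizes, V):
    """First Fit: place each item in the first open bin with room;
    open a new bin when none fits. Returns the bins' contents."""
    bins = []       # each bin is a list of item sizes
    residual = []   # remaining capacity of each bin
    for a in sizes:
        for j, r in enumerate(residual):
            if a <= r:
                bins[j].append(a)
                residual[j] -= a
                break
        else:  # no existing bin fits: open a new one
            bins.append([a])
            residual.append(V - a)
    return bins

print(first_fit([5, 4, 3, 3], 8))  # -> [[5, 3], [4, 3]]
```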
Authors in [14] introduce a dynamic server and consolidation management algorithm that measures
historical data, forecasts future demand and re-maps VMs to PMs. To forecast future demand, time series
method is used. The BP heuristic method based on FF approximation is used to perform mapping between
VMs and PMs.
Authors in [15] investigated VMP problem in a Cloud as a generalized assignment problem (GAP)
and formulated a multi-level generalized assignment problem (MGAP). It uses FF heuristic to solve MGAP
and maximizes the profit under the SLA.
Bin packing algorithms for virtual machine... (Kumaraswamy S)
5. 516 ISSN: 2088-8708
In the FF approach, the VMs are placed in the host where they first fit, according to a predefined order
among active hosts. The FF algorithm [16, 5] accomplishes this by activating a single host at a time, as each
is filled up with VMs. This results in the DC saving energy, since only hosts that contain some VMs need to be
switched on. However, since host resource usage levels tend to be close to the maximum, VM migration is
more likely to happen in the presence of elasticity, which degrades performance besides consuming extra
energy.
4.2. Best Fit (BF) approach
The Best Fit technique is described in Algorithm 2. The algorithm works similarly to First Fit, but
each item is placed in the bin that would have the lowest residual capacity after the placement. The Best Fit
algorithm achieves an approximation ratio of 2.
Algorithm 2 Best Fit Algorithm
Let r1 = V, r2 = V, ..., rm = V be the initial residual capacities of the bins.
for i = 1 to n do
    best ← none
    for j = 1 to m do
        if Item i with size ai fits in Bin Sj (ai ≤ rj) then
            scoreij ← rj − ai
            if best = none or scoreij < scorei,best then
                best ← j
            end if
        end if
    end for
    if best ≠ none then
        Place the item ai in Bin Sbest.
        rbest ← rbest − ai
    else
        Open a new bin and place the item in it.
        m = m + 1
        rm ← V − ai
    end if
end for
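A corresponding sketch of Algorithm 2, which selects the feasible bin whose residual capacity after placement would be smallest (toy values, chosen for illustration):

```python
def best_fit(sizes, V):
    """Best Fit: place each item in the open bin that yields the smallest
    residual capacity after placement; open a new bin when none fits."""
    bins, residual = [], []
    for a in sizes:
        # indices of the bins the item fits in, if any
        candidates = [j for j, r in enumerate(residual) if a <= r]
        if candidates:
            # tightest fit: minimize leftover space after placing the item
            j = min(candidates, key=lambda j: residual[j] - a)
            bins[j].append(a)
            residual[j] -= a
        else:
            bins.append([a])
            residual.append(V - a)
    return bins

print(best_fit([4, 3, 5, 3], 8))  # -> [[4, 3], [5, 3]]
```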
Authors in Ref. [17] consider a two-stage greedy algorithm to solve VMP for maximizing energy
efficiency and network performance. In the first stage, if there is no congestion, energy optimization takes
priority: the authors combine minimum-cut hierarchical clustering with the Best Fit (BF) algorithm, enabling
VMs with large mutual traffic to be placed on the same PM or under the same access switch. In the second
stage, if there is congestion, they apply a local search algorithm to minimize the Maximum Link Utilization
(MLU) and link congestion, with equal distribution of network traffic.
Authors in [18] consider the fat-tree DC topology to propose a solution for the VMP problem and
migration. The solution reduces power cost and job delay by consolidating VMs onto a few PMs and migrating
VMs to close locations. In the VMP stage, it considers two mechanisms, namely a random placement mechanism
and power-efficient placement. In random placement, a PM is selected randomly to place a VM. In the
power-efficient mechanism, three algorithms, i.e., FF, BF and WF, are discussed; an OpenFlow controller uses
these algorithms to find a proper PM to deploy a VM based on a weighted difference.
The authors in [19] discuss a static disk threshold-based migration algorithm aiming at optimizing
the performance of the VMs. To prevent the degradation and congestion problems in the network, authors in
[20] investigate a VM placement method, called energy efficiency and quality of service aware VM placement
(EQVMP). To predict the future behavior of a VM on destination host, authors in Ref. [21, 17] present a VM
placement scheme named Backward Speculative Placement (BSP), that monitors the historical demand traces
of the deployed VMs.
4.3. First Fit Decreasing (FFD) and Best Fit Decreasing (BFD) approach
The First Fit Decreasing and Best Fit Decreasing techniques work similarly to First Fit and Best Fit
respectively, but the items are first arranged in decreasing order of size, which improves the approximation
ratio: both techniques achieve an approximation ratio below 2. Algorithms 3 and 4 describe these two
techniques.
Algorithm 3 First Fit Decreasing Algorithm
Arrange the item set (1, 2, ..., n) in decreasing order of size: a1 ≥ a2 ≥ ... ≥ an.
for i = 1 to n do
    placed ← false
    for j = 1 to m do
        if Item i with size ai fits in Bin Sj then
            Place the item ai in Bin Sj.
            placed ← true
            break
        end if
    end for
    if placed = false then
        Open a new bin and place the item in it.
        m = m + 1
    end if
end for
Algorithm 4 Best Fit Decreasing Algorithm
Arrange the item set (1, 2, ..., n) in decreasing order of size: a1 ≥ a2 ≥ ... ≥ an.
Let r1 = V, r2 = V, ..., rm = V be the initial residual capacities of the bins.
for i = 1 to n do
    best ← none
    for j = 1 to m do
        if Item i with size ai fits in Bin Sj (ai ≤ rj) then
            scoreij ← rj − ai
            if best = none or scoreij < scorei,best then
                best ← j
            end if
        end if
    end for
    if best ≠ none then
        Place the item ai in Bin Sbest.
        rbest ← rbest − ai
    else
        Open a new bin and place the item in it.
        m = m + 1
        rm ← V − ai
    end if
end for
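Both decreasing variants reduce to sorting followed by the base heuristic. A minimal sketch of FFD (toy sizes and an illustrative capacity; the comment on the final line reflects the order-sensitivity of plain First Fit):

```python
def first_fit_decreasing(sizes, V):
    """FFD: sort items in non-increasing order, then run First Fit."""
    bins, residual = [], []
    for a in sorted(sizes, reverse=True):
        for j, r in enumerate(residual):
            if a <= r:
                bins[j].append(a)
                residual[j] -= a
                break
        else:
            bins.append([a])
            residual.append(V - a)
    return bins

# Plain First Fit on this item order needs 3 bins; sorting first packs it into 2.
print(first_fit_decreasing([3, 3, 5, 4, 1], 8))  # -> [[5, 3], [4, 3, 1]]
```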
In the simplified model, since only a single dimension is considered (e.g., CPU usage), the VMs can be
ordered according to their resource requirement. In a complicated model, several dimensions are considered to
establish a ranking among the set of VMs. The ranking is established in the following manner.
(a) The volume of a VM is calculated by multiplying its demands in all dimensions (the volume
method).
(b) The rank of each VM is determined with respect to a reference host, which is normally the least
occupied active host or the last one to be activated. In this approach, during rank calculation, the
VM placement problem can be modeled as an instance of the 0-1 (Binary) Knapsack Problem
[17].
For the complicated model, FFD cannot be applied directly; a generalization of the FFD algorithm is
essential, in which the multiple dimensions are converted into a single dimension. Such algorithms are called
FFD-based algorithms.
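As an illustrative sketch (not taken from any of the surveyed papers), one way such an FFD-based algorithm can reduce multi-dimensional demands to a single ranking key is the volume method described above: rank VMs by the product of their normalized demands, then first-fit them while checking every dimension. All demand and capacity values below are made up:

```python
from math import prod

def volume_ffd(vms, capacity):
    """FFD-style placement for multi-dimensional demands: rank VMs by
    'volume' (product of normalized demands), then first-fit onto PMs,
    checking every dimension. vms: list of demand tuples; capacity:
    per-dimension PM capacity tuple."""
    pms = []         # residual capacity vector of each active PM
    placements = []  # (vm index, pm index)
    # highest-volume VMs are placed first
    order = sorted(range(len(vms)),
                   key=lambda i: prod(d / c for d, c in zip(vms[i], capacity)),
                   reverse=True)
    for i in order:
        demand = vms[i]
        for p, res in enumerate(pms):
            if all(d <= r for d, r in zip(demand, res)):  # fits in every dimension
                pms[p] = tuple(r - d for d, r in zip(demand, res))
                placements.append((i, p))
                break
        else:  # no active PM can host the VM: activate a new one
            pms.append(tuple(c - d for d, c in zip(demand, capacity)))
            placements.append((i, len(pms) - 1))
    return placements, len(pms)

# Two-dimensional (CPU, memory) toy example
placements, n_pms = volume_ffd([(4, 2), (2, 6), (3, 3), (1, 1)], capacity=(8, 8))
print(n_pms)  # -> 2
```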
The research in Ref. [22] considers Virtual Machine Monitor (VMM) CPU cycles and migration
overhead in designing the VMP problem, to maximize application performance and reduce the number of PMs.
The problem is evaluated with three different FFD-based heuristics, viz. the Dot Product, Euclidean
Distance and Resource Imbalance Vector methods. It reduces the number of migrations by more than 84%.
Authors in [23] considered the deterministic resources, viz. CPU, memory, power consumption and
network bandwidth, as stochastic resources in formulating the Multi-Dimensional Stochastic Virtual Machine
Placement Problem (MSVP). Since it is an NP-hard problem, the authors proposed a polynomial time algorithm
called Max-Min Multi-Dimensional Stochastic Bin Packing (M3SBP) to maximize the minimum utilization
ratio over all the resources of the PMs in large-scale data centers. This algorithm is inspired by First Fit
Decreasing (FFD) and Dominant Resource First (DRF).
The paper [24] uses a data model for each VM that accurately gives the resource usage
of the VM. Based on this forecast, a VM placement algorithm with a power-aware BFD heuristic is
designed. The simulation results for this algorithm show an effective reduction in power consumption and in
the number of VM migrations, while preventing SLA violations.
Authors in [25] proposed the energy-efficient and QoS aware VM placement (EQVMP) mechanism.
This mechanism is a combination of three modules, viz. hop reduction, energy saving and load balancing.
The energy saving module uses a VM placement technique inspired by Best Fit Decreasing (BFD) and max-min
multi-dimensional stochastic bin packing (M3SBP), so that it minimizes the number of servers in the data
center without SLA violation.
In [26], the First Fit Decreasing algorithm is used to solve the VMP problem. Here, a priority is
assigned to each bin; the highest priority bins are those consuming too many or too few resources, and the
virtual machines in these bins are subjected to placement.
4.4. Other Heuristics
A few of the authors consider different packing types for VM consolidation in their works. For
example, in Ref. [5] the authors consider the consolidation problem as a random-variable packing problem
with bandwidth constraints, and solve it with an approximation algorithm. Ref. [10] considers the VM
consolidation problem as a multi-capacity stochastic bin packing problem and proposes a heuristic method to
increase the efficiency of the placement algorithm. Ref. [9] proposes a similar approach to VM placement
using bin packing, to increase resource utilization and the profit of the CSP. A few authors also consider the
Next Fit (NF) approach, in which the VM is placed on the current PM if its resource demands are satisfied
there; otherwise a new PM is started. Next Fit provides an approximation ratio of 2.
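A minimal sketch of the Next Fit heuristic just described, with toy values chosen for illustration (only the most recently opened bin/PM is ever considered):

```python
def next_fit(sizes, V):
    """Next Fit: keep only the current bin open; if the item does not fit,
    close the bin and open a new one. Runs in O(n) time."""
    bins = [[]]
    residual = V
    for a in sizes:
        if a <= residual:
            bins[-1].append(a)
            residual -= a
        else:
            bins.append([a])
            residual = V - a
    return bins

print(next_fit([5, 3, 4, 3, 1], 8))  # -> [[5, 3], [4, 3, 1]]
```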
In the paper [5], the authors suggest saving energy by minimizing the number of idle resources in a cloud
computing environment. This is assessed through far-reaching experiments that quantify the performance
of the suggested algorithm, which is also compared with the FCFSMaxUtil and Energy aware
Task Consolidation (ETC) algorithms. The outcomes show that the suggested algorithm surpasses
FCFSMaxUtil and ETC in terms of CPU utilization and energy consumption.
Based on a Markov Decision Process (MDP) and reinforcement learning for auto-scaling
resources, the authors in [27] propose an automatic resource provisioning system. Simulation results of the
proposed system show better performance than similar approaches with respect to the rate of Service Level
Agreement (SLA) violations and stability.
The authors in [28] proposed an optimal technique based on a four-threshold adaptive model to reduce
energy consumption. Hosts are grouped into five load-level clusters (ranging from lightly loaded to overloaded)
using the K-Means clustering algorithm, and VMs are migrated between these clusters on the basis of the
threshold values, using a mathematical modeling approach to reduce energy consumption.
Table 2 summarizes the heuristics discussed in the previous sections by comparing their basic
attributes: the problem type, the type of heuristics used, the objective type (energy, bandwidth or QoS
aware), the type of placement, and whether the objective function is composite or single.
Table 2. Bin Packing Algorithms Surveyed

Paper | Problem type | Heuristics | Energy aware | Bandwidth aware | QoS aware | Static/Dynamic placement | Objective function
[25] | Energy-efficient and QoS aware VMP problem | BFD | Yes | Yes | Yes | Dynamic | Dynamic
[24] | Cloud server consolidation problem | BFD | Yes | No | Yes | Dynamic | Composite
[18] | Power-efficient VM placement and migration problem | FF, BF, WF | Yes | Yes | No | Dynamic | Composite
[17] | Multi-dimensional resource constraints packing problem | BF based | Yes | Yes | No | Dynamic | Composite
[15] | Multilevel generalized assignment problem (MGAP) | FF | No | Yes | No | Static and Dynamic | Single
[14] | Dynamic resource allocation problem | FF | NA | No | Yes | Dynamic | Single
[22] | NA | FFD-based | NA | No | Yes | Dynamic | Single
[23] | Multi-dimensional stochastic VM placement (MSVP) | FFD-based | NA | Yes | Yes | Dynamic | Single
[10] | Multi-capacity stochastic bin packing optimization problem | Hierarchical based | Yes | Yes | No | Dynamic | Composite
[5] | VM consolidation problem | Random Variable Packing (RVP) | No | Yes | No | Dynamic | Composite
[9] | Deterministic bin packing | Worst Fit | Yes | No | Yes | Static | NA
[26] | VM consolidation problem for power saving | Extended FFD | Yes | No | No | Rank-based | NA
5. CHALLENGES OF BP ALGORITHMS
Some of the important challenges of BP algorithms are discussed below.
(a) Interference between items: Due to the multi-dimensional packing of resources used by several
VMs at the same time, there is contention for resources among VMs. There may also be affinity between
two VMs, due to which they may be required to be placed together. There can be various possible
reasons for such affinity: for example, the network traffic between two communicating VMs may
suggest that they be placed together so as to reduce network overheads, or there may be business
constraints like the requirement that the web-server and database layers be together.
The contention may affect the performance of co-located VMs. There are several approaches that
have been proposed to mitigate the effect of contention such as resource isolation among VMs and
optimal mapping of VMs and PMs.
There could also be interference between two VMs which requires that they should not be placed
together. This may be due to technical constraints. For example, say there is a resource (some
remote data center) mirrored at two places, that the two VMs need to access. The VMs are pro-
grammed to access the nearest resource. Now, there may be a technical requirement to keep the
two VMs separately, so that they do not access the same copy. There may be disk contention: two
VMs trying to access the same data on disk. Interference can also be due to business constraints,
like not keeping the VMs of competitors together, or keeping VMs apart so that one of them remains
available even if the physical host hosting the other fails.
(b) Item size and bin size: VM capacity is not always static over its lifetime, due to SLAs. The
dynamic size of a VM can be considered in a BP based algorithm; this is also true for the bin/PM size.
For variable-sized BP, there are many open questions. In terms of the asymptotic worst-case ratio,
the following questions arise:
i. which combination of sizes produces the smallest worst-case ratio?
ii. what can be said about the problem with at least three bin sizes?
iii. in terms of absolute worst-case ratio, what is a lower bound for off-line algorithms?
iv. how to design an optimal on-line algorithm?
(c) Partial packing: Existing research works do not discuss filling PMs/bins in an optimal way, leaving
large resource fragments or residue; hence, resources are underutilized. Better BP algorithms need
to be designed so as to avoid resource fragments.
(d) No migration: Most BP based works do not support migration of VMs. Tasks cannot be relocated
once they have been allocated to a processor (i.e., no migration).
6. PERFORMANCE EVALUATION
In this section, simulation results analyzing the relative performance of the different VMP
algorithms are presented. The simulation was carried out with the open source simulation toolkit CloudSim.
Two performance metrics are used in the simulation study: number of servers and CPU utilization. The first
metric indicates the number of servers on which all the VMs are running; clearly, fewer servers indicate
that efficient resource utilization has been achieved. The second metric indicates the total CPU utilization of
the cloud servers; higher CPU utilization indicates that proper resource utilization is being achieved.
The simulation results with respect to the number of servers are illustrated in Figure 2. It is clear that FF,
FFD and Max-Min achieve the best performance, mainly due to the theoretical performance
bounds established for these algorithms. Similarly, NF and S-NF perform poorly, mainly due to the
lack of effective theoretical performance bounds. Figure 3 illustrates the simulation results with respect to CPU
utilization. All the VMP techniques show similar performance, mainly due to their shared primary design focus
on achieving efficient CPU utilization.
From this simulation study, it can be inferred that FF, FFD and Max-Min are the most favorable
techniques for achieving efficient load balancing in the cloud.
7.2. Power-aware VMP schemes
Due to the high cost of power consumption as well as concerns about global warming and CO2
emissions, data centers need to employ Green IT in their functioning, e.g., reducing the number
of active servers, networking elements and other data center components. Maintenance costs are
drastically reduced with the deployment of green data centers. To facilitate placement decisions, the following
factors of power-aware VM placement schemes could be considered:
(a) CPU/server states: Four states that could be considered are idle, average, active, and overutilized
[7].
(b) Network elements: Reduce network routing element’s cost.
(c) Data center power usage: The cost could be divided into four categories: server base energy, server
dynamic energy, cooling energy, and peak power.
7.3. Network-aware VMP schemes
Inter-data center and intra-data center network traffic can have a significant effect on the revenue of the
CSP, owing to its dependency on the SLAs and the performance of the CSP. With the increasing trend towards
communication-intensive applications, network-aware VM placement schemes could be very significant in
achieving the following:
(a) Addressing the network traffic related issues with respect to VM placement.
(b) Distribution of the network traffic evenly.
7.4. Cost aware VMP
The CSP needs to optimize the data center maintenance cost. A cost-aware VM placement
scheme could achieve cost savings while respecting QoS and SLAs. For better placement with cost-aware
VM placement schemes, the following factors need to be considered:
(a) VM cost: The cost of switching (powering) on the VMs.
(b) PM cost: The cost of using the PM at the specific instant of time.
(c) Distance between the VMs and the clients: User performance can be improved by reducing the
network distance between the VMs and the clients.
(d) Cooling cost: The cost of the cooling system at the specific instant in the DCs.
8. CONCLUSION
In this paper, we presented a rigorous survey of BP based VMP algorithms in the Cloud. VMP is
studied and classified with reference to BP. Moreover, comprehensive comparisons of BP inspired VMP
algorithms are presented, highlighting the items that should be considered in future research. In these
comparisons, for different placement problems, BP based heuristics are used to improve energy and QoS
issues in the Cloud.
REFERENCES
[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin,
I. Stoica et al., “A view of cloud computing,” Communications of the ACM, vol. 53, no. 4, pp. 50–58,
2010.
[2] I. Foster, Y. Zhao, I. Raicu, and S. Lu, “Cloud computing and grid computing 360-degree compared,” in
Grid Computing Environments Workshop, 2008. GCE’08. IEEE, 2008, pp. 1–10.
[3] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, “Cloud computing and emerging it plat-
forms: Vision, hype, and reality for delivering computing as the 5th utility,” Future Generation computer
systems, vol. 25, no. 6, pp. 599–616, 2009.
[4] S. S. Manvi and G. K. Shyam, “Resource management for infrastructure as a service (iaas) in cloud
computing: A survey,” Journal of Network and Computer Applications, vol. 41, pp. 424–440, 2014.
[5] M. Wang, X. Meng, and L. Zhang, “Consolidating virtual machines with dynamic bandwidth demand in
data centers,” in INFOCOM, 2011 Proceedings IEEE. IEEE, 2011, pp. 71–75.
[6] A. Singh, M. Korupolu, and D. Mohapatra, “Server-storage virtualization: integration and load balancing
in data centers,” in Proceedings of the 2008 ACM/IEEE conference on Supercomputing. IEEE Press,
2008, p. 53.
[7] N. Rasouli, M. R. Meybodi, and H. Morshedlou, “Virtual machine placement in cloud systems using
learning automata,” in Fuzzy Systems (IFSC), 2013 13th Iranian Conference on. IEEE, 2013, pp. 1–5.
[8] Z. Xiao, J. Jiang, Y. Zhu, Z. Ming, S. Zhong, and S. Cai, “A solution of dynamic vms placement prob-
lem for energy consumption optimization based on evolutionary game theory,” Journal of Systems and
Software, vol. 101, pp. 260–272, 2015.
[9] K. R. Babu and P. Samuel, “Virtual machine placement for improved quality in iaas cloud,” in Advances
in Computing and Communications (ICACC), 2014 Fourth International Conference on. IEEE, 2014,
pp. 190–194.
[10] I. Hwang and M. Pedram, “Hierarchical virtual machine consolidation in a cloud computing system,” in
Cloud Computing (CLOUD), 2013 IEEE Sixth International Conference on. IEEE, 2013, pp. 196–203.
[11] O. Tickoo, R. Iyer, R. Illikkal, and D. Newell, “Modeling virtual machine performance: challenges and
approaches,” ACM SIGMETRICS Performance Evaluation Review, vol. 37, no. 3, pp. 55–60, 2010.
[12] J. L. L. Simarro, R. Moreno-Vozmediano, R. S. Montero, and I. M. Llorente, “Dynamic placement of
virtual machines for cost optimization in multi-cloud environments,” in High Performance Computing
and Simulation (HPCS), 2011 International Conference on. IEEE, 2011, pp. 1–7.
[13] S. Martello, “Knapsack problems: algorithms and computer implementations,” Wiley-Interscience series
in discrete mathematics and optimization, 1990.
[14] N. Bobroff, A. Kochut, and K. Beaty, “Dynamic placement of virtual machines for managing sla viola-
tions,” in Integrated Network Management, 2007. IM’07. 10th IFIP/IEEE International Symposium on.
IEEE, 2007, pp. 119–128.
[15] W. Shi and B. Hong, “Towards profitable virtual machine placement in the data center,” in Utility and
Cloud Computing (UCC), 2011 Fourth IEEE International Conference on. IEEE, 2011, pp. 138–145.
[16] A. Beloglazov and R. Buyya, “Energy efficient allocation of virtual machines in cloud data centers,”
in Cluster, Cloud and Grid Computing (CCGrid), 2010 10th IEEE/ACM International Conference on.
IEEE, 2010, pp. 577–578.
[17] J. Dong, H. Wang, X. Jin, Y. Li, P. Zhang, and S. Cheng, “Virtual machine placement for improving
energy efficiency and network performance in iaas cloud,” in Distributed Computing Systems Workshops
(ICDCSW), 2013 IEEE 33rd International Conference on. IEEE, 2013, pp. 238–243.
[18] S. Fang, R. Kanagavelu, B.-S. Lee, C. Foh, and K. Aung, “Power-efficient virtual machine placement
and migration in data centers,” in Green Computing and Communications (GreenCom), 2013 IEEE and
Internet of Things (iThings/CPSCom), IEEE International Conference on and IEEE Cyber, Physical and
Social Computing. IEEE, 2013, pp. 1408–1413.
[19] P. N. Sayeedkhan and S. Balaji, “Virtual machine placement based on disk i/o load in cloud,” vol. 5,
pp. 5477–5479, 2014.
[20] S. Wang, Z. Liu, Z. Zheng, Q. Sun, and F. Yang, “Particle swarm optimization for energy-aware vir-
tual machine placement optimization in virtualized data centers,” in Parallel and Distributed Systems
(ICPADS), 2013 International Conference on. IEEE, 2013, pp. 102–109.
[21] N. M. Calcavecchia, O. Biran, E. Hadad, and Y. Moatti, “Vm placement strategies for cloud scenarios,”
in Cloud Computing (CLOUD), 2012 IEEE 5th International Conference on. IEEE, 2012, pp. 852–859.
[22] A. Anand, J. Lakshmi, and S. Nandy, “Virtual machine placement optimization supporting performance
slas,” in Cloud Computing Technology and Science (CloudCom), 2013 IEEE 5th International Conference
on, vol. 1. IEEE, 2013, pp. 298–305.
[23] H. Jin, D. Pan, J. Xu, and N. Pissinou, “Efficient vm placement with multiple deterministic and stochastic
resources in data centers,” in Global Communications Conference (GLOBECOM), 2012 IEEE. IEEE,
2012, pp. 2505–2510.
[24] D. Dong and J. Herbert, “Energy efficient vm placement supported by data analytic service,” in Cluster,
Cloud and Grid Computing (CCGrid), 2013 13th IEEE/ACM International Symposium on. IEEE, 2013,
pp. 648–655.
[25] S.-H. Wang, P. P.-W. Huang, C. H.-P. Wen, and L.-C. Wang, “Eqvmp: Energy-efficient and qos-aware vir-
tual machine placement for software defined datacenter networks,” in Information Networking (ICOIN),
2014 International Conference on. IEEE, 2014, pp. 220–225.
[26] S. Takeda and T. Takemura, “A rank-based vm consolidation method for power saving in datacenters,”
IPSJ Online Transactions, vol. 3, pp. 88–96, 2010.
[27] B. Asgari, M. G. Arani, and S. Jabbehdari, “An efficient approach for resource auto-scaling in cloud
environments,” International Journal of Electrical and Computer Engineering, vol. 6, no. 5, p. 2415,
2016.
[28] A. Mohazabiyeh and K. Amirizadeh, “Energy-aware adaptive four thresholds technique for optimal vir-
tual machine placement,” International Journal of Electrical and Computer Engineering (IJECE), vol. 8,
no. 6, 2018.
BIOGRAPHY OF AUTHORS
Kumaraswamy S is an Associate Professor at Global Academy of Technology. He is a PhD student
at Ramaiah Institute of Technology, India, and holds a Master of Technology from the University of
Mysore (2001). He obtained a Bachelor's Degree in Electrical and Electronics Engineering from
Bangalore University in 2009. He has authored several research articles on Cloud Computing and
Virtualization. His research interests are cloud computing and storage area networks.
Mydhili K Nair has been working as a Professor at Ramaiah Institute of Technology, Bangalore, since
2004. She has a mix of academic and industrial experience, each spanning close to a decade. In the
IT industry, she held various roles ranging from Technical Lead to Project Manager at different IT
companies. She won the prestigious IBM Faculty Award in 2011 for her collaborative research
association with IBM. She has authored many book chapters and Scopus-indexed research papers
with reputed publishers such as Springer, CRC Press, and IEEE. She has been active in IEEE Women
in Engineering forums, organizing international conferences of repute, chairing sessions, and giving
invited talks. She and her students have won many project competitions and best paper awards at the
state and national levels.
IJECE Vol. 9, No. 1, February 2019 : 512 – 524