Power consumption in cloud centers is increasing rapidly due to the popularity of cloud computing. High power consumption not only leads to high operational cost but also to high carbon emissions, which are not environmentally friendly. Cloud centers housing thousands of physical machines (servers) have become commonplace. In many instances, some physical machines host very few active virtual machines; migrating these virtual machines so that lightly loaded physical machines can be shut down, thereby reducing consumed power, has been studied extensively in the literature. However, recent studies have demonstrated that virtual machine migration is usually associated with excessive cost and delay. Hence, a new technique was recently proposed that balances load in cloud centers by migrating the extra tasks of overloaded virtual machines. The effectiveness of this task migration technique with respect to server consolidation has not been properly studied in the literature. In this work, the virtual machine task migration technique is extended to address the server consolidation problem. Empirical results reveal that the proposed technique is highly effective in reducing the power consumed in cloud centers.
IRJET- Time and Resource Efficient Task Scheduling in Cloud Computing Environ... (IRJET Journal)
This document summarizes a research paper that proposes a Task Based Allocation (TBA) algorithm to efficiently schedule tasks in a cloud computing environment. The algorithm aims to minimize makespan (completion time of all tasks) and maximize resource utilization. It first generates an Expected Time to Complete (ETC) matrix that estimates the time each task will take on different virtual machines. It then sorts tasks by length and allocates each task to the VM that minimizes its completion time, updating the VM wait times. The algorithm is evaluated using CloudSim simulation and is shown to reduce makespan, execution time and costs compared to random and first-come, first-served scheduling approaches.
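The allocation loop the summary describes can be sketched as a simple greedy heuristic. The function name, ETC values, and task lengths below are illustrative assumptions, not data from the paper.

```python
# Sketch of the Task Based Allocation (TBA) idea described above:
# sort tasks by length, then assign each to the VM that minimizes
# its completion time, updating per-VM wait (ready) times.

def tba_schedule(task_lengths, etc):
    """task_lengths: list of task sizes; etc[t][v]: expected time of task t on VM v."""
    n_vms = len(etc[0])
    ready = [0.0] * n_vms                     # current wait time of each VM
    order = sorted(range(len(task_lengths)),  # consider longer tasks first
                   key=lambda t: -task_lengths[t])
    assignment = {}
    for t in order:
        # completion time of task t on VM v = VM ready time + ETC(t, v)
        best_vm = min(range(n_vms), key=lambda v: ready[v] + etc[t][v])
        assignment[t] = best_vm
        ready[best_vm] += etc[t][best_vm]
    makespan = max(ready)
    return assignment, makespan

if __name__ == "__main__":
    lengths = [400, 100, 250]
    etc = [[4.0, 2.0], [1.0, 0.5], [2.5, 1.25]]   # 3 tasks x 2 VMs
    print(tba_schedule(lengths, etc))
```

The greedy min-completion-time rule is what drives both the makespan reduction and the balanced VM wait times the evaluation reports.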
This document presents an analytical model for evaluating the performance of cloud centers with a high degree of virtualization and Poisson batch task arrivals. The model accounts for generally distributed task service times, batch sizes, and the deterioration of performance due to increased workload on each physical machine. It allows calculation of key performance indicators like response time, waiting time, queue length, blocking probability, and probability distribution of tasks in the system. The model shows that performance is highly dependent on the variability of service times and batch sizes. Partitioning incoming requests based on these factors may improve performance for large batches and highly variable service times.
This summarizes a research paper that proposes a utilization-based virtual machine (VM) consolidation scheme to improve power efficiency in cloud data centers. The scheme aims to reduce the number of VM migrations and associated overhead by migrating VMs to stable hosts with higher utilization. It defines a performance function that considers power consumption and quality of service violations. The proposed Utilization-based Migration Algorithm (UMA) classifies hosts as overloaded, fully loaded, or underloaded. It computes migration probabilities for VMs on overloaded hosts and consolidates VMs from underloaded hosts to improve overall utilization and minimize the performance function value. The results show UMA reduces migrations by 77.5-82.4% and saves 39.3-42
This document presents a master's thesis project that proposes algorithms for energy-efficient and traffic-aware virtual machine management in cloud computing. It introduces dynamic virtual machine consolidation as a way to reduce power consumption in data centers by migrating VMs between hosts. The author develops new algorithms that improve upon existing approaches by considering both CPU utilization and communication traffic between VMs when consolidating and placing VMs on hosts. The algorithms aim to increase energy efficiency while also reducing communication latency. The performance of the new algorithms is evaluated and shown to significantly reduce energy consumption, VM migrations, and SLA violations compared to existing approaches.
LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTING (ijccsa)
Load balancing techniques in cloud computing can be applied at different levels. There are two main levels: load balancing on physical servers and load balancing on virtual machines. Load balancing on physical servers is the policy of allocating physical servers to virtual machines, while load balancing on virtual machines is the policy of allocating resources from physical servers to the tasks or applications running on the virtual machines. Depending on whether the user's request is for SaaS (Software as a Service), PaaS (Platform as a Service), or IaaS (Infrastructure as a Service), a proper load balancing policy is applied. When receiving tasks, the cloud data center must allocate them efficiently so that response time is minimized and congestion is avoided. Load balancing should also be performed between different data centers in the cloud to ensure minimum transfer time. In this paper, we propose a virtual machine-level load balancing algorithm that aims to improve the average response time and average processing time of the system in the cloud environment. The proposed algorithm is compared to the Avoid Deadlocks [5], Max-min [6], and Throttled [8] algorithms, and the results show that our algorithm achieves optimized response times.
An Enhanced Throttled Load Balancing Approach for Cloud Environment (IRJET Journal)
The document proposes an enhanced throttled load balancing approach for cloud environments. It discusses existing load balancing techniques like round robin, weighted round robin, and throttled approaches. It identifies that existing throttled approaches can lead to overloading as they do not consider task size when assigning tasks to virtual machines. The proposed approach aims to improve performance for cloud users by enhancing the basic throttled mapping approach to better distribute tasks among resources. The approach is evaluated using the CloudAnalyst simulator and results show it performs better than original techniques.
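The size-aware enhancement described above might look like the following sketch, assuming a simple index table of VMs with free capacities. The names, fields, and numbers are hypothetical, not the paper's actual data structures.

```python
# Sketch of the idea above: basic throttled picks the first available VM;
# a size-aware variant also checks that the VM's free capacity covers the
# task size before assigning, avoiding the overloading the paper identifies.

def throttled_assign(task_size, vms):
    """vms: {name: {"busy": bool, "free": capacity}}. Returns chosen VM or None."""
    for name, state in vms.items():           # scan the index table in order
        if not state["busy"] and state["free"] >= task_size:
            state["busy"] = True              # mark allocated, as throttled does
            state["free"] -= task_size
            return name
    return None                               # queue the task until a VM frees up

if __name__ == "__main__":
    vms = {"vm0": {"busy": False, "free": 2},
           "vm1": {"busy": False, "free": 8}}
    print(throttled_assign(5, vms))   # vm0 is available but too small -> vm1
```

Plain throttled would have stopped at vm0; the extra capacity check is the enhancement the summary attributes to the proposed approach.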
Cloud Computing Load Balancing Algorithms Comparison Based Survey (INFOGAIN PUBLICATION)
Cloud computing is an online, network-based computing paradigm that has increased the use of the network, where the capacity of one node may be used by another node. The cloud provides on-demand services over distributed resources such as data, servers, software, and infrastructure on a pay-as-you-go basis. Load balancing is one of the vexing problems in a distributed environment: the service provider's resources must balance the load of client requests. Different load balancing algorithms have been proposed to manage the service provider's resources efficiently and effectively. This paper presents a comparison of various policies used for load balancing.
AN EFFICIENT ALGORITHM FOR THE BURSTING OF SERVICE-BASED APPLICATIONS IN HYB... (Nexgen Technology)
Scheduling of Heterogeneous Tasks in Cloud Computing using Multi Queue (MQ) A... (IRJET Journal)
This document proposes a Multi Queue (MQ) task scheduling algorithm for heterogeneous tasks in cloud computing. It aims to improve upon the Round Robin and Weighted Round Robin algorithms by overcoming their drawbacks. The MQ algorithm splits tasks and resources into separate queues based on size/length and speed. Small tasks are scheduled on slower resources and large tasks on faster resources. The document compares the performance of MQ to Round Robin and Weighted Round Robin algorithms based on makespan, average resource utilization, and load balancing level using CloudSim simulations. The results show that MQ scheduling performs better than the other algorithms in most cases in terms of these metrics.
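The queue-splitting idea above can be sketched as follows. The thresholds, VM speeds, and task lengths are illustrative assumptions; the paper's actual cut-off rules may differ.

```python
# Sketch of the Multi Queue (MQ) idea described above: tasks are split
# into "small" and "large" queues by length, resources into "slow" and
# "fast" queues by speed; small tasks go to slow resources and large
# tasks to fast ones.

def mq_schedule(tasks, speeds, length_cut, speed_cut):
    """tasks: {name: length}; speeds: {vm: MIPS}. Returns (task -> vm, makespan)."""
    small = [t for t, n in tasks.items() if n <= length_cut]
    large = [t for t, n in tasks.items() if n > length_cut]
    slow = [v for v, s in speeds.items() if s <= speed_cut]
    fast = [v for v, s in speeds.items() if s > speed_cut]
    finish = {v: 0.0 for v in speeds}         # accumulated busy time per VM

    def assign(queue, vms):
        placement = {}
        for t in sorted(queue, key=tasks.get, reverse=True):
            # earliest-finishing VM within the matching resource queue
            v = min(vms, key=lambda v: finish[v] + tasks[t] / speeds[v])
            finish[v] += tasks[t] / speeds[v]
            placement[t] = v
        return placement

    out = assign(small, slow)
    out.update(assign(large, fast))
    return out, max(finish.values())

if __name__ == "__main__":
    tasks = {"t1": 100, "t2": 900, "t3": 120, "t4": 800}
    speeds = {"vm_slow": 100, "vm_fast": 500}
    print(mq_schedule(tasks, speeds, length_cut=500, speed_cut=200))
```

Keeping long tasks off slow resources is what lets MQ beat Round Robin on makespan in the comparison the summary reports.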
This document discusses scheduling algorithms for batches of MapReduce jobs in heterogeneous cloud environments with budget and deadline constraints. It proposes two optimization problems: 1) Given a fixed budget B, how to efficiently schedule tasks to minimize workflow completion time without exceeding the budget. 2) Given a fixed deadline D, how to efficiently schedule tasks to minimize monetary cost without missing the deadline. It presents an optimal dynamic programming algorithm for the first problem that runs in O(κB²) time, and two faster greedy algorithms. It also briefly discusses reducing the second problem to a knapsack problem. The goal is to help cloud service providers deploy MapReduce cost-effectively given user constraints.
It is well known that SRPT (shortest remaining processing time) is optimal for minimizing flow time on machines that run one job at a time. However, running one job at a time severely under-utilizes modern systems, where sharing, simultaneous execution, and virtualization-enabled consolidation are a common trend to boost utilization. Such machines, used in modern large data centers and clouds, are powerful enough to run multiple jobs/VMs at a time subject to overall CPU, memory, network, and disk capacity constraints. Motivated by this pr
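The one-job-at-a-time setting in which SRPT is optimal can be made concrete with a minimal preemptive simulation; the job data below is illustrative.

```python
# Minimal sketch of preemptive SRPT on a single one-job-at-a-time machine:
# always run the job with the shortest remaining time, preempting when a
# shorter arrival appears. Returns total flow time (completion - arrival).

import heapq

def srpt_total_flow_time(jobs):
    jobs = sorted(jobs)                       # (arrival, processing), by arrival
    heap = []                                 # (remaining_time, arrival)
    t, i, total_flow = 0.0, 0, 0.0
    while i < len(jobs) or heap:
        if not heap:                          # machine idle: jump to next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, (jobs[i][1], jobs[i][0]))
            i += 1
        rem, arr = heapq.heappop(heap)
        # run the shortest-remaining job until it finishes or a new job arrives
        horizon = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, horizon - t)
        t += run
        rem -= run
        if rem > 1e-12:
            heapq.heappush(heap, (rem, arr))  # preempted, back into the pool
        else:
            total_flow += t - arr             # job done: add its flow time
    return total_flow

if __name__ == "__main__":
    print(srpt_total_flow_time([(0, 10), (1, 2), (2, 1)]))
```

Here the long job arriving first is preempted twice so the two short jobs finish early, which is exactly the behavior that makes SRPT flow-time optimal on a single machine.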
Intelligent Workload Management in Virtualized Cloud Environment (IJTET Journal)
Abstract— Cloud computing is an emerging high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and an elastic computational design. To improve the overall performance of the cloud environment under a deadline constraint, a task scheduling model is established for reducing the system's power consumption and execution time while improving the profit of service providers. For this scheduling model, a solution technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, mutation operators, and the method of sorting Pareto solutions. The model is implemented on the open-source cloud computing simulation platform CloudSim; compared with existing scheduling algorithms, the results show that the proposed algorithm can obtain a better solution, balancing the load across multiple objectives.
IRJET- Advance Approach for Load Balancing in Cloud Computing using (HMSO) Hy... (IRJET Journal)
This document proposes a new hybrid multi-swarm optimization (HMSO) algorithm for load balancing in cloud computing. It aims to minimize response time and costs while improving resource utilization and customer satisfaction. The HMSO algorithm uses multi-level particle swarm optimization to find an optimal resource allocation solution. Simulation results show that the proposed HMSO technique reduces response time and datacenter costs compared to other algorithms. It also achieves a more balanced load distribution across resources.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD... (ijgca)
This document discusses modeling cloud computing data centers as queuing systems to analyze performance factors. It begins with background on cloud computing and queuing theory. It then models a cloud data center as an [(M/G/1) : (∞/GDMODEL)] queuing system with single task arrivals and infinite task buffer capacity. Key performance factors analyzed include mean number of tasks in the system. Analytical results are obtained by solving the model to estimate response time distribution and other metrics. The modeling approach allows determining the relationship between performance and number of servers/buffer size.
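For the underlying M/G/1 queue, the mean performance indicators mentioned above follow from the Pollaczek-Khinchine formula. The sketch below uses illustrative parameters and is not the paper's full [(M/G/1) : (∞/GD)] model.

```python
# Mean performance of an M/G/1 queue via the Pollaczek-Khinchine formula,
# the kind of closed form such queuing models build on.
# lam = arrival rate, es = E[S], es2 = E[S^2].

def mg1_means(lam, es, es2):
    rho = lam * es                            # server utilization, must be < 1
    if rho >= 1:
        raise ValueError("unstable queue: rho >= 1")
    wq = lam * es2 / (2 * (1 - rho))          # mean waiting time in queue
    w = wq + es                               # mean response time
    l = lam * w                               # mean number in system (Little's law)
    return rho, wq, w, l

if __name__ == "__main__":
    # Exponential service with mean 1/mu has E[S^2] = 2/mu^2,
    # so this case should reduce to the familiar M/M/1 results.
    lam, mu = 0.5, 1.0
    print(mg1_means(lam, 1 / mu, 2 / mu**2))
```

With lam = 0.5 and mu = 1 this yields utilization 0.5 and mean number in system 1, matching the M/M/1 value rho/(1 - rho), a useful sanity check on the formula.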
This document proposes a fair scheduling algorithm with dynamic load balancing for grid computing. It begins by introducing grid computing and the need for efficient load balancing algorithms to distribute tasks. It then describes dynamic load balancing approaches, including information, triggering, resource type, location, and selection policies. The proposed algorithm uses a fair scheduling approach that assigns tasks to processors based on their estimated fair completion times to ensure tasks receive equal shares of computing resources. It also includes a dynamic load balancing component that migrates tasks between processors to maintain balanced loads across all resources. Simulation results demonstrated the algorithm achieved balanced loads across processors and reduced overall task completion times.
A profit maximization scheme with guaranteed quality of service in cloud comp... (Pvrtechnologies Nellore)
- The document proposes a double resource renting scheme for cloud service providers that combines short-term and long-term server renting. This aims to guarantee quality of service for all requests while reducing resource waste.
- A profit maximization problem is formulated to determine the optimal configuration of servers. Solutions are obtained for ideal and actual scenarios to maximize profit compared to a single renting scheme.
- Comparisons show the double renting scheme can guarantee complete service quality and obtain more profit than a single renting scheme that does not ensure quality of service.
This document discusses various load balancing algorithms that can be applied in cloud computing. It begins with an introduction to cloud computing models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It then discusses the goals of load balancing in cloud computing. The main part of the document describes and provides examples of several load balancing algorithms: Round Robin, Opportunistic Load Balancing, Minimum Completion Time, and Minimum Execution Time. For each algorithm, it explains the basic approach and provides an example to illustrate how it works.
A MULTI-OBJECTIVE PERSPECTIVE FOR OPERATOR SCHEDULING USING FINEGRAINED DVS A... (VLSICS Design)
The stringent power budgets of fine-grained power-managed digital integrated circuits have driven chip designers to optimize power at the cost of area and delay, which were the traditional cost criteria for circuit optimization. The emerging scenario motivates us to revisit the classical operator scheduling problem under the availability of DVFS-enabled functional units that can trade off cycles with power. We study the design space defined by this trade-off and present a branch-and-bound (B/B) algorithm to explore this state space and report the Pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and is able to attain sufficient area and power gains for complex benchmarks when timing constraints are relaxed by a sufficient amount. Experimental results show that the algorithm, operating without any user constraint (area/power), is able to solve the problem for most available benchmarks, and the use of power budget or area budget constraints leads to significant performance gains.
Load Balancing in Auto Scaling Enabled Cloud Environments (neirew J)
Cloud computing is growing in popularity and has been continuously updated with improvements. Auto scaling is one such improvement, helping to maintain the availability of a customer's subscribed cloud system. How an auto scaling mechanism interacts with the many existing mechanisms in a cloud system is an issue that needs to be considered, because a new part added to a stable system rarely comes without drawbacks. In this paper, we consider how existing load balancing and auto scaling impact each other. For this purpose, we have modeled a cloud system with an auto scaler and a load balancer and implemented simulations based on the constructed model. Based on the results of the computer simulations, we propose guidelines for choosing load balancers for a subscribed cloud system with an auto scaling service.
Iaetsd improved load balancing model based on... (Iaetsd)
This document proposes an improved load balancing model for cloud computing based on partitioning. It analyzes static and dynamic load balancing schemes using the CloudAnalyst tool. Static schemes like round robin performed similarly regardless of system load. Dynamic schemes analyzed current system status and allocated jobs accordingly. Analysis showed dynamic schemes had better response times than static schemes, with throttled and equally spread current execution performing best by balancing load based on system conditions. The proposed model implements multiple dynamic algorithms to further reduce response times and improve user satisfaction in cloud systems.
A PROFIT MAXIMIZATION SCHEME WITH GUARANTEED QUALITY OF SERVICE IN CLOUD COMP... (Nexgen Technology)
Enhancing minimal virtual machine migration in cloud environment (eSAT Publishing House)
Hierarchical SLA-based Service Selection for Multi-Cloud Environments (Soodeh Farokhi)
Cloud computing popularity is growing rapidly, and consequently the number of companies offering their services in the form of Software-as-a-Service (SaaS) or Infrastructure-as-a-Service (IaaS) is increasing. The diversity and usage benefits of IaaS offers are encouraging SaaS providers to lease resources from the cloud instead of operating their own data centers. However, the question remains how to, on the one hand, exploit cloud benefits to reduce maintenance overheads and, on the other hand, maximize the satisfaction of customers with a wide range of requirements. The complexity of addressing these issues prevents many SaaS providers from benefiting from cloud infrastructures. In this paper, we propose the HS4MC approach for automatic service selection by considering the SLA claims of SaaS providers. The novelty of our approach lies in the use of prospect theory for service ranking, which represents a natural choice for scoring comparable services according to users' preferences. The HS4MC approach first constructs a set of SLAs based on the given accumulated SaaS provider requirements. Then, it selects a set of services that best fulfills the SLAs. We evaluate our approach in a simulated environment by comparing it with a state-of-the-art utility-based algorithm. The evaluation results show that our approach selects services that more effectively satisfy the SLAs.
This document provides a summary of a student's seminar paper on resource scheduling algorithms. The paper discusses the need for resource scheduling algorithms in cloud computing environments. It then describes several types of algorithms commonly used for resource scheduling, including genetic algorithms, bee algorithms, ant colony algorithms, workflow algorithms, and load balancing algorithms. For each algorithm type, it provides a brief introduction, overview of the basic steps or concepts, and some examples of applications where the algorithm has been used. The paper was submitted by a student named Shilpa Damor to fulfill requirements for a degree in information technology.
Bin packing algorithms for virtual machine placement in cloud computing: a re... (IJECEIAES)
Cloud computing has become more commercial and familiar, and cloud data centers face huge challenges in maintaining QoS and keeping cloud performance high. The placement of virtual machines among physical machines is significant in optimizing cloud performance. Bin packing based algorithms are the most widely used approach to virtual machine placement (VMP). This paper presents a rigorous survey and comparison of bin packing based VMP methods for the cloud computing environment. Various methods are discussed, and the VM placement factors in each method are analyzed to understand its advantages and drawbacks. The scope of future research and studies is also highlighted.
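First-fit decreasing (FFD), the classic heuristic most bin packing based VMP schemes build on, can be sketched as follows; the single-dimension CPU demands and host capacity are illustrative.

```python
# First-fit decreasing for VM placement: sort VMs by demand (largest
# first) and place each in the first host with enough free capacity,
# opening a new host only when none fits. Fewer open hosts = less power.

def ffd_place(vm_demands, host_capacity):
    """vm_demands: {vm: demand}. Returns (vm -> host index, hosts used)."""
    hosts = []                                # remaining capacity per open host
    placement = {}
    for vm in sorted(vm_demands, key=vm_demands.get, reverse=True):
        for i, free in enumerate(hosts):
            if vm_demands[vm] <= free:        # first host that still fits
                hosts[i] -= vm_demands[vm]
                placement[vm] = i
                break
        else:                                 # no host fits: open a new one
            hosts.append(host_capacity - vm_demands[vm])
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

if __name__ == "__main__":
    demands = {"a": 6, "b": 5, "c": 4, "d": 3, "e": 2}   # CPU units
    print(ffd_place(demands, host_capacity=10))
```

Real VMP methods extend this single-dimension sketch with multiple resource dimensions (CPU, memory, network), which is where the surveyed algorithms chiefly differ.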
A profit maximization scheme with guaranteed quality of service in cloud comp... (syeda yasmeen)
The document proposes a double resource renting scheme for cloud service providers to maximize profits while guaranteeing quality of service. It involves combining short-term and long-term server renting to adapt to varying demand and reduce waste. The system is modeled as an M/M/m+D queuing model. An optimization problem is formulated to determine the optimal server configuration. Comparisons show the double renting scheme achieves higher profits compared to single renting while guaranteeing quality of service.
A profit maximization scheme with guaranteed quality of service in cloud comp... (Shakas Technologies)
A HYBRID CLOUD APPROACH FOR SECURE AUTHORIZED DEDUPLICATION
ABSTRACT:
Data deduplication is one of the most important data compression techniques for eliminating duplicate copies of repeated data, and it has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt data before outsourcing.
This document summarizes an article from the International Journal of Research in Advent Technology that proposes algorithms for energy-aware resource allocation in datacenters with minimized virtual machine migrations. It discusses how virtualization allows servers to be consolidated onto fewer physical machines to reduce hardware and power consumption. The algorithms aim to dynamically reallocate VMs according to current resource needs while ensuring quality of service and reliability, with the goal of minimizing the number of active physical nodes and switching idle nodes to a low-power state. It describes two proposed VM selection policies - the Minimum Migrations policy that selects the minimum number of VMs to migrate from overloaded hosts, and the Highest Potential Growth policy that migrates VMs with the lowest current CPU usage to prevent future
AN EFFICIENT ALGORITHM FOR THE BURSTING OF SERVICE-BASED APPLICATIONS IN HYB...Nexgen Technology
bulk ieee projects in pondicherry,ieee projects in pondicherry,final year ieee projects in pondicherry
Nexgen Technology Address:
Nexgen Technology
No :66,4th cross,Venkata nagar,
Near SBI ATM,
Puducherry.
Email Id: praveen@nexgenproject.com.
www.nexgenproject.com
Mobile: 9751442511,9791938249
Telephone: 0413-2211159.
NEXGEN TECHNOLOGY as an efficient Software Training Center located at Pondicherry with IT Training on IEEE Projects in Android,IEEE IT B.Tech Student Projects, Android Projects Training with Placements Pondicherry, IEEE projects in pondicherry, final IEEE Projects in Pondicherry , MCA, BTech, BCA Projects in Pondicherry, Bulk IEEE PROJECTS IN Pondicherry.So far we have reached almost all engineering colleges located in Pondicherry and around 90km
Scheduling of Heterogeneous Tasks in Cloud Computing using Multi Queue (MQ) A...IRJET Journal
This document proposes a Multi Queue (MQ) task scheduling algorithm for heterogeneous tasks in cloud computing. It aims to improve upon the Round Robin and Weighted Round Robin algorithms by overcoming their drawbacks. The MQ algorithm splits tasks and resources into separate queues based on size/length and speed. Small tasks are scheduled on slower resources and large tasks on faster resources. The document compares the performance of MQ to Round Robin and Weighted Round Robin algorithms based on makespan, average resource utilization, and load balancing level using CloudSim simulations. The results show that MQ scheduling performs better than the other algorithms in most cases in terms of these metrics.
This document discusses scheduling algorithms for batches of MapReduce jobs in heterogeneous cloud environments with budget and deadline constraints. It poses two optimization problems: 1) given a fixed budget B, how to efficiently schedule tasks to minimize workflow completion time without exceeding the budget, and 2) given a fixed deadline D, how to efficiently schedule tasks to minimize monetary cost without missing the deadline. It presents an optimal dynamic programming algorithm for the first problem that runs in O(κB²) time, plus two faster greedy algorithms, and briefly discusses reducing the second problem to a knapsack problem. The goal is to help cloud service providers deploy MapReduce cost-effectively given user constraints.
It is well-known that SRPT is optimal for minimizing flow time on machines that run one job at a time. However, running one job at a time severely under-utilizes modern systems, where sharing, simultaneous execution, and virtualization-enabled consolidation are a common trend to boost utilization. Such machines, used in modern large data centers and clouds, are powerful enough to run multiple jobs/VMs at a time subject to overall CPU, memory, network, and disk capacity constraints.
Motivated by this pr
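The SRPT (Shortest Remaining Processing Time) baseline discussed above can be made concrete with a small single-machine simulation; a minimal sketch, assuming jobs arrive as (arrival, size) pairs and preemption is free:

```python
import heapq

def srpt_total_flow_time(jobs):
    """Preemptive SRPT on one machine. jobs: list of (arrival, size).
    Returns total flow time, i.e. sum of (completion - arrival)."""
    jobs = sorted(jobs)                      # order by arrival time
    heap, t, i, total = [], 0.0, 0, 0.0
    while heap or i < len(jobs):
        if not heap:                         # machine idle: jump ahead
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, (jobs[i][1], jobs[i][0]))  # (remaining, arrival)
            i += 1
        rem, arr = heapq.heappop(heap)       # job with shortest remaining time
        next_arr = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, next_arr - t)         # run until done or next arrival
        t += run
        rem -= run
        if rem > 1e-12:
            heapq.heappush(heap, (rem, arr)) # preempted: back in the pool
        else:
            total += t - arr                 # finished: accumulate flow time
    return total
```

For jobs [(0, 3), (1, 1)], SRPT preempts the long job when the short one arrives, giving total flow time 5; FCFS on the same input gives 6, illustrating the optimality claim.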
Intelligent Workload Management in Virtualized Cloud EnvironmentIJTET Journal
Abstract— Cloud computing is an emerging high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and an elastic computational design. To improve the overall performance of cloud computing under a deadline constraint, a task scheduling model is formulated to reduce the system's power consumption and increase the profit of service providers. For this scheduling model, a solving technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, selection operators, and the scheme for sorting Pareto solutions. The model is built on the open-source cloud computing simulation platform CloudSim; compared with existing scheduling algorithms, the results show that the proposed algorithm obtains a better solution, balancing the load across multiple objectives.
IRJET- Advance Approach for Load Balancing in Cloud Computing using (HMSO) Hy...IRJET Journal
This document proposes a new hybrid multi-swarm optimization (HMSO) algorithm for load balancing in cloud computing. It aims to minimize response time and costs while improving resource utilization and customer satisfaction. The HMSO algorithm uses multi-level particle swarm optimization to find an optimal resource allocation solution. Simulation results show that the proposed HMSO technique reduces response time and datacenter costs compared to other algorithms. It also achieves a more balanced load distribution across resources.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD...ijgca
This document discusses modeling cloud computing data centers as queuing systems to analyze performance factors. It begins with background on cloud computing and queuing theory. It then models a cloud data center as an [(M/G/1) : (∞/GDMODEL)] queuing system with single task arrivals and infinite task buffer capacity. Key performance factors analyzed include mean number of tasks in the system. Analytical results are obtained by solving the model to estimate response time distribution and other metrics. The modeling approach allows determining the relationship between performance and number of servers/buffer size.
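The key performance factor named above — the mean number of tasks in an M/G/1 system — has a closed form, the Pollaczek–Khinchine formula. A small sketch (the function name is illustrative; the paper's [(M/G/1) : (∞/GD)] model adds further detail beyond this basic formula):

```python
def mg1_mean_in_system(lam, mean_s, var_s):
    """Pollaczek-Khinchine mean number of tasks in an M/G/1 queue.
    lam: Poisson arrival rate; mean_s, var_s: service-time mean/variance."""
    rho = lam * mean_s                    # server utilization
    assert rho < 1, "system must be stable (rho < 1)"
    cs2 = var_s / mean_s ** 2             # squared coefficient of variation
    # L = rho + rho^2 (1 + C_s^2) / (2 (1 - rho))
    return rho + rho ** 2 * (1 + cs2) / (2 * (1 - rho))
```

With exponential service (var = mean², so C² = 1) the formula collapses to the familiar M/M/1 result ρ/(1−ρ); deterministic service (var = 0) halves the queueing term, showing how service-time variability drives congestion.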
This document proposes a fair scheduling algorithm with dynamic load balancing for grid computing. It begins by introducing grid computing and the need for efficient load balancing algorithms to distribute tasks. It then describes dynamic load balancing approaches, including information, triggering, resource type, location, and selection policies. The proposed algorithm uses a fair scheduling approach that assigns tasks to processors based on their estimated fair completion times to ensure tasks receive equal shares of computing resources. It also includes a dynamic load balancing component that migrates tasks between processors to maintain balanced loads across all resources. Simulation results demonstrated the algorithm achieved balanced loads across processors and reduced overall task completion times.
A profit maximization scheme with guaranteed quality of service in cloud comp...Pvrtechnologies Nellore
- The document proposes a double resource renting scheme for cloud service providers that combines short-term and long-term server renting. This aims to guarantee quality of service for all requests while reducing resource waste.
- A profit maximization problem is formulated to determine the optimal configuration of servers. Solutions are obtained for ideal and actual scenarios to maximize profit compared to a single renting scheme.
- Comparisons show the double renting scheme can guarantee complete service quality and obtain more profit than a single renting scheme that does not ensure quality of service.
This document discusses various load balancing algorithms that can be applied in cloud computing. It begins with an introduction to cloud computing models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It then discusses the goals of load balancing in cloud computing. The main part of the document describes and provides examples of several load balancing algorithms: Round Robin, Opportunistic Load Balancing, Minimum Completion Time, and Minimum Execution Time. For each algorithm, it explains the basic approach and provides an example to illustrate how it works.
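Of the algorithms listed, Minimum Completion Time is the easiest to show compactly: each task goes to the VM where it would finish earliest, counting the VM's current backlog. A minimal sketch with assumed inputs (instruction counts and MIPS ratings):

```python
# Minimum Completion Time (MCT) heuristic sketch; names are illustrative.

def mct_schedule(task_lengths, vm_speeds):
    """task_lengths: instruction counts; vm_speeds: MIPS per VM.
    Returns (assignment: VM index per task, makespan)."""
    ready = [0.0] * len(vm_speeds)       # when each VM becomes free
    assignment = []
    for length in task_lengths:
        # completion time on VM v = current ready time + execution time
        best = min(range(len(vm_speeds)),
                   key=lambda v: ready[v] + length / vm_speeds[v])
        ready[best] += length / vm_speeds[best]
        assignment.append(best)
    return assignment, max(ready)
```

Unlike Minimum Execution Time, which ignores backlog and always picks the fastest VM, MCT spills work onto slower VMs once the fast ones queue up, which is what balances the load.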
A MULTI-OBJECTIVE PERSPECTIVE FOR OPERATOR SCHEDULING USING FINEGRAINED DVS A...VLSICS Design
The stringent power budget of fine grained power managed digital integrated circuits have driven chip designers to optimize power at the cost of area and delay, which were the traditional cost criteria for circuit optimization. The emerging scenario motivates us to revisit the classical operator scheduling problem under the availability of DVFS enabled functional units that can trade-off cycles with power. We study the design space defined due to this trade-off and present a branch-and-bound(B/B) algorithm to explore this state space and report the pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and is able to attain sufficient area and power gains for complex benchmarks when timing constraints are relaxed by sufficient amount. Experimental results show that the algorithm that operates without any user constraint(area/power) is able to solve the problem for mostavailable benchmarks, and the use of power budget or area budget constraints leads to significant performance gain.
Load Balancing in Auto Scaling Enabled Cloud Environmentsneirew J
Cloud computing is growing in popularity and is continuously being updated with improvements. Auto scaling is one such improvement, helping to maintain the availability of a customer's subscribed cloud system. How an auto scaling mechanism interacts with the many mechanisms already present in a cloud system is an issue that needs to be considered, because a new component rarely comes without drawbacks when added to a stable system. In this paper, we consider how existing load balancing and auto scaling affect each other. For this purpose, we model a cloud system with an auto scaler and a load balancer and run simulations based on the constructed model. Based on the simulation results, we propose guidelines for choosing load balancers for a subscribed cloud system with an auto scaling service.
Iaetsd improved load balancing model based on partitioningIaetsd Iaetsd
This document proposes an improved load balancing model for cloud computing based on partitioning. It analyzes static and dynamic load balancing schemes using the CloudAnalyst tool. Static schemes like round robin performed similarly regardless of system load. Dynamic schemes analyzed current system status and allocated jobs accordingly. Analysis showed dynamic schemes had better response times than static schemes, with throttled and equally spread current execution performing best by balancing load based on system conditions. The proposed model implements multiple dynamic algorithms to further reduce response times and improve user satisfaction in cloud systems.
A PROFIT MAXIMIZATION SCHEME WITH GUARANTEED QUALITY OF SERVICE IN CLOUD COMP...Nexgen Technology
Enhancing minimal virtual machine migration in cloud environmenteSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Hierarchical SLA-based Service Selection for Multi-Cloud EnvironmentsSoodeh Farokhi
Cloud computing popularity is growing rapidly, and consequently the number of companies offering their services as Software-as-a-Service (SaaS) or Infrastructure-as-a-Service (IaaS) is increasing. The diversity and usage benefits of IaaS offers encourage SaaS providers to lease resources from the Cloud instead of operating their own data centers. However, the question remains how to, on the one hand, exploit Cloud benefits to lower maintenance overheads and, on the other hand, maximize the satisfaction of customers with a wide range of requirements. The complexity of addressing these issues prevents many SaaS providers from benefiting from Cloud infrastructures. In this paper, we propose the HS4MC approach for automatic service selection that considers the SLA claims of SaaS providers. The novelty of our approach lies in the use of prospect theory for service ranking, a natural choice for scoring comparable services according to users' preferences. The HS4MC approach first constructs a set of SLAs based on the accumulated SaaS provider requirements. Then, it selects the set of services that best fulfills the SLAs. We evaluate our approach in a simulated environment by comparing it with a state-of-the-art utility-based algorithm. The evaluation results show that our approach selects services that satisfy the SLAs more effectively.
This document provides a summary of a student's seminar paper on resource scheduling algorithms. The paper discusses the need for resource scheduling algorithms in cloud computing environments. It then describes several types of algorithms commonly used for resource scheduling, including genetic algorithms, bee algorithms, ant colony algorithms, workflow algorithms, and load balancing algorithms. For each algorithm type, it provides a brief introduction, overview of the basic steps or concepts, and some examples of applications where the algorithm has been used. The paper was submitted by a student named Shilpa Damor to fulfill requirements for a degree in information technology.
Bin packing algorithms for virtual machine placement in cloud computing: a re...IJECEIAES
Cloud computing has become more commercial and familiar, and Cloud data centers face huge challenges in maintaining QoS and keeping Cloud performance high. The placement of virtual machines among physical machines is significant in optimizing Cloud performance, and bin-packing-based algorithms are the most widely used approach to virtual machine placement (VMP). This paper presents a rigorous survey and comparison of bin-packing-based VMP methods for the Cloud computing environment. Various methods are discussed, and the VM placement factors in each method are analyzed to understand its advantages and drawbacks. The scope of future research and studies is also highlighted.
A profit maximization scheme with guaranteed quality of service in cloud comp...syeda yasmeen
The document proposes a double resource renting scheme for cloud service providers to maximize profits while guaranteeing quality of service. It involves combining short-term and long-term server renting to adapt to varying demand and reduce waste. The system is modeled as an M/M/m+D queuing model. An optimization problem is formulated to determine the optimal server configuration. Comparisons show the double renting scheme achieves higher profits compared to single renting while guaranteeing quality of service.
A profit maximization scheme with guaranteed quality of service in cloud comp...Shakas Technologies
A HYBRID CLOUD APPROACH FOR SECURE AUTHORIZED DEDUPLICATION
ABSTRACT:
Data deduplication is one of the most important data compression techniques for eliminating duplicate copies of repeated data, and it has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing.
LOAD BALANCING IN AUTO SCALING-ENABLED CLOUD ENVIRONMENTSijccsa
This document discusses power-aware computing in cloud environments. It identifies high power consumption as a major challenge for data centers and explores several techniques to reduce it, including: virtualization to consolidate servers; migration of virtual machines between servers; and algorithms like bin packing and dynamic voltage scaling to optimize resource allocation. The key idea is to improve energy efficiency by running fewer physical servers and dynamically powering down unused servers through server consolidation using virtualization and live migration of virtual machines. This allows jobs to be allocated to servers that consume less power, reducing overall data center power usage and costs.
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTINGijdpsjournal
Cloud computing has become an ideal computing paradigm for scientific and commercial applications, and the increased availability of cloud models and allied development models makes the cloud computing environment easier to use. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency: an optimal Dynamic Voltage and Frequency Scaling (DVFS) based strategy of task allocation can minimize overall energy consumption and meet the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which degrades performance. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the energy and time consumed during computation, reconfiguration, and communication. Our proposed model confirms its effectiveness in implementation, scalability, power consumption, and execution time with respect to other existing approaches.
International Refereed Journal of Engineering and Science (IRJES)irjes
International Refereed Journal of Engineering and Science (IRJES) is a leading international journal for the publication of new ideas, state-of-the-art research results, and fundamental advances in all aspects of Engineering and Science. IRJES is an open-access, peer-reviewed international journal whose primary objective is to provide the academic community and industry a venue for the submission of original research and applications.
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
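The threshold rule summarized in point 2) can be sketched as follows. This is a hedged reading under assumptions — task sizes, the capacity model, and all names are invented for illustration; the actual policy also weighs bandwidth between data centers:

```python
# Sketch of the saturation-threshold dispatch rule: bind tasks locally
# while the data center's load stays below the threshold, otherwise
# forward them to the next data center. All names are assumptions.

def dispatch(tasks, capacity, threshold):
    """tasks: list of task sizes. Returns (local, migrated) size lists."""
    local, migrated, load = [], [], 0.0
    for size in tasks:
        if (load + size) / capacity <= threshold:
            load += size
            local.append(size)           # bind to a VM in this data center
        else:
            migrated.append(size)        # migrate to the next data center
    return local, migrated
```

Keeping the local load under the saturation point is what avoids the queueing delay that would otherwise dominate completion time on an overloaded data center.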
IRJET- A Statistical Approach Towards Energy Saving in Cloud ComputingIRJET Journal
This document proposes a statistical approach to save energy in cloud computing through predictive monitoring and optimization techniques. It discusses using Gaussian process regression to predict infrastructure workload and then applying convex optimization to determine the optimal subset of physical machines needed. Virtual machines would be migrated to this subset and idle physical machines could then be powered off to reduce energy consumption while maintaining system performance. An evaluation using 29 days of Google trace data showed the potential for significant power savings without affecting quality of service.
Migration Control in Cloud Computing to Reduce the SLA Violationrahulmonikasharma
The requisition of cloud-based services is ever more prominent because of the enormous benefits of the cloud, such as pay-as-you-use flexibility, scalability, and low upfront cost. Day by day, due to the growing number of cloud consumers, the load on datacenters is also increasing. Various load distribution and dynamic load balancing approaches are followed in datacenters to optimize resource utilization so that performance can be maintained under increased load. Virtual machine (VM) migration is primarily used to implement dynamic load balancing in datacenters, but poorly designed dynamic VM migration policies may negate its benefits: VM migration overheads result in violations of the service level agreement (SLA) in the cloud environment. In this paper, an extended VM migration control model is proposed to minimize SLA violations while controlling the energy consumption of the datacenter during VM migration. An execution-boundary threshold parameter is used to extend an existing VM migration control model. The proposed model is tested through extensive simulations using the CloudSim toolkit executing real-world workloads. Results are obtained in terms of the number of SLA violations while controlling the energy consumption in the datacenter, and show that the proposed model achieves better performance than the existing model.
PROPOSED LOAD BALANCING ALGORITHM TO REDUCE RESPONSE TIME AND PROCESSING TIME...IJCNCJournal
Cloud computing is a new technology that brings new challenges to organizations around the world. Improving response time for user requests on cloud computing is a critical issue in combating bottlenecks, and bandwidth to and from cloud service providers is one such bottleneck. With the rapid growth in the scale and number of applications, this access is often threatened by overload. Therefore, in this paper we propose a Throttled Modified Algorithm (TMA) for improving the response time of VMs on cloud computing to improve performance for end users. We have simulated the proposed algorithm with the CloudAnalyst simulation tool, and the algorithm improves the response times and processing time of the cloud data center.
THRESHOLD BASED VM PLACEMENT TECHNIQUE FOR LOAD BALANCED RESOURCE PROVISIONIN...IJCNCJournal
The load-unbalancing issue is a multi-variable, multi-constraint problem that degrades the performance and efficiency of computing resources. Load balancing techniques provide solutions for the two undesirable situations of overloading and underloading. Cloud computing uses scheduling and load balancing for a virtualized environment and resource sharing in the cloud infrastructure, and these two factors must be handled in an optimized way to achieve ideal resource sharing. Hence, efficient resource reservation is required to ensure load optimization in the cloud. This work presents an integrated resource reservation and load balancing algorithm for effective cloud provisioning. The strategy builds a Priority-based Resource Scheduling Model to obtain resource reservation with threshold-based load balancing, improving efficiency in the cloud framework. Utilization of virtual machines is then increased through suitable workload adjustment, by dynamically picking a job from the submitted jobs using the Priority-based Resource Scheduling Model. Experimental evaluations show that the proposed scheme gives better results, reducing execution time with minimum resource cost and improved resource utilization under dynamic resource provisioning conditions.
Load Balancing in Cloud Computing Through Virtual Machine PlacementIRJET Journal
This document discusses load balancing in cloud computing through virtual machine placement. It proposes using a binary search tree approach to map virtual machines to host machines in a way that optimizes resource utilization, minimizes resource allocation time, and reduces violations of service level agreements. The approach is analyzed using the CloudSim simulator and compared to other placement strategies. The document provides background on resource allocation, types of virtual machine placement algorithms, and related work on power-aware and energy-efficient placement strategies.
Optimizing the placement of cloud data center in virtualized environmentIJECEIAES
In cloud mobile networks, precise assessment of the position of the virtualization-powered cloud center would improve capacity, latency, and energy efficiency (EEf). This paper uses Monte Carlo oriented particle swarm optimization (PSO) and a genetic algorithm (GA) to, first, obtain the optimal number of virtual machines (VMs) that maximizes the EEf of the mobile cloud center and, second, optimize the position of the mobile data center. To support this examination, a power evaluation framework is proposed to model the power utilization of a virtualized server hosting a number of VMs. In addition, the total power consumption of the network is examined, including the data center and radio units (RUs). This evaluation is based on linear modelling of the network parameters, such as resource blocks, number of VMs, transmitted and received powers, and overhead power consumption. Finally, the EEf is constrained by several quality-of-service (QoS) metrics, including the number of resource blocks, total latency, and the minimum user data rate.
Reliable and efficient webserver management for task scheduling in edge-cloud...IJECEIAES
Managing cloud webservers to execute workflows while meeting quality-of-service (QoS) prerequisites in a distributed cloud environment has been a challenging task, even though a body of work exists on scheduling workflows in heterogeneous cloud environments. Moreover, rapid developments in cloud computing such as edge-cloud computing create new ways to schedule workflows in a heterogeneous cloud environment to process different tasks, such as IoT, event-driven applications, and various network applications. Current workflow scheduling methods have failed to provide good trade-offs between reliable performance and minimal delay. In this paper, a novel webserver resource management framework, the reliable and efficient webserver management (REWM) framework, is presented for the edge-cloud environment. Experiments conducted on complex bioinformatics workflows show a significant reduction in cost and energy by the proposed REWM in comparison with standard webserver management methodology.
Cloud computing gives on-demand access to computing resources in a metered and dynamically adapted way; it lets the client access fast and flexible resources through virtualization and is widely adaptable to various applications. To assure productive computation, task scheduling is very important in a cloud infrastructure environment. The main aim of task execution is to reduce execution time and conserve infrastructure; for large applications, workflow scheduling has drawn considerable attention in both business and scientific areas. Hence, in this research work, we design and develop an optimized load balancing in parallel computation mechanism, optimal load balancing in parallel computing (OLBP), to distribute the load: first the different parameters of the workload are computed, and then the loads are distributed. The OLBP mechanism takes makespan time and energy as constraints, and task offloading is done considering server speed. This balances the workflow; the OLBP mechanism is evaluated using the CyberShake workflow dataset and outperforms existing workflow mechanisms.
Similar to Server Consolidation through Virtual Machine Task Migration to achieve Green Cloud (20)
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
A Comprehensive Guide to DeFi Development Services in 2024Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Letter and Document Automation for Bonterra Impact Management (fka Social Sol...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Nordic Marketo Engage User Group_June 13_ 2024.pptx
Server Consolidation through Virtual Machine Task Migration to achieve Green Cloud
Geetha Megharaj
Associate Professor
Department of CSE
Sri Krishna Institute of Technology
Bengaluru, India
E-mail: geethagvit@yahoo.com
Mohan G. Kabadi
Professor and HOD Department of CSE
Presidency University
Bengaluru, India
E-mail: kabadimohan60@gmail.com
Abstract—Power consumption in cloud centers is increasing rapidly due to the popularity of Cloud Computing. High power consumption not only leads to high operational cost, but also to high carbon emissions, which is not environment friendly. Cloud Centers containing thousands of Physical Machines/Servers are becoming commonplace. In many instances, some Physical Machines host very few active Virtual Machines; migrating these Virtual Machines so that lightly loaded Physical Machines can be shut down, thereby reducing the consumed power, has been extensively studied in the literature. However, recent studies have demonstrated that migration of Virtual Machines is usually associated with excessive cost and delay. Hence, a new technique was recently proposed that achieves load balancing in cloud centers by migrating the extra tasks of overloaded Virtual Machines. The effectiveness of this task migration technique with respect to Server Consolidation has not been properly studied in the literature. In this work, the Virtual Machine task migration technique is extended to address the Server Consolidation issue. Empirical results reveal the excellent effectiveness of the proposed technique in reducing the power consumed in Cloud Centers.
Keywords–Cloud Center; Server Consolidation; Virtual Machines; Task Migration
I. INTRODUCTION
The Cloud Center (CC) is a computational resource repository [1] that provides on-demand computational services to clients. The computational servers in a CC are referred to as Physical Machines (PMs). The required services are provided to the clients through Virtual Machines (VMs), which abstract these PMs, and each PM might host multiple VMs.
A. Overview on Server Consolidation
Cloud Computing is becoming widespread due to the reduction of cost and effort in maintaining servers in client organizations. As more and more operations are migrated to the Cloud, CCs expand in terms of PMs, and this expansion leads to a significant increase in the total power consumption of CCs. In some situations, some of the PMs have few active VMs, and it was recently demonstrated [2] that even a single active VM can contribute 50% of the power consumption of its hosting PM. Evidently, shutting down such lightly loaded PMs by migrating their VMs can reduce power consumption in CCs. The process of running the CC while shutting down lightly loaded PMs is known as Server Consolidation (SC).
Many efficient VM migration techniques for SC have been proposed in the literature. VM migration techniques are also used for load balancing inside CCs, wherein overloaded VMs are migrated to other PMs so that these new PMs can provide sufficient resources for the efficient execution of the tasks inside the migrated VMs. However, it was highlighted in [3] that VM migration has significant drawbacks in achieving efficient load balancing or SC:
1. VM migration requires halting the current functionality of the VM, which is associated with significant memory consumption and task execution downtime.
2. There is a chance that customer activity information can be lost during the VM migration process, which may increase the monetary expenditure.
3. A significant increase in dirty memory is associated with VM migration.
B. Motivation
In [3], [4], the new/extra tasks of overloaded VMs are migrated instead of the actual VMs to achieve load balancing; however, this migration framework has not been applied to the SC problem. The merits that VM task migration obtains for load balancing also need to be achieved for SC. The VM task migration framework presented in [3] requires extensive modifications to make it adaptable for addressing the SC problem.
C. Paper Contributions
The following contributions are made in this paper:
1. A new VM task migration technique for SC is proposed. The technique identifies the potential PMs which need to be shut down. The extra tasks arriving for the VMs hosted on these potential PMs are migrated to other resourceful PMs, and this migration is guided by a cost function that utilizes estimated parameters such as the probable task execution time and the cost of task migration. The VMs from which extra tasks are migrated continue to be active until all their running tasks finish execution, and then the corresponding PMs can be shut down.
International Journal of Computer Science and Information Security (IJCSIS),
Vol. 16, No. 3, March 2018
80 https://sites.google.com/site/ijcsis/
ISSN 1947-5500
2. The proposed VM task migration technique is simulated using MATLAB. Empirical results demonstrate the excellent power consumption reduction achieved by the proposed technique.
The paper is organized as follows: Section 2 describes related work in the area of the addressed problem. The proposed VM task migration technique for SC is presented in Section 3. The simulation results and corresponding discussions are presented in Section 4. Finally, the work is concluded with future directions in Section 5.
II. RELATED WORK
Extensive contributions have been made to achieve SC through VM migration. Various techniques for SC in virtualized data centers are discussed in [5]. In [6], two VM migration techniques, namely Hybrid and Dynamic Round Robin (DRR), were presented. Two states were defined in the solution framework: retiring and non-retiring. If a PM contains a limited number of active VMs which are about to finish their tasks, the PM is in the retiring state; otherwise, it is in the non-retiring state. Retiring PMs do not accept new tasks, and their active VMs are migrated to suitable PMs. Both Hybrid and DRR exhibit excellent performance w.r.t. reducing power consumption in CCs.
Most VM migration techniques for SC are modeled through the Bin Packing Problem (BPP), which is NP-complete. An approximation scheme based on the First Fit Decreasing algorithm was proposed in [7] to effectively migrate VMs. Each bin is considered a PM, and the highest priority PMs are subjected to VM migration.
The Magnet scheme proposed in [8] selects suitable subsets of the available PMs which can guarantee the expected performance levels. The PMs outside the selected subset are shut down.
A CC management tool was presented in [9]. This tool not only provides a continuous monitoring facility, but also supports live migration of VMs.
In [2], it was emphasized that VMs can be broadly classified as data intensive or CPU intensive based on their respective workloads. For this new framework, the BPP was modified, and suitable approximation schemes were presented.
In [10], the placement of migrated VMs for SC was performed by assigning priority levels to the candidate PMs; the PMs which consume low power were given higher priority.
A non-migratory technique for the reduction of power consumption in CCs was presented in [11]. An energy efficiency model and corresponding heuristics were proposed to reduce power consumption in CCs. Similar techniques utilizing a green computing framework were presented in [12].
Resource scheduling techniques for SC were presented in [13]. Here, a new architectural model was presented to calculate the energy expenditure of different resource scheduling strategies.
Although all the described VM migration techniques achieve noticeable performance in reducing power consumption, they all suffer from excessive downtimes in completing VM migration and from the increase in dirty memory explained before.
The initial work on VM task migration for load balancing in CCs was proposed in [3], [4], [14]. Different quality parameters, such as task execution time, task transfer cost and task power consumption, were utilized in designing the scoring function for task migration. The optimal solution for performing VM task migration was searched through the Particle Swarm Optimization (PSO) technique. Since the VM task migration framework proposed in [3], [4], [14] was specifically designed to address the load balancing issue, it requires suitable adaptations to address the SC problem.
III. VM TASK MIGRATION TECHNIQUE FOR SC
The first step in SC is to identify suitable PMs which can be considered for shutting down. Let PM_k indicate the kth PM in the CC, and num(PM_k) the number of active VMs in PM_k. Each PM is assigned a threshold SD(PM_k), which indicates the minimum number of VMs that must be running in the PM to prevent it from being shut down. This condition is represented in Equation 1, where shutdown(PM_k) = 1 indicates that PM_k should be shut down, and shutdown(PM_k) = 0 indicates that PM_k should be kept active.

shutdown(PM_k) = { 1, if num(PM_k) < SD(PM_k); 0, otherwise }   (1)
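As a minimal sketch of the shutdown test in Equation 1 (the paper's implementation is in MATLAB; this Python version, the function name `shutdown_flags`, and the dictionary inputs are illustrative assumptions):

```python
def shutdown_flags(num_vms, sd_threshold):
    """Evaluate Equation 1 for every PM: flag a PM for shutdown (1)
    when its count of active VMs falls below its threshold SD(PM_k)."""
    return {pm: 1 if num_vms[pm] < sd_threshold[pm] else 0
            for pm in num_vms}

# Example: 'pm1' hosts 3 active VMs against a threshold of 5, so it is
# flagged for shutdown; 'pm2' hosts 12 and stays active.
flags = shutdown_flags({'pm1': 3, 'pm2': 12}, {'pm1': 5, 'pm2': 5})
```

The flagged PMs form the set SD used by the task migration framework below.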
A. Task Migration Framework
Let SD indicate the set of PMs which are eligible to be shut down, and VM the set of active VMs hosted on the PMs ∈ SD. The extra or new tasks submitted to VM will be migrated to other suitable PMs. Once the running tasks ∈ VM finish their execution, all the PMs ∈ SD can be shut down.
Let t_iy indicate the ith extra task submitted to VM_y ∈ VM, and suppose it can be migrated to VM_z, which is hosted on a PM ∉ SD. The migration of t_iy also requires the migration of the data associated with t_iy. The merit of this migration is analyzed through a scoring function represented in Equation 2. Here, score(t_iy, VM_z) indicates the score of the migration strategy which migrates t_iy from VM_y to VM_z; exe_iz indicates the estimated execution time of t_iy inside VM_z; and transfer(t_iy, VM_z) indicates the task transfer time from VM_y to VM_z. These two metrics are represented in Equations 3 and 4, respectively, where c_z indicates the number of CPU nodes present in VM_z, m_z is the memory capacity of VM_z, d_iy indicates the size of the data used by t_iy, and bw_yz indicates the bandwidth available between VM_y and VM_z. The formulation in Equation 3 is based on the intuition that an increase in the data size of a task results in increased execution time, while the presence of rich computational resources in a VM decreases the task execution time. It is evident from Equation 2 that higher values of score(t_iy, VM_z) indicate unattractive options.

score(t_iy, VM_z) = exe_iz + transfer(t_iy, VM_z)   (2)

exe_iz = d_iy / (c_z × m_z)   (3)

transfer(t_iy, VM_z) = d_iy / bw_yz   (4)
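Equations 2–4 translate directly into code; a sketch in Python (rather than the paper's MATLAB), with function and parameter names chosen for readability, not taken from the paper:

```python
def exe_time(d_iy, c_z, m_z):
    # Equation 3: estimated execution time grows with the task's data
    # size and shrinks with the CPU count and memory of the target VM.
    return d_iy / (c_z * m_z)

def transfer_time(d_iy, bw_yz):
    # Equation 4: time to move the task's data over the available bandwidth.
    return d_iy / bw_yz

def score(d_iy, c_z, m_z, bw_yz):
    # Equation 2: lower scores mark more attractive target VMs.
    return exe_time(d_iy, c_z, m_z) + transfer_time(d_iy, bw_yz)

# Example: a 2 GB task on a VM with 4 CPUs and 8 GB memory,
# reached over a 0.5 GB/s link: 2/32 + 2/0.5 = 4.0625
s = score(2.0, 4, 8, 0.5)
```

Note how the transfer term dominates for large data sizes over slow links, which is exactly why the scoring function penalizes distant targets.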
The extra task migration is performed batch-wise, rather than on single tasks, in order to reduce computational overheads. All the extra tasks submitted to VM in a specific time interval I_e are batched together for migration. Consider the scenario where the batch of extra tasks [t_{i1 y1}, t_{i2 y2}, ..., t_{is ys}] submitted to VM needs to be migrated. Suppose [VM_{z1}, VM_{z2}, ..., VM_{zs}] is a candidate solution for the required migration of tasks, wherein t_{ij yj} (1 ≤ j ≤ s) is considered to be migrated from VM_{yj} to VM_{zj}; this candidate solution is denoted as S. There is no restriction that the VMs in the candidate solution be distinct. The score of this migration scheme is represented in Equation 5.

migration_score(S) = ( Σ_{j=1}^{s} score(t_{ij yj}, VM_{zj}) ) / s   (5)

The goal of the task migration scheme, represented in Equation 6, is to discover the optimal candidate solution. It is evident that finding the optimal migration scheme has combinatorial search complexity. To perform an efficient search in polynomial time, the utilization of meta-heuristic techniques for finding near-optimal approximate solutions becomes attractive.

optimization condition = arg min_S migration_score(S)   (6)
B. Algorithm
PSO is a meta-heuristic technique which provides approximate solutions to optimization problems, and it is inspired by the social behavior of birds. The search for the optimal solution is carried out by a group of particles, wherein each particle has an exclusive zone in the candidate solution space, and the union of all particle zones is equal to the candidate solution space. Each point in the candidate solution space represents a candidate solution vector. The particles continuously move within their corresponding zones to identify the optimal solution, and they continuously communicate to exchange their locally discovered best solutions, which in turn determines the velocity with which each particle navigates. The particles continue their search until an acceptable solution is obtained.
The PSO based solution technique for SC through VM task migration utilizes r particles. The current position of the ith particle at iteration t is indicated by X_i(t), and the position for the next iteration is indicated by X_i(t+1), which is calculated as represented in Equation 7. Here, V_i(t+1) indicates the velocity of the ith particle for iteration t+1, and it is calculated as represented in Equation 8, where D1 and D2 indicate the degree of particle attraction towards individual and group success respectively, x_gbest and x_pbest_i indicate the global best solution obtained by all the particles until the current iteration and the local best solution obtained by the ith particle until the current iteration respectively, W indicates a control variable, and r1, r2 ∈ [0, 1] are random factors.

X_i(t+1) = X_i(t) + V_i(t+1)   (7)

V_i(t+1) = W·V_i(t) + D1·r1·(x_pbest_i − X_i(t)) + D2·r2·(x_gbest − X_i(t))   (8)
The PSO based solution technique for SC through VM task migration is outlined in Algorithm 1. Here, initialize_PSO(P) divides the candidate solution space among the r search particles P = {p1, p2, ..., pr}, and assigns each particle to an arbitrary position in its corresponding zone of the candidate solution space. Each particle evaluates the candidate solution at its current position through compute_score(X_i(t)), which utilizes Equations 7 and 8. The values of x_pbest_i and x_gbest are calculated through local_best(score_i) and global_best(P, x_pbest_i) respectively. The particles continue to search until an acceptable solution is found, which is determined through acceptable(x_gbest).
Algorithm 1 PSO Algorithm for SC
P = {p1, p2, ..., pr}
initialize_PSO(P)
flag = 0
t = 0
while flag == 0 do
    t = t + 1
    for i = 1 to r do
        score_i = compute_score(X_i(t))
        x_pbest_i = local_best(score_i)
        x_gbest = global_best(P, x_pbest_i)
        if acceptable(x_gbest) then
            flag = 1
        end if
    end for
end while
C. Simulation Setup
The proposed VM task migration technique for SC is implemented in MATLAB; for ease of reference, it will be referred to as VMSC. The corresponding simulation parameter settings are outlined in Table I. Here, the power
TABLE I. SIMULATION PARAMETER SETTINGS
Simulation Parameter | Set Value
Number of PMs | Varied between 5 × 10^3 and 10^4
Number of VMs present in each PM, tvm(PM_k) | Varied between 2 and 200 (randomized)
nvm(PM_k) | 0.5 × tvm(PM_k)
Number of extra tasks per VM during I_e | Poisson distributed with λ = 5
Number of computing nodes/CPUs in each VM | Varied between 5 and 20
Main memory capacity of each VM | 4 GB / 8 GB / 16 GB
min SD(PM_k) | Varied between 5 and 25
Bandwidth between any 2 VMs | Varied between 100 Mbps and 500 Mbps
Number of PSO search particles | Varied between 5 and 25
Computing nodes used for PSO execution | One computing node per particle
Size of task data | Varied between 1 GB and 10 GB
Power consumed by each VM | Varied between 0 and 1 (normalized)
consumption of each VM is normalized to [0, 1] for
convenience, wherein 1 indicates maximum power consumption
and 0 indicates that the VM is inactive. The number of VMs
present in each PM is also decided randomly in order to reflect
realism. The effectiveness of VMSC is analyzed through two
metrics, represented in Equations 9 and 10. Here, pwc_b
indicates the average power consumption of all the PMs inside
the addressed CC, indicated by CCr, before VMSC is executed;
pwc(PMk) indicates the average power consumed by PMk;
|CCr| indicates the number of PMs present in CCr; CC'r
indicates CCr after the execution of VMSC; pwc_a indicates
the average power consumption of all the PMs inside CC'r;
and |CC'r| = |CCr|.
The metric pwc(PMk) is calculated as represented in
Equation 11. Here, pwc(VMj) indicates the power consumed
by the jth VM, and |PMk| indicates the number of VMs
present in PMk. It is clear from Equations 9 and 10 that
0 ≤ pwc_a, pwc_b ≤ 1.
pwc_b = ( Σ_{PMk ∈ CCr} pwc(PMk) ) / |CCr|    (9)

pwc_a = ( Σ_{PMk ∈ CC'r} pwc(PMk) ) / |CC'r|    (10)

pwc(PMk) = ( Σ_{VMj ∈ PMk} pwc(VMj) ) / |PMk|    (11)
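Equations 9-11 are plain averages and can be sketched directly; this is an illustrative helper under the assumption that each PM is represented as a list of per-VM normalized power values, which is not a structure the paper specifies.

```python
def pwc(pm):
    """Equation 11: average power consumed by the VMs in one PM.

    pm is a list of per-VM normalized power values in [0, 1];
    a shut-down PM contributes all-zero values."""
    return sum(pm) / len(pm)

def pwc_cc(cc):
    """Equations 9 and 10: average of pwc(PMk) over all PMs in a CC.

    cc is a list of PMs. Evaluated on the CC before VMSC runs it
    yields pwc_b; evaluated on the same-sized CC after VMSC (with
    consolidated PMs shut down) it yields pwc_a."""
    return sum(pwc(pm) for pm in cc) / len(cc)
```

Since every per-VM value lies in [0, 1], both averages stay in [0, 1], matching the bound stated for Equations 9 and 10.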
IV. EMPIRICAL RESULTS AND DISCUSSIONS
The first experiment evaluates the performance of VMSC
when the number of PMs is varied. The analysis results
w.r.t. pwc and execution time are illustrated in Figures 1 and
2 respectively. Due to the increase in PMs and the random
number of VMs present in each PM, the number of PMs
suitable for shutdown tends to increase; hence, pwc_b and
pwc_a exhibit monotonically non-increasing behavior. The
monotonically non-decreasing behavior w.r.t. execution time
Fig. 1. No. of PMs vs pwc (x-axis: No. of PMs ×10^4; y-axis: pwc; curves: pwc_b, pwc_a)
is majorly due to the increase in computational load. It is clear
that VMSC provides significant benefits in optimizing power
consumption in CCs, and exhibits merit in identifying an
approximate solution with appreciable execution efficiency.
The second experiment analyzes the execution time of
VMSC when the number of PSO search particles is varied
and the number of PMs is fixed at 10^4, which corresponds
to the highest load case utilized in the empirical analysis. The
analysis result is illustrated in Figure 3. As the number of PSO
search particles increases, the corresponding increase in
parallelism results in better execution efficiency.
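The parallelism argument above follows from the simulation setting of one computing node per particle: per-iteration wall-clock time approaches the cost of a single fitness evaluation. A minimal sketch of concurrent particle scoring is given below; a thread pool is used only to keep the example self-contained, whereas the paper's setup would correspond to one process or cluster node per particle, and compute_score is a placeholder for the SC-specific fitness.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_particles(positions, compute_score, workers):
    """Score all particle positions concurrently, preserving order.

    With workers == len(positions), every particle's compute_score
    call runs in its own worker, mirroring the one-node-per-particle
    simulation setting."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map yields results in input order
        return list(pool.map(compute_score, positions))
```

For CPU-bound scoring in CPython, a ProcessPoolExecutor (or separate compute nodes, as in the simulation) would be needed to realize the speedup, since threads share one interpreter lock.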
The third experiment analyzes the performance of VMSC
when min_SD(PMk) is varied. The analysis results w.r.t.
pwc_a and execution time are illustrated in Figures 4 and 5.
The increase of min_SD(PMk) creates an opportunity to
include more PMs for shutdown, which in turn improves
pwc_a; for the same reason, the computational load also
increases, and execution efficiency decreases.
The final experiment analyzes the execution time of VMSC
when the number of PSO search particles is varied and
min_SD(PMk) = 25. The analysis result is illustrated in
Figure ??. The performance reasoning for VMSC is similar
to that of the second experiment.
V. CONCLUSION
In this work, the importance of SC in CCs was described,
and the drawbacks of VM migration techniques for SC were
outlined. A new SC approach using the VM task migration
concept was presented, which utilized a PSO based search
technique. Empirical results demonstrated the effectiveness of
the proposed technique in reducing power consumption in
CCs with appreciable execution efficiency. In future work, the
design of probabilistic models for SC, which predict the load
behavior of PMs, can be investigated for implementing
effective preemptive actions.
Fig. 2. No. of PMs vs Exe Time (x-axis: No. of PMs ×10^4; y-axis: ExeTime (s); curve: VMSC)
Fig. 3. No. of Particles vs Exe Time (y-axis: ExeTime (s); curve: VMSC)
Fig. 4. min_SD(PMk) vs pwc_a (curve: VMSC)
Fig. 5. min_SD(PMk) vs Exe Time (y-axis: ExeTime (s); curve: VMSC)
REFERENCES
[1] Zeng Wenying, Zhao Yuelong and Zeng Junwei, "Cloud Service
and Service Selection Algorithm Research", Proceedings of the First
ACM/SIGEVO Summit on Genetic and Evolutionary Computation, GEC
'09, Shanghai, China, pp. 1045-1048.
[2] Jyun-Shiung Yang, Pangfeng Liu and Jan-Jan Wu, ”Workload
Characteristics-aware virtual machine consolidation algorithms”, 2012
IEEE 4th International Conference on Cloud Computing Technology and
Science, 978-1-4673-4510-1.
[3] F. Ramezani, J. Lu, and F. K. Hussain, "Task Based System Load Balancing
in Cloud Computing Using Particle Swarm Optimization", International
Journal of Parallel Programming, Vol. 42, No. 5, pp. 739-754, 2014.
[4] Geetha Megharaj and Mohan G. Kabadi, "Metaheuristic Based Virtual
Machine Task Migration Technique for Load Balancing in Cloud",
Second International Conference on Integrated Intelligent Computing,
Communication and Security (ICIIC-2018), SJBIT, Bengaluru, January
2018.
[5] Amir Varasteh and Maziar Goudarzi, "Server Consolidation Techniques in
Virtualized Data Centers: A Survey", IEEE Systems Journal, Vol. 11,
No. 2, June 2017.
[6] C. C. Lin, P. Liu and J. J. Wu, "Energy-efficient Virtual Machine Provision
Algorithms for Cloud Systems", Fourth IEEE International Conference on
Utility and Cloud Computing (UCC), 2011, pp. 81-88.
[7] Shingo Takeda and Toshinori Takemura, "A Rank-based VM Consolidation
Method for Power Saving in Datacenters", IPSJ Online Transactions,
Vol. 3, No. 2, pp. 88-96, January 2010.
[8] Liting Hu, Hai Jin, Xianjie Xiong and Haikun Liu, "Magnet: A Novel
Scheduling Policy for Power Reduction in Cluster with Virtual Machines",
2008 IEEE International Conference on Cluster Computing, pp. 13-22.
[9] Liu Liang, Wang Hao, Liu Xue, Jin Xing, He Wen Bo, Wang Qing
Bo and Chen Ying, "GreenCloud: A New Architecture for Green Data
Center", Proceedings of the 6th International Conference on Autonomic
Computing and Communications Industry Session, 2009, pp. 29-38.
[10] Kamali Gupta and Vijay Katiyar, "Energy Aware Virtual Machine
Migration Techniques for Cloud Environment", International Journal of
Computer Applications, Vol. 141, May 2016.
[11] Pragya and G. Manjeey, "A Review on Energy Efficient Techniques in
Green Cloud Computing", International Journal of Advanced Research
in Computer Science and Software Engineering, Vol. 5, Issue 3, 2015,
pp. 550-554.
[12] A. Banerjee, P. Agrawal and N. Ch. S. N. Iyengar, "Energy Efficient Model
for Cloud Computing", International Journal of Energy, Information and
Communications, Vol. 4, Issue 6, 2013, pp. 29-42.
[13] Kamali Gupta and Vijay Katiyar, "Energy Aware Scheduling Framework
for Resource Allocation in a Virtualized Cloud Data Centre", International
Journal of Engineering and Technology, DOI: 10.21817/ijet/2017/v9i2/170902032.
[14] F. Ramezani, J. Lu, and F. Hussain, "Task Based System Load Balancing
Approach in Cloud Environments", Knowledge Engineering and
Management, pp. 31-42, 2014.