This document summarizes a student report on optimizing virtual machine placement across geo-distributed data centers to minimize costs. It proposes using an optimization model to determine the optimal spare capacity allocation across data centers while considering electricity costs, demand variability, and other factors. It also describes using a heuristic algorithm to place VMs on physical machines across data centers in a way that minimizes operating costs like electricity and communication costs.
SLA based optimization of power and migration cost in cloud computing (Nikhil Venugopal)
This document summarizes a research paper about optimizing power consumption and migration costs in cloud computing systems while meeting service level agreements (SLAs). The paper presents an algorithm to solve the resource allocation problem of minimizing total energy costs while probabilistically meeting client SLAs specifying upper bounds on service times. The algorithm uses convex optimization and dynamic programming. Simulation results show the algorithm is more effective than previous approaches at optimizing power and migration costs under SLAs.
AUTO RESOURCE MANAGEMENT TO ENHANCE RELIABILITY AND ENERGY CONSUMPTION IN HET... (IJCNCJournal)
This document summarizes an article from the International Journal of Computer Networks & Communications that proposes an Auto Resource Management (ARM) scheme to improve reliability and reduce energy consumption in heterogeneous cloud computing environments. The ARM scheme includes three components: 1) static and dynamic thresholds to detect host over/underutilization, 2) a virtual machine selection policy, and 3) a method to select placement hosts for migrated VMs. It also proposes a Short Prediction Resource Utilization method to improve decision making by considering predicted future utilization along with current utilization. The scheme is tested on a cloud simulator using real workload trace data, and results show it can enhance decision making, reduce energy consumption and SLA violations.
Iaetsd effective fault tolerant resource allocation with cost (Iaetsd Iaetsd)
1) The document proposes a fault-tolerant resource allocation method for cloud computing that aims to minimize user payment while meeting task deadlines.
2) It formulates a deadline-driven resource allocation problem based on virtual machine isolation technology and proposes an optimal solution with polynomial time complexity.
3) Experimental results show that the proposed work more efficiently schedules and allocates resources, improving utilization of cloud infrastructure resources.
1. The document discusses the economic properties of cloud computing including common infrastructure, location independence, online connectivity, utility pricing, and on-demand resources.
2. It provides details on utility pricing models and how cloud computing can be cheaper than owning resources depending on the ratio of peak to average demand.
3. On-demand cloud resources allow organizations to dynamically scale up or down based on changing demand levels without penalty, which provides significant economic benefits over static resource provisioning.
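The peak-to-average argument above can be made concrete with a small sketch (the function name and the unit-cost normalization are illustrative assumptions, not from the summarized document): owning capacity must be provisioned for peak demand, while utility pricing charges a premium per unit actually consumed, so the cloud is cheaper whenever the utility premium is below the peak-to-average demand ratio.

```python
def cheaper_in_cloud(peak, average, utility_premium):
    """Owned capacity must be sized for peak demand (priced at a
    baseline unit cost of 1.0), while utility pricing pays the
    premium only for the average demand actually served."""
    owning_cost = peak * 1.0
    cloud_cost = average * utility_premium
    return cloud_cost < owning_cost

# A spiky workload: peak 100 units, average 25, so the ratio is 4.
# Even at a 2x utility premium the cloud comes out cheaper.
print(cheaper_in_cloud(100, 25, 2.0))  # True
print(cheaper_in_cloud(100, 90, 2.0))  # False: flat demand favours owning
```

The break-even condition is simply premium < peak/average, which is why the summaries stress the ratio of peak to average demand.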
This document discusses power-aware computing in cloud environments. It identifies high power consumption as a major challenge for data centers and explores several techniques to reduce it, including: virtualization to consolidate servers; migration of virtual machines between servers; and algorithms like bin packing and dynamic voltage scaling to optimize resource allocation. The key idea is to improve energy efficiency by running fewer physical servers and dynamically powering down unused servers through server consolidation using virtualization and live migration of virtual machines. This allows jobs to be allocated to servers that consume less power, reducing overall data center power usage and costs.
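The bin-packing idea behind server consolidation can be illustrated with a standard first-fit-decreasing heuristic; this is a generic sketch of the technique, not the specific algorithm from the summarized document.

```python
def first_fit_decreasing(vm_loads, host_capacity):
    """Pack VM loads onto as few hosts as possible: sort VMs by
    descending load, then place each on the first active host with
    enough room, powering on a new host only when none fits."""
    hosts = []  # each entry is the remaining capacity of one active host
    for load in sorted(vm_loads, reverse=True):
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] = free - load
                break
        else:
            hosts.append(host_capacity - load)  # power on a new host
    return len(hosts)

# Five VMs fit on 2 hosts of capacity 10 instead of 5 lightly loaded ones,
# so 3 physical servers can be switched off.
print(first_fit_decreasing([6, 5, 4, 3, 2], 10))  # 2
```

Fewer active hosts directly translates into the energy savings the document describes, since idle servers can be powered down.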
ABSTRACT
Cloud computing utilizes large-scale computing infrastructure that has been radically changing the IT landscape, enabling remote access to computing resources with low service cost, high scalability, availability and accessibility. Serving tasks from multiple users, where the tasks have different characteristics and varying computing power requirements, may cause under- or over-utilization of resources. Maintaining such a mega-scale datacenter therefore requires an efficient resource management procedure to increase resource utilization. However, while maintaining efficiency in service provisioning, it is necessary to ensure the maximization of profit for the cloud providers. Most current research aims at how providers can offer efficient service provisioning to the user and improve system performance; there are comparatively few works on resource management that also address the economic side of profit maximization for the provider. In this paper we present a model that deals with both efficient resource utilization and pricing of the resources. The joint resource management model combines user assignment, task scheduling and load balancing on the basis of CPU power endorsement. We propose four algorithms, respectively for user assignment, task scheduling, load balancing and pricing, that work on group-based resources, offering reductions in task execution time (56.3%), activated physical machines (41.44%) and provisioning cost (23%). The cost is calculated over a time interval from the number of customers served and the amount of resources used within that interval.
This document summarizes an article from the International Journal of Research in Advent Technology that proposes algorithms for energy-aware resource allocation in datacenters with minimized virtual machine migrations. It discusses how virtualization allows servers to be consolidated onto fewer physical machines to reduce hardware and power consumption. The algorithms aim to dynamically reallocate VMs according to current resource needs while ensuring quality of service and reliability, with the goal of minimizing the number of active physical nodes and switching idle nodes to a low-power state. It describes two proposed VM selection policies: the Minimum Migrations policy, which selects the minimum number of VMs to migrate from overloaded hosts, and the Highest Potential Growth policy, which migrates VMs with the lowest current CPU usage to prevent future overload.
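One plausible reading of the Minimum Migrations idea can be sketched as follows; the greedy largest-first selection is an assumption on my part, and the original policy may differ in detail.

```python
def minimum_migrations(vm_usages, threshold):
    """Greedy sketch: migrate the largest VMs first until the host's
    total load drops to the threshold, which tends to minimize the
    number of migrations needed."""
    excess = sum(vm_usages.values()) - threshold
    migrate = []
    for vm, usage in sorted(vm_usages.items(), key=lambda kv: -kv[1]):
        if excess <= 0:
            break
        migrate.append(vm)
        excess -= usage
    return migrate

# Host at 0.7 load with a 0.5 threshold: moving just 'a' (0.4) is enough.
print(minimum_migrations({'a': 0.4, 'b': 0.2, 'c': 0.1}, 0.5))  # ['a']
```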
This document summarizes a research paper on developing an efficient and dynamic resource allocation mechanism for cloud infrastructure services based on genetic algorithms. The mechanism aims to reduce energy utilization and latency by exactly matching resource requirements to virtual machine capacities while tolerating variations in available infrastructure and workload requirements. It proposes classifying workloads and machines based on their heterogeneities and allocating tasks in a way that diversifies machine usage to reduce risks from potential attackers. The genetic algorithm-based approach is compared to other scheduling methods and experimental results demonstrate its effectiveness in lowering power consumption and delay. Future work could account for machines with capacities exceeding available resources and optimize allocation based on predicted capacities.
A Study on Energy Efficient Server Consolidation Heuristics for Virtualized C... (Susheel Thakur)
This document summarizes research on improving energy efficiency in data centers through dynamic virtual machine consolidation. It discusses how virtualization allows multiple virtual machines to run on single physical servers, improving resource utilization. Dynamic consolidation techniques migrate virtual machines between servers based on resource usage to minimize the number of active servers and reduce energy costs. The document reviews different server consolidation heuristics that aim to pack virtual machines tightly and turn off underutilized physical machines to reduce energy consumption in cloud data centers.
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
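The bind-or-migrate rule in points 1–3 can be sketched as follows; the data-center record structure and the 0.8 saturation threshold are illustrative assumptions.

```python
def schedule_task(task_size, datacenters, saturation=0.8):
    """Bind the task to the current data center if its load is below
    the saturation threshold; otherwise try the next data centers in
    turn. Each data center is a dict with 'name', 'load', 'capacity'."""
    for dc in datacenters:
        if dc['load'] / dc['capacity'] < saturation:
            dc['load'] += task_size
            return dc['name']
    return None  # every data center is saturated

dcs = [{'name': 'dc1', 'load': 85, 'capacity': 100},
       {'name': 'dc2', 'load': 30, 'capacity': 100}]
print(schedule_task(10, dcs))  # dc2: dc1 is above the 0.8 threshold
```

A fuller version would also weight the choice by inter-datacenter bandwidth, which is the point of the bandwidth-aware policy.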
Server Consolidation Algorithms for Virtualized Cloud Environment: A Performa... (Susheel Thakur)
This document summarizes research on server consolidation algorithms for virtualized cloud environments with variable workloads. It discusses how server consolidation aims to reduce the number of physical servers through virtualization and live migration of virtual machines between servers. The document reviews several existing server consolidation algorithms and studies their impacts on performance when migrating virtual machines. It then presents an evaluation of selected algorithms under variable workloads to reduce server sprawl, optimize power consumption, and balance loads across physical machines in cloud computing environments.
Performance Evaluation of Server Consolidation Algorithms in Virtualized Clo... (Susheel Thakur)
The document discusses server consolidation algorithms for virtualized cloud environments. It analyzes the performance of Sandpiper, Khanna's, and Entropy algorithms under constant load. Sandpiper detects hotspots using monitoring and profiling, then migrates VMs to mitigate hotspots. Khanna's algorithm sorts PMs by residual capacity and VMs by usage to migrate VMs from overloaded to underloaded PMs. Entropy formulates VM allocation as a constraint satisfaction problem and uses a constraint solver to optimize resource usage and minimize migrations. The paper evaluates these algorithms in a virtualized test environment under constant loads.
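A single migration step in the spirit of Khanna's algorithm, as described above, might look like this sketch; the field names and tie-breaking are assumptions, not taken from the paper.

```python
def khanna_step(overloaded_vms, pms):
    """Take the least-loaded VM off the overloaded host and place it
    on the PM with the smallest residual capacity that can still hold
    it, so machines stay tightly packed."""
    vm = min(overloaded_vms, key=lambda v: v['usage'])
    candidates = [p for p in pms if p['residual'] >= vm['usage']]
    if not candidates:
        return None  # no PM has room for the VM
    target = min(candidates, key=lambda p: p['residual'])
    target['residual'] -= vm['usage']
    return vm['name'], target['name']

vms = [{'name': 'v1', 'usage': 3}, {'name': 'v2', 'usage': 1}]
pms = [{'name': 'p1', 'residual': 2}, {'name': 'p2', 'residual': 5}]
print(khanna_step(vms, pms))  # ('v2', 'p1'): smallest VM to tightest fit
```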
Dynamic resource allocation using virtual machines for cloud computing enviro... (IEEEFINALYEARPROJECTS)
Performance Analysis of Server Consolidation Algorithms in Virtualized Cloud... (Susheel Thakur)
This document discusses server consolidation algorithms for virtualized cloud environments. It begins with an introduction to cloud computing and virtualization. It then reviews several existing server consolidation algorithms from literature, including Sandpiper, Khanna's algorithm, and Entropy. Sandpiper aims to mitigate hotspots by migrating virtual machines between physical machines. Khanna's algorithm aims for server consolidation by packing virtual machines to minimize the number of physical machines needed. Entropy aims to minimize the number of migrations required during consolidation. The document evaluates the performance of these algorithms in a virtualized cloud test environment.
Load Balancing in Cloud Computing Environment: A Comparative Study of Service... (Eswar Publications)
Load balancing is a computer networking method to distribute workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, in order to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by dedicated software or hardware, such as a multilayer switch or a Domain Name System server. In this paper, the existing static algorithms used for simple cloud load balancing are identified, and a hybrid algorithm is suggested for future development.
Dynamic resource allocation using virtual machines for cloud computing enviro... (Kumar Goud)
Abstract—Cloud computing allows business customers to scale their resource usage up and down based on need. We present a system that uses virtualization technology to allocate data center resources dynamically based on application demands and to support green computing by optimizing the number of servers in use. We introduce the concept of "skewness" to measure the unevenness in the multi-dimensional resource utilization of a server. By minimizing skewness, we can combine workloads with different resource demands and improve the overall utilization of server resources. We develop a set of heuristics that prevent overload in the system effectively while saving energy. Many of the touted gains in the cloud model come from resource multiplexing through virtualization technology. Trace-driven simulation and experiment results demonstrate that our algorithm achieves good performance.
Index Terms—Cloud computing, resource management, virtualization, green computing.
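The skewness metric from the abstract above is commonly defined as the root of the summed squared relative deviations of each resource's utilization from the mean across resources; a minimal sketch, assuming that definition:

```python
from math import sqrt

def skewness(utilizations):
    """Skewness of a server's multi-dimensional resource utilization
    (CPU, memory, network, ...): sqrt of the sum over resources of
    (u_i / mean - 1)^2. Lower values mean more even usage."""
    mean = sum(utilizations) / len(utilizations)
    return sqrt(sum((u / mean - 1) ** 2 for u in utilizations))

print(skewness([0.5, 0.5, 0.5]))  # 0.0: perfectly even usage
```

Placing VMs so that each server's skewness stays low mixes CPU-heavy and memory-heavy workloads on the same host, which is the intuition the abstract appeals to.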
Welcome to International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
This document summarizes research on techniques for virtual machine (VM) scheduling and management to improve energy efficiency in cloud computing. It discusses how VM scheduling algorithms aim to optimally map VMs to physical servers while minimizing costs and power consumption. Precopy and postcopy live migration techniques are described for managing VMs. The document surveys various algorithms for VM scheduling, including ones based on data transfer time, linear programming, and combinatorial optimization. It also discusses factors that affect VM migration efficiency such as hypervisor options and network configuration. Overall, the document provides an overview of energy-efficient approaches for VM scheduling and management in cloud computing.
SERVER CONSOLIDATION ALGORITHMS FOR CLOUD COMPUTING: A REVIEW (Susheel Thakur)
This document summarizes a research paper on server consolidation algorithms for cloud computing environments. It discusses how server consolidation aims to reduce the number of underutilized servers through virtual machine migration and load balancing techniques. It reviews different server consolidation algorithms like Sandpiper that automate monitoring for hotspots, resizing or migrating virtual machines to improve resource utilization and energy efficiency. The document provides background on server consolidation and virtualization concepts and categorizes consolidation approaches before analyzing the Sandpiper algorithm in more detail.
Hybrid Based Resource Provisioning in Cloud (Editor IJCATR)
The data centres and the energy consumption characteristics of their various machines are often observed to have different capacities. Analysing public cloud workloads of different priorities and the performance requirements of various applications yields some invariant observations about the cloud, and cloud data centres become capable of sensing an opportunity to provision differently. In our proposed work, we use a hybrid method for resource provisioning in data centres. This method is used to allocate resources under the prevailing working conditions while also accounting for the energy drawn in power consumption, and to allocate the processing behind the cloud storage.
IRJET- Time and Resource Efficient Task Scheduling in Cloud Computing Environ... (IRJET Journal)
This document summarizes a research paper that proposes a Task Based Allocation (TBA) algorithm to efficiently schedule tasks in a cloud computing environment. The algorithm aims to minimize makespan (completion time of all tasks) and maximize resource utilization. It first generates an Expected Time to Complete (ETC) matrix that estimates the time each task will take on different virtual machines. It then sorts tasks by length and allocates each task to the VM that minimizes its completion time, updating the VM wait times. The algorithm is evaluated using CloudSim simulation and is shown to reduce makespan, execution time and costs compared to random and first-come, first-served scheduling approaches.
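The TBA steps described above (build an ETC matrix, sort tasks by length, assign each task to the VM with the earliest completion) can be sketched as follows; deriving ETC from VM speeds and the exact tie-breaking are assumptions for illustration.

```python
def task_based_allocation(task_lengths, vm_speeds):
    """Sketch of the TBA idea: the ETC matrix holds the expected time
    to complete each task on each VM; tasks are handled in descending
    length order, each going to the VM that finishes it earliest,
    with per-VM wait times updated after every assignment."""
    etc = [[length / speed for speed in vm_speeds]
           for length in task_lengths]
    wait = [0.0] * len(vm_speeds)
    plan = {}
    order = sorted(range(len(task_lengths)),
                   key=lambda i: task_lengths[i], reverse=True)
    for i in order:
        vm = min(range(len(vm_speeds)), key=lambda j: wait[j] + etc[i][j])
        plan[i] = vm
        wait[vm] += etc[i][vm]
    return plan, max(wait)  # assignment and resulting makespan

# Tasks of length 8, 4, 2 on a fast VM (speed 2) and a slow VM (speed 1).
plan, makespan = task_based_allocation([8, 4, 2], [2, 1])
print(plan, makespan)  # {0: 0, 1: 1, 2: 0} 5.0
```

Minimizing the per-task completion time while tracking wait times is what drives down the makespan relative to random or FCFS scheduling.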
This presentation summarizes the costs associated with cloud data centers, including servers (45% of costs), infrastructure (25%), power draw (15%), and networking (15%). It discusses problems like low server utilization (10%) due to uneven application fits and long provisioning timescales. Solutions proposed include increasing utilization by matching applications better to server resources and reducing provisioning times. Reducing power consumption through more efficient hardware could lower infrastructure and power costs. The presentation argues data center networks need properties like location-independent addressing and uniform bandwidth/latency to improve agility. It suggests incentivizing efficient resource usage and filling low-usage periods. Geo-distributing services could improve performance but requires joint optimization of data center and network resources.
A Survey on Virtualization Data Centers For Green Cloud Computing (IJTET Journal)
Abstract—Due to trends like cloud computing and green cloud computing, virtualization technologies are gaining increasing importance. The cloud is a typical model for computing resources, which aims to move the computing framework to the network in order to cut down the costs of software and hardware resources. Nowadays, power is one of the big issues of the IDC (Internet Data Center) and has huge impacts on society, and researchers are seeking solutions that allow IDCs to reduce power consumption. These IDCs consume large amounts of energy to process cloud services, incur high operational costs, and shorten the lifespan of hardware equipment. The field of green computing is also becoming more and more important in a world with finite energy resources and rising demand. The Virtual Machine (VM) mechanism has been broadly applied in data centers, offering flexibility, reliability, and manageability. This research survey covers virtualized IDCs in the green cloud, including key features of the green cloud, cloud computing, data centers, virtualization, data centers with virtualization, and power-aware, thermal-aware, network-aware, resource-aware and migration techniques. In this paper, the several methods utilized to achieve virtualization in IDCs for green cloud computing are discussed.
Survey on Dynamic Resource Allocation Strategy in Cloud Computing Environment (Editor IJCATR)
Cloud computing has become quite popular among cloud users by offering a variety of resources. It is an on-demand service because it offers dynamic, flexible resource allocation and guaranteed services to the public in a pay-as-you-use manner. In this paper, we present several dynamic resource allocation techniques and their performance. The paper provides a detailed description of dynamic resource allocation techniques in the cloud for cloud users, and a comparative study gives clear details about the different techniques.
A Survey on Resource Allocation & Monitoring in Cloud Computing (Mohd Hairey)
This document provides an overview of a survey on resource allocation and monitoring in cloud computing. It discusses (1) cloud computing and its key characteristics, (2) elements of resource management including allocation, monitoring, discovery and provisioning, (3) existing mechanisms for resource allocation and monitoring, and (4) gaps in current approaches. The survey aims to study resource allocation and monitoring in cloud computing and describe issues and current solutions to help develop a better resource management framework.
This document discusses how virtualization and cloud computing can improve disaster recovery management. It begins by describing traditional disaster recovery approaches like dedicated and shared models that require tradeoffs between cost and speed of recovery. It then explains how cloud computing provides virtualized disaster recovery mechanisms that can offer lower costs, faster recovery times through replication of virtual servers, and improved scalability and flexibility. The document concludes that cloud computing is well-suited for disaster recovery as it allows organizations to scale resources as needed and achieve more reliable continuity of operations at lower costs than traditional approaches.
Green Cloud Computing: Emerging Technology (IRJET Journal)
This document discusses green cloud computing and how cloud infrastructure contributes to high energy consumption. It summarizes that while cloud computing provides cost and scalability benefits, the growing demand on data centers has increased energy usage and carbon emissions. However, the document also explains that cloud computing technologies like dynamic provisioning, multi-tenancy, high server utilization, and efficient data center design can help reduce the environmental impact and enable more sustainable "green" cloud computing through higher efficiency. Future research directions are needed to further optimize cloud resource usage and energy efficiency from a holistic perspective.
Guardian Healthcare Services migrated their IT infrastructure from an outsourced hosted solution to an in-house virtualized infrastructure using VMware. They consolidated 14 remote nursing home facilities across 3 states onto VMware servers and HP hardware in their own datacenter. This allowed them to gain more control over their systems and realize cost savings. The document describes their project planning, infrastructure design, server consolidation, migration process, and benefits realized from the new virtualized environment.
This project implements a Disaster Recovery Manager for a data center that monitors virtual machines and recovers them if they fail. It uses the VMware vSphere API to connect to ESXi hypervisors and vCenter. The manager includes components to ping VMs, take periodic snapshots, and recover failed VMs either by reverting snapshots or migrating VMs to a new host. It detects failures by checking for missed heartbeat pings and excludes VMs intentionally powered off by a user from recovery. The manager was implemented in Java using multithreading and allows for conversion between image formats to support multiple hypervisors.
A Study on Energy Efficient Server Consolidation Heuristics for Virtualized C...Susheel Thakur
This document summarizes research on improving energy efficiency in data centers through dynamic virtual machine consolidation. It discusses how virtualization allows multiple virtual machines to run on single physical servers, improving resource utilization. Dynamic consolidation techniques migrate virtual machines between servers based on resource usage to minimize the number of active servers and reduce energy costs. The document reviews different server consolidation heuristics that aim to pack virtual machines tightly and turn off underutilized physical machines to reduce energy consumption in cloud data centers.
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
Server Consolidation Algorithms for Virtualized Cloud Environment: A Performa...Susheel Thakur
This document summarizes research on server consolidation algorithms for virtualized cloud environments with variable workloads. It discusses how server consolidation aims to reduce the number of physical servers through virtualization and live migration of virtual machines between servers. The document reviews several existing server consolidation algorithms and studies their impacts on performance when migrating virtual machines. It then presents an evaluation of selected algorithms under variable workloads to reduce server sprawl, optimize power consumption, and balance loads across physical machines in cloud computing environments.
Performance Evaluation of Server Consolidation Algorithms in Virtualized Clo...Susheel Thakur
The document discusses server consolidation algorithms for virtualized cloud environments. It analyzes the performance of Sandpiper, Khanna's, and Entropy algorithms under constant load. Sandpiper detects hotspots using monitoring and profiling, then migrates VMs to mitigate hotspots. Khanna's algorithm sorts PMs by residual capacity and VMs by usage to migrate VMs from overloaded to underloaded PMs. Entropy formulates VM allocation as a constraint satisfaction problem and uses a constraint solver to optimize resource usage and minimize migrations. The paper evaluates these algorithms in a virtualized test environment under constant loads.
Dynamic resource allocation using virtual machines for cloud computing enviro...IEEEFINALYEARPROJECTS
Performance Analysis of Server Consolidation Algorithms in Virtualized Cloud...Susheel Thakur
This document discusses server consolidation algorithms for virtualized cloud environments. It begins with an introduction to cloud computing and virtualization. It then reviews several existing server consolidation algorithms from literature, including Sandpiper, Khanna's algorithm, and Entropy. Sandpiper aims to mitigate hotspots by migrating virtual machines between physical machines. Khanna's algorithm aims for server consolidation by packing virtual machines to minimize the number of physical machines needed. Entropy aims to minimize the number of migrations required during consolidation. The document evaluates the performance of these algorithms in a virtualized cloud test environment.
Load Balancing in Cloud Computing Environment: A Comparative Study of Service...Eswar Publications
Load balancing is a computer networking method for distributing workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, in order to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by dedicated software or hardware, such as a multilayer switch or a Domain Name System server. In this paper, the existing static algorithms used for simple cloud load balancing are identified, and a hybrid algorithm is suggested for future development.
dynamic resource allocation using virtual machines for cloud computing enviro...Kumar Goud
Abstract—Cloud computing allows business customers to scale their resource usage up and down based on need. We present a system that uses virtualization technology to allocate data center resources dynamically based on application demands and to support green computing by optimizing the number of servers in use. We introduce the concept of “skewness” to measure the unevenness in the multidimensional resource utilization of a server. By minimizing skewness, we can combine different types of workloads effectively and improve the overall utilization of server resources. We develop a set of heuristics that effectively prevent overload in the system while saving energy. Many of the touted gains in the cloud model come from resource multiplexing through virtualization technology. Trace-driven simulation and experiment results demonstrate that our algorithm achieves good performance.
Index Terms—Cloud computing, resource management, virtualization, green computing.
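A minimal sketch of the "skewness" metric mentioned in the abstract, using the common formulation skewness(p) = sqrt(Σᵢ (rᵢ/r̄ − 1)²) over a server's per-resource utilizations; the sample utilization figures below are invented for illustration.

```python
# Illustrative sketch of the "skewness" metric: for a server p with resource
# utilizations r_1..r_n and mean utilization r_bar,
#     skewness(p) = sqrt( sum_i (r_i / r_bar - 1)^2 ).
# A server using CPU, memory and network evenly has skewness 0; uneven usage
# raises it. The sample numbers below are made up for illustration.

import math

def skewness(utilizations):
    r_bar = sum(utilizations) / len(utilizations)
    return math.sqrt(sum((r / r_bar - 1.0) ** 2 for r in utilizations))

balanced = skewness([0.5, 0.5, 0.5])   # evenly used server -> 0.0
uneven = skewness([0.9, 0.3, 0.3])     # CPU-heavy server -> noticeably > 0
```

Minimizing this quantity when placing VMs favors mixing CPU-heavy and memory-heavy workloads on the same server, which is the paper's stated goal.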
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
This document summarizes research on techniques for virtual machine (VM) scheduling and management to improve energy efficiency in cloud computing. It discusses how VM scheduling algorithms aim to optimally map VMs to physical servers while minimizing costs and power consumption. Precopy and postcopy live migration techniques are described for managing VMs. The document surveys various algorithms for VM scheduling, including ones based on data transfer time, linear programming, and combinatorial optimization. It also discusses factors that affect VM migration efficiency such as hypervisor options and network configuration. Overall, the document provides an overview of energy-efficient approaches for VM scheduling and management in cloud computing.
SERVER CONSOLIDATION ALGORITHMS FOR CLOUD COMPUTING: A REVIEWSusheel Thakur
This document summarizes a research paper on server consolidation algorithms for cloud computing environments. It discusses how server consolidation aims to reduce the number of underutilized servers through virtual machine migration and load balancing techniques. It reviews different server consolidation algorithms like Sandpiper that automate monitoring for hotspots, resizing or migrating virtual machines to improve resource utilization and energy efficiency. The document provides background on server consolidation and virtualization concepts and categorizes consolidation approaches before analyzing the Sandpiper algorithm in more detail.
Hybrid Based Resource Provisioning in CloudEditor IJCATR
The data centres and the energy consumption characteristics of the various machines are often noted to have different capacities. When the public cloud workloads of different priorities and the performance requirements of various applications were analysed, we noted some invariant patterns about the cloud, and the cloud data centres become capable of sensing an opportunity to present a different program. In our proposed work, we use a hybrid method for resource provisioning in data centres. This method allocates resources under the working conditions and also accounts for power consumption, governing the allocation process behind the cloud storage.
IRJET- Time and Resource Efficient Task Scheduling in Cloud Computing Environ...IRJET Journal
This document summarizes a research paper that proposes a Task Based Allocation (TBA) algorithm to efficiently schedule tasks in a cloud computing environment. The algorithm aims to minimize makespan (completion time of all tasks) and maximize resource utilization. It first generates an Expected Time to Complete (ETC) matrix that estimates the time each task will take on different virtual machines. It then sorts tasks by length and allocates each task to the VM that minimizes its completion time, updating the VM wait times. The algorithm is evaluated using CloudSim simulation and is shown to reduce makespan, execution time and costs compared to random and first-come, first-served scheduling approaches.
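The TBA steps described above (ETC matrix, sort by length, greedy minimum-completion-time assignment) can be sketched briefly. This is a hedged reconstruction from the summary, not the paper's implementation; the MIPS-based ETC estimate and all sample numbers are assumptions.

```python
# Hypothetical sketch of the Task Based Allocation (TBA) idea: build an
# Expected Time to Complete (ETC) matrix, sort tasks by length (longest
# first), and greedily assign each task to the VM that minimizes its
# completion time given the VM's accumulated wait time. Task lengths are in
# million instructions (MI), VM speeds in MIPS; the numbers are made up.

def tba_schedule(task_lengths, vm_mips):
    etc = [[t / m for m in vm_mips] for t in task_lengths]  # ETC matrix
    ready = [0.0] * len(vm_mips)          # accumulated wait time per VM
    assignment = {}
    # Longest tasks first, so the big jobs claim the fastest slots early.
    for ti in sorted(range(len(task_lengths)),
                     key=lambda i: task_lengths[i], reverse=True):
        vj = min(range(len(vm_mips)), key=lambda j: ready[j] + etc[ti][j])
        assignment[ti] = vj
        ready[vj] += etc[ti][vj]
    makespan = max(ready)                 # completion time of all tasks
    return assignment, makespan

# Three tasks on a fast VM (1000 MIPS) and a slow VM (500 MIPS).
assignment, makespan = tba_schedule([3000, 1000, 2000], [1000.0, 500.0])
```

In this toy run the 3000 MI task takes the fast VM, the 2000 MI task goes to the slow VM (4 s there beats 5 s of queueing), and both VMs finish at the same time, which is exactly the balance the greedy rule aims for.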
This presentation summarizes the costs associated with cloud data centers, including servers (45% of costs), infrastructure (25%), power draw (15%), and networking (15%). It discusses problems like low server utilization (10%) due to uneven application fits and long provisioning timescales. Solutions proposed include increasing utilization by matching applications better to server resources and reducing provisioning times. Reducing power consumption through more efficient hardware could lower infrastructure and power costs. The presentation argues data center networks need properties like location-independent addressing and uniform bandwidth/latency to improve agility. It suggests incentivizing efficient resource usage and filling low-usage periods. Geo-distributing services could improve performance but requires joint optimization of data center and network resources.
A Survey on Virtualization Data Centers For Green Cloud ComputingIJTET Journal
Abstract — Due to trends like cloud computing and green cloud computing, virtualization technologies are gaining increasing importance. The cloud is an atypical model for computing resources, which aims to move the computing framework to the network in order to cut down the costs of software and hardware resources. Nowadays, power is one of the big issues of the IDC (Internet Data Center) and has huge impacts on society, and researchers are seeking solutions to make IDCs reduce power consumption. These IDCs consume large amounts of energy to process cloud services, incur high operational cost, and affect the lifespan of hardware equipment. The field of green computing is also becoming more and more important in a world with a finite number of energy resources and rising demand. The virtual machine (VM) mechanism has been broadly applied in data centers, offering flexibility, reliability, and manageability. This research survey covers virtualized IDCs in the green cloud and contains various key features of the green cloud: cloud computing, data centers, virtualization, data centers with virtualization, and power-aware, thermal-aware, network-aware, resource-aware and migration techniques. In this paper the several methods that are utilized to achieve virtualization in IDCs in green cloud computing are discussed.
Survey on Dynamic Resource Allocation Strategy in Cloud Computing EnvironmentEditor IJCATR
Cloud computing has become quite popular among cloud users by offering a variety of resources. It is an on-demand service because it offers dynamic, flexible resource allocation and guaranteed services to the public in a pay-as-you-use manner. In this paper, we present several dynamic resource allocation techniques and their performance. The paper provides a detailed description of dynamic resource allocation techniques in the cloud for cloud users, and the comparative study gives clear detail about the different techniques.
A Survey on Resource Allocation & Monitoring in Cloud ComputingMohd Hairey
This document provides an overview of a survey on resource allocation and monitoring in cloud computing. It discusses (1) cloud computing and its key characteristics, (2) elements of resource management including allocation, monitoring, discovery and provisioning, (3) existing mechanisms for resource allocation and monitoring, and (4) gaps in current approaches. The survey aims to study resource allocation and monitoring in cloud computing and describe issues and current solutions to help develop a better resource management framework.
This document discusses how virtualization and cloud computing can improve disaster recovery management. It begins by describing traditional disaster recovery approaches like dedicated and shared models that require tradeoffs between cost and speed of recovery. It then explains how cloud computing provides virtualized disaster recovery mechanisms that can offer lower costs, faster recovery times through replication of virtual servers, and improved scalability and flexibility. The document concludes that cloud computing is well-suited for disaster recovery as it allows organizations to scale resources as needed and achieve more reliable continuity of operations at lower costs than traditional approaches.
Green Cloud Computing :Emerging TechnologyIRJET Journal
This document discusses green cloud computing and how cloud infrastructure contributes to high energy consumption. It summarizes that while cloud computing provides cost and scalability benefits, the growing demand on data centers has increased energy usage and carbon emissions. However, the document also explains that cloud computing technologies like dynamic provisioning, multi-tenancy, high server utilization, and efficient data center design can help reduce the environmental impact and enable more sustainable "green" cloud computing through higher efficiency. Future research directions are needed to further optimize cloud resource usage and energy efficiency from a holistic perspective.
Guardian Healthcare Services migrated their IT infrastructure from an outsourced hosted solution to an in-house virtualized infrastructure using VMware. They consolidated 14 remote nursing home facilities across 3 states onto VMware servers and HP hardware in their own datacenter. This allowed them to gain more control over their systems and realize cost savings. The document describes their project planning, infrastructure design, server consolidation, migration process, and benefits realized from the new virtualized environment.
This project implements a Disaster Recovery Manager for a data center that monitors virtual machines and recovers them if they fail. It uses the VMware vSphere API to connect to ESXi hypervisors and vCenter. The manager includes components to ping VMs, take periodic snapshots, and recover failed VMs either by reverting snapshots or migrating VMs to a new host. It detects failures by checking for missed heartbeat pings and excludes VMs intentionally powered off by a user from recovery. The manager was implemented in Java using multithreading and allows for conversion between image formats to support multiple hypervisors.
This document discusses the need for cloud computing. With increasing amounts of data and computing power needs, maintaining large-scale internal infrastructure is difficult and expensive due to hardware failures, maintenance costs, scalability issues, and resource constraints. Cloud computing offers on-demand access to vast computing resources at a lower cost by avoiding the need to invest in internal infrastructure and maintenance.
This technical paper provides the essential technical information about the advanced storage management solution for VMware virtual infrastructure using the VMware vSphere 5.0 Storage DRS feature with the IBM SONAS storage system. To know more about the VMware vSphere, visit http://ibm.co/Lx6hfc.
This project studied virtual machines including their background, scope, and achievements. It developed a process virtual machine for MIPS that uses instruction emulation techniques like interpretation and binary translation, and a table-driven approach for efficient execution. The project covered basic VM concepts and applications, and developed a simpler process VM rather than a full system virtual machine.
This document summarizes a virtual server implementation project. It discusses setting up a virtual server environment to save power compared to running multiple physical servers. It includes calculations showing the power savings and comparisons of running 5 virtual versus physical servers. It also lists the hardware and software needed for the project, including a standalone server, UPS, storage, and virtualization software. A project plan is outlined with tasks like planning, installation, testing, and closing out the project over about 7 days. The total projected budget is $17,590 including costs for hardware, software tools, and internal staff labor.
Server Virtualization With VMware_Project Doc [Latest Updated]Smit Bhilare
This document provides an overview of VMware vSphere and server virtualization. It discusses how vSphere uses virtualization to transform datacenters into scalable infrastructures and provides the foundation for cloud computing. It describes the key components of the vSphere software stack, including the virtualization layer with infrastructure and application services, the management layer with vCenter Server, and the interface layer. It then provides more detail on vCenter Server and its role in providing centralized visibility, management, and extensibility for vSphere environments.
Large scale virtual Machine log collector (Project-Report)Gaurav Bhardwaj
This document describes a project to develop an algorithm for automating virtual machine management. It includes:
- An algorithm that uses DRS and DPM concepts to load balance VMs across hosts and power off idle hosts.
- An architecture with agents that collect VM stats and send to a collector, which stores in a MySQL database. An aggregator rolls up the data.
- The system was implemented using Java agents, Logstash to parse logs, MongoDB as local storage, and MySQL as central storage. Visualizations were created using CanvasJS.
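The DRS/DPM-style policy in the first bullet can be sketched as follows. The thresholds, first-fit placement, and data layout are illustrative assumptions, not the project's actual Java code.

```python
# Illustrative sketch of a DRS/DPM-style policy: migrate VMs off lightly
# loaded hosts so the emptied hosts can be powered down, while keeping every
# remaining host below an upper utilization bound. Thresholds and the simple
# first-fit placement are assumptions, not the project's actual code.

def consolidate(hosts, low=0.2, high=0.8):
    """hosts: {host: [vm_load, ...]} with loads as fractions of capacity.
    Returns (hosts_after, powered_off) after consolidation."""
    powered_off = []
    for name in sorted(hosts, key=lambda h: sum(hosts[h])):
        if not hosts[name] or sum(hosts[name]) >= low:
            continue
        vms = hosts[name]
        plan = []
        # Try to rehome every VM of this underloaded host elsewhere.
        for vm in vms:
            dst = next((o for o in hosts
                        if o != name and o not in powered_off
                        and sum(hosts[o]) + vm <= high), None)
            if dst is None:
                break
            plan.append((vm, dst))
            hosts[dst].append(vm)
        if len(plan) == len(vms):
            hosts[name] = []
            powered_off.append(name)   # DPM step: idle host is powered down
        else:
            for vm, dst in plan:       # roll back a partial, failed plan
                hosts[dst].remove(vm)
    return hosts, powered_off

hosts = {"h1": [0.05, 0.10], "h2": [0.50], "h3": [0.60]}
hosts, off = consolidate(hosts)   # h1's two small VMs rehome; h1 powers off
```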
This document is a training report on cloud deployment submitted by Virendra Singh Ruhela to the Department of Computer Science and Engineering at Government Engineering College Bikaner in partial fulfillment of a Bachelor of Technology degree. It includes an acknowledgment section thanking those who provided guidance and support. The abstract provides a high-level overview of cloud computing, how it offers a solution for managing computing resources, and how it is being used in various fields.
The document is an internship report that describes work done on quality assurance of virtual labs. It discusses manual testing including developing a test plan, test cases, and reports. It also covers automated testing using Python scripts to check links and spelling on pages. The report provides details on testing objectives, requirements, tools used, and the structure of test cases, reports, and defect management. It aims to help deliver high quality, open-source virtual labs.
This document provides an overview of data center design and infrastructure. It discusses the history and evolution of data centers from large computer rooms in early computing to modern facilities. Key aspects covered include facilities layout, mechanical and electrical systems for power, cooling, fire protection and more. Modern data center design principles emphasize modularity, scalability, efficiency and resiliency. The document also examines data center infrastructure management tools and the use of modular or containerized data center solutions.
The document summarizes a technical seminar report submitted by Rajendra Dangwal on virtual private networks. The report includes an introduction to VPNs, descriptions of VPN topology and types, components of VPNs including security protocols, and conclusions on the future of VPN technology. It was submitted in partial fulfillment of a Bachelor of Engineering degree and approved by the computer science department.
The company is planning a new data center build project. Key details of the project include constructing a new 50,000 square foot facility in the company's headquarters location. The projected budget for the data center build is $25 million and the target completion date is 18 months from the start of construction.
Virtualization allows the abstraction and isolation of hardware resources and the sharing of those resources. It enables higher-level functions and services to operate independently of the underlying physical hardware. There are different types of virtualization including hardware, storage, and network virtualization. Virtualization provides benefits such as increased hardware utilization, reduced costs, improved flexibility, and greater security.
Report on cloud computing by prashant guptaPrashant Gupta
The document is a technical seminar report submitted by Prashant Gupta on cloud computing. It includes an abstract, introduction, table of contents, and initial sections on the concept and history of cloud computing. The introduction provides a definition of cloud computing and discusses the shift from centralized to distributed computing models. It highlights the scalability and on-demand access to computing resources that cloud computing provides.
This document is a technical seminar report on cloud computing submitted in partial fulfillment of a Bachelor of Engineering degree. It introduces cloud computing as a concept where computing resources such as servers, storage, databases and networking are provided as standardized services over the Internet. The document discusses the history, characteristics, implementation and economics of cloud computing and provides examples of major companies involved in cloud services.
The document provides an introduction to cloud computing, defining key concepts such as cloud, cloud computing, deployment models, and service models. It explains that cloud computing allows users to access applications and store data over the internet rather than locally on a device. The main deployment models are public, private, community, and hybrid clouds, while the main service models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). IaaS provides fundamental computing resources, PaaS provides development platforms, and SaaS provides software applications to users. The document discusses advantages such as lower costs and universal access, and disadvantages including internet dependence and potential security issues.
Cloud computing offers users worldwide low-cost, on-demand services according to their requirements. In recent years, the rapid growth and service quality of cloud computing have made it an attractive technology for tech companies. However, with the growing number of data center resources, high levels of energy are being consumed, with more carbon emissions in the air. For instance, the estimated electric power consumption of a Google data center is equivalent to the energy requirement of a small city. Also, even if the virtualization of resources in cloud computing datacenters may reduce the number of physical machines and the cost of hardware equipment, it is still constrained by the energy consumption issue. Energy efficiency has become a major concern for today's cloud datacenter researchers, alongside simultaneously improving cloud service quality and reducing operating cost. This paper analyses and discusses the literature on works related to energy efficiency enhancement in cloud computing datacenters. The main objective is the best management of the physical machines that host the virtual ones in the cloud datacenters.
GROUP BASED RESOURCE MANAGEMENT AND PRICING MODEL IN CLOUD COMPUTINGijcsit
Cloud computing utilizes large-scale computing infrastructure that has been radically changing the IT landscape, enabling remote access to computing resources with low service cost, high scalability, availability and accessibility. Serving tasks from multiple users, where the tasks have different characteristics and varying requirements for computing power, may cause under- or over-utilization of resources. Therefore, maintaining such a mega-scale datacenter requires an efficient resource management procedure to increase resource utilization. However, while maintaining efficiency in service provisioning it is necessary to ensure maximization of profit for the cloud providers. Most current research aims at how providers can offer efficient service provisioning to the user and improve system performance; there are comparatively few specific works on resource management that also address the economic side and consider profit maximization for the provider. In this paper we present a model that deals with both efficient resource utilization and pricing of the resources. The joint resource management model combines user assignment, task scheduling and load balancing based on CPU power endorsement. We propose four algorithms, respectively, for user assignment, task scheduling, load balancing and pricing that work on group-based resources, offering reductions in task execution time (56.3%), activated physical machines (41.44%) and provisioning cost (23%). The cost is calculated over a time interval involving the number of customers served in that time and the amount of resources used within it.
Performance Enhancement of Cloud Computing using ClusteringEditor IJMTER
Cloud computing is an emerging infrastructure paradigm that allows efficient maintenance of the cloud with efficient use of servers. Virtualization is a key element in the cloud environment, as it provides distribution of computing resources. This distribution results in cost and energy reduction, making efficient utilization of physical resources. Resource sharing and virtualization thus allow improved performance for demanding scientific computing workloads. Many data centers and physical servers are underutilized, so they are used inefficiently; performance evaluation and its enhancement in virtualized environments such as public and private clouds are therefore challenging issues. The performance of a cloud environment depends on CPU and memory utilization, network, and disk I/O operations. One solution for improving the performance of virtualization with cloud computing is to keep highly available data in cluster form, so that replicas are available at each data center. In the proposed work, the I/O parameters are chosen for increasing performance in this domain. This enhancement can be achieved through clustering and caching technologies, and the use of clustering technology for data centers is proposed in this paper. Performance and scalability can thus be improved by reducing the number of hits to the cloud database.
The document discusses a system that uses virtualization technology to dynamically allocate data center resources based on application demands. It aims to optimize the number of servers in use to support green computing while preventing server overload. The proposed system introduces a concept of "skewness" to measure uneven resource utilization across servers and develops heuristics to minimize skewness and improve overall utilization while avoiding overload and saving energy.
Today, cloud computing is used in a wide range of domains. Using cloud computing, a user can utilize services and a pool of resources through the internet. The cloud computing platform guarantees subscribers that it will live up to the service level agreement (SLA) in providing resources as a service and as per their needs. However, it is essential that the provider be able to manage the resources effectively. One of the important roles of the cloud computing platform is to balance the load among different servers in order to avoid overloading any host and to improve resource utilization.
Cloud computing is defined as a distributed system containing a collection of computing and communication resources located in distributed data centers which are shared by several end users. It has been widely adopted by industry, though there are many open issues such as load balancing, virtual machine migration, server consolidation, and energy management.
A hybrid algorithm to reduce energy consumption management in cloud data centersIJECEIAES
There are several physical data centers in the cloud environment, each with hundreds or thousands of computers. Virtualization is the key technology that makes cloud computing feasible. It isolates virtual machines so that each of these so-called virtualized machines can be configured on any of a number of hosts according to the type of user application, and the resources allocated to a virtual machine can be altered dynamically. Different methods of energy saving in data centers can be divided into three general categories: 1) methods based on load balancing of resources; 2) using hardware facilities for scheduling; 3) considering thermal characteristics of the environment. This paper focuses on load balancing methods, as they act dynamically because of their dependence on the current behavior of the system. After taking a detailed look at previous methods, we provide a hybrid method which enables us to save energy by finding a suitable configuration for virtual machine placement and by considering the special features of virtual environments for scheduling and balancing dynamic loads via live migration.
This document discusses various techniques for resource provisioning in cloud computing. It describes techniques like using a microeconomic-inspired approach to determine the optimal number of virtual machines (VMs) to allocate to each user based on their financial capacity and workload. It also discusses using a genetic algorithm to compute the optimized mapping of VMs to physical nodes while adjusting VM resource capacities. Additionally, it proposes a reconfiguration algorithm to transition the cloud system from its current state to the optimized state computed by the genetic algorithm. The document provides an overview of these and other techniques like cost-aware provisioning and virtual server provisioning algorithms.
Affinity based virtual machine migration (AVM) approach for effective placeme...IRJET Journal
This document discusses an affinity-based virtual machine migration (AVM) approach to optimize VM placement in cloud environments. The AVM approach uses a genetic algorithm to group VMs based on affinity metrics like traffic rate and communication cost, with the goal of reducing communication cost. It introduces non-randomness to the initial population to reduce search space and time. The algorithm considers network topology and aims to minimize the number of networking devices between VMs to reduce communication cost and traffic rate. Simulation results show the AVM approach reduces communication cost by 15% and execution time compared to existing genetic algorithms.
This document discusses cloud computing and its key concepts. It defines cloud computing as both the software applications delivered over the internet and the hardware/software in data centers that provide those services. Cloud computing allows developers to avoid over-provisioning and under-provisioning of resources. Public clouds are available to the general public, while private clouds are for internal data centers not available publicly. Cloud computing provides computing resources on demand in a pay-as-you-go model.
This document discusses cloud computing and its key concepts. It defines cloud computing as both the software applications delivered over the internet and the hardware/software in datacenters that provide those services. Cloud computing allows developers to avoid over-provisioning and under-provisioning of resources. Public clouds are available to the general public, while private clouds are for internal datacenter use only. Cloud computing provides computing resources on demand in a pay-as-you-go model.
Cloud Computing refers to both the apps delivered as services over the Internet and the hardware and system software in the datacenter that provide those ...
The document discusses cloud computing and its advantages. It defines cloud computing as software and hardware services delivered over the internet. There are different types of clouds including public clouds that are available to the general public and private clouds that are for internal use only. Large-scale data centers enable cloud computing by providing vast computing resources at low costs through economies of scale. Cloud computing allows users to access resources on demand without large upfront costs and pay based on usage providing flexibility. This utility model of computing is made possible through large-scale virtualization and statistical multiplexing of resources.
Optimizing the placement of cloud data center in virtualized environmentIJECEIAES
In cloud mobile networks, a precise assessment of the position of the virtualization-powered cloud center would improve the capacity limit, latency and energy efficiency (EEf). This paper uses Monte Carlo oriented particle swarm optimization (PSO) and a genetic algorithm (GA) to, first, obtain the optimal number of virtual machines (VMs) that maximizes the EEf of the mobile cloud center and, second, optimize the position of the mobile data center. To support this examination, a power evaluation framework is proposed to model the power utilization of a virtualized server while hosting a number of VMs. In addition, the total power consumption of the network is examined, including the data center and radio units (RUs). This evaluation is based on linear modelling of the network parameters, such as resource blocks, number of VMs, transmitted and received powers, and overhead power consumption. Finally, the EEf is constrained by several quality of service (QoS) metrics, including the number of resource blocks, total latency and the minimum user data rate.
G-SLAM:OPTIMIZING ENERGY EFFIIENCY IN CLOUDAlfiya Mahmood
G-SLAM is a framework that optimizes energy efficiency in clouds through software, hardware, and network techniques. It proposes using a Green Service Level Agreement (GSLA) to maintain performance while optimizing for energy efficiency. The software approach reduces active servers through techniques like Ant Colony Optimization and Power Aware Best Fit Decreasing allocation. Hardware techniques apply Dynamic Voltage Frequency Scaling and Dynamic Voltage Scaling to servers. Network techniques aim to reduce traffic and optimize routing through algorithms like Data Center Energy Efficient Network Aware Scheduling and Energy and Topology aware VM Migration.
Energy efficient resource allocation in cloud computingDivaynshu Totla
This document discusses energy efficiency in cloud computing. It first provides background on the rising energy consumption of data centers due to increased cloud usage. It then discusses various approaches for improving energy efficiency in clouds, including virtualization and energy-aware scheduling algorithms like round-robin and first-come first-serve. The document proposes an energy-aware VM scheduler that uses these algorithms to minimize server usage and reduce energy consumption while meeting performance requirements. Overall the document analyzes the problem of high cloud energy usage and proposes a scheduler to improve efficiency through virtualization and algorithmic approaches.
A SURVEY ON DYNAMIC ENERGY MANAGEMENT AT VIRTUALIZATION LEVEL IN CLOUD DATA C...cscpconf
Data centers have become indispensable infrastructure for data storage and facilitating the development of diversified network services and applications offered by the cloud. Rapid
development of these applications and services imposes various resource demands that results in increased energy consumption. This necessitates the development of efficient energy management techniques in data center not only for operational cost but also to reduce the amount of heat released from storage devices. Virtualization is a powerful tool for energy
management that achieves efficient utilization of data center resources. Though, energy management at data centers can be static or dynamic, virtualization level energy management
techniques contributes more energy conservation than hardware level. This paper surveys various issues related to dynamic energy management at virtualization level in cloud data
centers.
A survey on dynamic energy management at virtualization level in cloud data c...csandit
Data centers have become indispensable infrastructure for data storage and facilitating the
development of diversified network services and applications offered by the cloud. Rapid
development of these applications and services imposes various resource demands that results
in increased energy consumption. This necessitates the development of efficient energy
management techniques in data center not only for operational cost but also to reduce the
amount of heat released from storage devices. Virtualization is a powerful tool for energy
management that achieves efficient utilization of data center resources. Though, energy
management at data centers can be static or dynamic, virtualization level energy management
techniques contributes more energy conservation than hardware level. This paper surveys
various issues related to dynamic energy management at virtualization level in cloud data
centers.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
journal publishing, how to publish research paper, Call For research paper, international journal, publishing a paper, IJERD, journal of science and technology, how to get a research paper published, publishing a paper, publishing of journal, publishing of research paper, reserach and review articles, IJERD Journal, How to publish your research paper, publish research paper, open access engineering journal, Engineering journal, Mathemetics journal, Physics journal, Chemistry journal, Computer Engineering, Computer Science journal, how to submit your paper, peer reviw journal, indexed journal, reserach and review articles, engineering journal, www.ijerd.com, research journals,
yahoo journals, bing journals, International Journal of Engineering Research and Development, google journals, hard copy of journal
Summer Intern Report
Topic: Virtual machine placement with optimised cost
Submitted By:
Shantanu Bharadwaj
Dept. of Comp. Science & Engg.
IIT Guwahati
Under the guidance of:
Dr. T. Venkatesh
Dept. of Comp. Science & Engg.
IIT Guwahati
Abstract
Almost all modern online services run on geo-distributed data centers, and
fault tolerance is one of the primary requirements that decides the revenue
of the service provider. A growing number of Internet services, such as web
services, business transactions, and cloud computing services, are being
deployed over geo-distributed data centers. Geo-distribution is important
for latency, availability, and increasingly also for efficiency. Due to
rapid growth in the volume of demand served, large numbers of
geo-distributed data centers today can benefit from the same multi-megawatt
economies of scale that were initially limited to a few centralized ones.
As a result, modern cloud infrastructures are already highly
geo-distributed. Recent experience has shown that the failure of a data
center (at a site) is inevitable. In order to mask the failure, spare
compute capacity needs to be provisioned across the distributed data
center, which leads to additional cost. While the existing literature
addresses the capacity provisioning problem only to minimize the number of
servers, this report argues that the operating cost needs to be considered
as well. Since both the operating cost and the client demand vary across
space and time, we consider cost-aware capacity provisioning to account for
their impact on the operating cost of data centers. We propose an
optimization framework to minimize the Total Cost of Ownership (TCO) of the
cloud provider while designing fault-tolerant geo-distributed data centers.
The second part of this report deals with the problem of VM placement.
When a virtual machine is deployed on a host, the process of selecting the
most suitable host for the virtual machine is known as virtual machine
placement, or simply placement. During placement, hosts are rated based on
the virtual machine's hardware and resource requirements and the
anticipated usage of resources, and the administrator selects a host for
the virtual machine based on these ratings. The operating cost of VM
placement has two important parameters: electricity cost and communication
cost. In a cloud environment, execution requires proper resource
management and scheduling due to the high process-to-resource ratio, and
resource scheduling is a complicated task because there are many
alternative computers with varying capacities. The goal of this project is
to propose a model for a job-oriented resource scheduling algorithm in a
cloud computing environment. This report proposes a cost-aware heuristic
approach for optimal VM placement among a given number of physical
machines in a data center using resource scheduling techniques; the idea
can be extended to a group of data centers. The results show that the
operating cost has considerable potential for improvement via optimal VM
placement.
Introduction
A data center is a facility that houses computer systems and associated
components such as telecommunications and storage systems. It is a
centralized repository, either physical or virtual, for the storage,
management, and dissemination of data and information. The basic
components of a data center are server, network, and storage hardware;
other components include power, cooling, fire suppression, security
systems, and network connectivity.
A geo-distributed data center is a collection of small, geographically
distributed, fully automated data centers. Geo-distributed data centers
are popular for the following reasons: first, latency to the clients is
reduced because their requests are served by closer data centers; second,
they are more effective in protecting data from catastrophes. In addition
to these advantages over a single data center, geo-distributed data
centers are gaining popularity because one data center alone is too small.
In a general model of a geo-distributed data center, two types of
processes are handled:
Clients, who wish to execute some operations or run some protocols.
Servers, which help implement operations, like storing data.
Business-critical applications running in geo-distributed data centers
(henceforth simply referred to as data centers) demand high availability
because of the huge loss of revenue, cost of idle employees, and loss of
productivity associated with downtime. In addition, outages lead to
reduced customer satisfaction, damaged brand perception, and regulatory
problems. Instances of a data center failure at a site have been reported
by many cloud service providers, including Amazon, Facebook, and Google.
Data center unavailability can arise from causes ranging from software
bugs, router misconfiguration in the Internet, and human errors due to
poor supporting documentation and training, to man-made or natural
disasters. Given these experiences from the industry, it is evident that
the failure of a data center is inevitable. Designing a fault-tolerant
geo-distributed data center usually involves spare capacity provisioning
(allocation of additional servers to mask the failure) across different
data center sites while satisfying a set of constraints based on
electricity prices, infrastructure cost, operating cost, demand at each
location, and the delay faced by customers. Henceforth, in this report,
the failure of a single data center is the only kind of failure we
consider.
Cloud computing builds on various recent advancements in virtualization,
Grid computing, Web computing, Utility computing, and related
technologies. It provides both platforms and applications on demand
through the Internet or an intranet. Cloud computing is a kind of
Internet-based computing that provides shared processing resources and
data to computers and other devices on demand. It is a model for enabling
ubiquitous, on-demand access to a shared pool of configurable computing
resources (e.g., networks, servers, storage, applications, and services)
that can be rapidly provisioned and released with minimal management
effort.
Resource scheduling plays an important role in Cloud data centers. One of
the challenging scheduling problems in Cloud data centers is the
consideration of the allocation of VMs. A data center is composed of a set
of hosts (PMs), which are responsible for managing VMs. A host is a
component that represents a physical computing node in a Cloud. It is
assigned a preconfigured processing capability (e.g., that expressed in
Million Instructions Per Second or GHz), memory, storage, and a
scheduling policy for allocating VMs. A number of hosts can also be
interconnected to form a cluster or a data center. In this report, we
introduce a framework for cost-efficient resource scheduling of real-time
VMs, considering only the computing resources.
Cost-aware Capacity Provisioning
Spare capacity provisioning across a geo-distributed data center to mask
the failure of a single data center can be illustrated by a simple
example. Consider a distributed data center with 5 sites, each with a
compute capacity of 20 units. To mask the failure of any one data center
at a time, we require a spare capacity of 20/4 = 5 units at each of the
remaining four data centers. Since any of the 5 sites may fail, this spare
must be provisioned at every site, so the total spare capacity required is
5*5 = 25 units, and the additional cost of building a fault-tolerant data
center that can mask a single failure is 25%. This naive approach
distributes the spare capacity uniformly. However, not all data centers
have the same number of servers, and different locations are characterized
by variations in electricity cost, bandwidth cost, carbon tax, and user
demand over time. Therefore, the main challenge in designing a
fault-tolerant distributed data center is to provision spare capacity so
that, along with the capital cost (the cost of spare servers), the
operating cost is minimized while the client latency requirement is
satisfied even during a period of failure. The current literature proposes
an optimization framework with the objective of simply minimizing the
number of servers to meet the delay and availability constraints, but the
operating cost across different geographical locations also needs to be
minimized.
Considering the cost of a server to be $2000 and its lifetime to be 4
years, we calculate the energy-to-acquisition cost (EAC), defined as the
ratio of the cost of running a server for its lifetime to its acquisition
cost:

Power cost = 4 years * (8760 hours/year) * (electricity cost) * (server
power) * PUE
EAC = (power cost / server cost) * 100
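The EAC formula above is straightforward to compute; a sketch with
illustrative figures (the electricity price, server power, and PUE below
are assumed values for the example, not taken from the report):

```python
def energy_to_acquisition_cost(server_cost, lifetime_years,
                               electricity_cost_per_kwh,
                               server_power_kw, pue):
    """EAC: lifetime power cost as a percentage of the server's
    acquisition cost (lower is better)."""
    hours = lifetime_years * 8760  # 8760 hours per year
    power_cost = hours * electricity_cost_per_kwh * server_power_kw * pue
    return power_cost / server_cost * 100

# Assumed figures: $0.07/kWh, a 200 W server, PUE of 1.5, together with
# the $2000 server cost and 4-year lifetime stated in the report.
eac = energy_to_acquisition_cost(2000, 4, 0.07, 0.2, 1.5)
print(f"EAC = {eac:.1f}%")  # ~36.8%
```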
PUE, or Power Usage Effectiveness, is the ratio of the total amount of
energy used by a data center facility to the energy delivered to the
computing equipment. It is a measure of how efficiently a data center uses
energy; specifically, how much energy goes to the computing equipment in
contrast to cooling and other overhead.

PUE = Total Facility Energy / IT Equipment Energy

A higher EAC indicates that the power and cooling cost exceeds the server
acquisition cost; therefore, the lower the EAC, the more feasible the
system.
This report formulates a mixed integer linear program (MILP) framework
for cost-aware capacity provisioning in fault tolerant geo-distributed data
centers to mask single data center failures. Along with cost of additional
servers, we also consider the variation in electricity prices across space
and time in determining the optimal capacity that
minimizes the operating cost.
Optimization Model
Assumptions:
A mechanism for failure detection and request re-routing is already
present.
The failure of only a single data center (a site) is considered at a time.
Notations used:
Delay: Let Dmax be the maximum latency allowed for a client based on the
service level agreements with the cloud provider. Let Dsu be the
propagation delay between user location u and data center location s. The
data center must be designed such that, even after the failure of a site,
the latency continues to be lower than Dmax.
Cost: Let S and U denote the sets of data center and client locations,
respectively. The acquisition cost of a server is denoted by α. Let σs
denote the cost of access bandwidth.
Server Provisioning: Let ms denote the number of servers required at data
center s. We define Mmin and Mmax to be the minimum and maximum number of
servers that can be provisioned at any data center.
Power Consumption: Let Pidle be the average power drawn in the idle
condition and Ppeak be the power consumed when a server is running at peak
utilization. The total power consumed at a data center location s ∈ S at
hour h ∈ H is a function of Es, the PUE of data center s, and the average
server utilization. The TCO, which includes the server acquisition cost
and the operating cost, is defined subject to a set of constraints. The
objective function is the sum of the total cost incurred by all the
individual data centers over a day, and the goal is to minimize this
objective function, that is, the total cost of ownership (TCO).
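One standard way to write the hourly power draw and the TCO from these
definitions is the following sketch; the exact displayed equations of the
report are not reproduced here, and the utilization term u_{s,h}, the
price symbol e_{s,h}, and the hourly sum are assumptions rather than the
report's confirmed model:

```latex
% Hourly power at site s: m_s servers, PUE E_s, utilization u_{s,h}.
P_{s,h} = m_s \, E_s \left( P_{idle} + (P_{peak} - P_{idle})\, u_{s,h} \right)

% TCO: server acquisition cost plus daily operating (energy) cost,
% where e_{s,h} is the electricity price at site s in hour h.
TCO = \sum_{s \in S} \left( \alpha \, m_s + \sum_{h \in H} e_{s,h} \, P_{s,h} \right)
```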
/*more stuff about code to be inserted*/
VM placement in distributed data centers:
In order to efficiently allocate computing resources, scheduling becomes a
very complicated task in a cloud computing environment where many
alternative computers with varying capacities are available. An efficient
task scheduling mechanism can meet users' requirements and improve
resource utilization. Cloud service providers often receive many computing
requests with different requirements and preferences from users
simultaneously. Some tasks need to be fulfilled at a lower cost with fewer
computing resources, while other tasks require higher computing ability
and take more bandwidth and computing resources.
In this report, only computing resources are considered. As described in
the introduction, a data center is composed of a set of hosts (Physical
Machines), which are responsible for managing VMs during their life
cycles; each host represents a physical computing node with a
preconfigured processing capability, memory, storage, and a scheduling
policy for allocating VMs, and a number of hosts can be interconnected to
form a cluster or a data center.
Data centers (possibly distributed across multiple geographical sites) are
the places that accommodate computing equipment and are responsible for
providing power and air-conditioning maintenance for the computing
devices. A data center could be a single building or could be located
within several buildings. Dynamically distributing and managing virtual
and shared resources in this new application environment poses new
challenges for Cloud computing data centers: efficient scheduling
strategies and algorithms must be designed to adapt to different business
requirements and to satisfy different business goals.
Key technologies of resource scheduling include:
Scheduling strategies: The top level of resource scheduling management,
which needs to be defined by data center owners and managers. It mainly
determines the resource scheduling goals and ensures they are satisfied.
Optimization goals: The scheduling center needs to identify different
objective functions to determine the pros and cons of different types of
scheduling. Common objective functions include minimum cost, maximum
profit, and maximum resource utilization.
Scheduling algorithms: Good scheduling algorithms need to produce optimal
results according to the objective functions.
GreenCloud architecture:

[Figure: Proposed GreenCloud architecture]
The figure describes a layered architecture for GreenCloud. There is a web
portal at the top layer for the user to select resources and send
requests: essentially, it presents a uniform view of the few types of VMs
that are preconfigured for users to choose from. Once user requests are
initiated, they go to the next level, CloudSched, which is responsible for
choosing appropriate data centers and PMs based on user requests. This
layer can manage a large number of Cloud data centers consisting of
thousands of PMs, and different scheduling algorithms can be applied in
different data centers based on customer characteristics. At the lowest
layer are the Cloud resources, PMs and VMs, each consisting of a certain
amount of CPU, memory, storage, and bandwidth. At the Cloud resource
layer, virtual management is mainly responsible for keeping track of all
VMs in the system, including their status, required capacities, hosts,
arrival times, and departure times.
This report proposes a queuing model in which a client requests virtual
machines for a predefined duration. Network resources are not considered
at all: jobs are assumed not to communicate with each other or to transmit
or receive data, and no preference is expressed as to where the VMs are to
be scheduled. An algorithm is proposed to optimally distribute VMs in
order to minimize the distance between user VMs in a data center grid; the
only network constraint used is the Euclidean distance between data
centers, and no specific connection requests or user differentiation is
used. A further algorithm is proposed to schedule VMs within one data
center to minimize communication cost. No network topology is used;
rather, only the monetary cost of transmitting data is considered for VM
requests.
Real-time VM request model:
The Cloud computing environment is a suitable solution for real-time VM
service because it leverages virtualization. When users request execution
of their real-time VMs in a Cloud data center, appropriate VMs are
allocated.

A real-time VM request can be represented as an interval vector:
VMRequestID(VM typeID, start time, finish time, requested capacity).
For example, vm1(1, 0, 6, 0.25) shows that VM request vm1 asks for a VM of
Type 1 (corresponding to the integer 1) with a start time of 0, a finish
time of 6, and 25% of the total capacity of a Type 1 PM. Request formats
can vary according to the definitions set by data center owners and
managers.

In this report, the request format is as follows:
VMRequestID(VM typeID, start time, finish time, requested CPU capacity,
requested storage capacity).
For example, vm1(1, 0, 6, 2, 1) shows that VM request vm1 asks for a VM of
Type 1 with a start time of 0 and a finish time of 6, and the request
needs 2 units of CPU and 1 unit of storage.
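The request format above maps directly onto a small data structure; a
minimal sketch (the field and class names below are my own, not taken from
the report):

```python
from dataclasses import dataclass

@dataclass
class VMRequest:
    """A real-time VM request:
    VMRequestID(type, start, finish, cpu, storage)."""
    request_id: str
    vm_type: int
    start_time: int
    finish_time: int
    cpu_capacity: int
    storage_capacity: int

    @property
    def processing_time(self) -> int:
        # Duration the VM must be hosted; used later to sort requests.
        return self.finish_time - self.start_time

# The vm1(1, 0, 6, 2, 1) example from the report:
vm1 = VMRequest("vm1", 1, 0, 6, 2, 1)
print(vm1.processing_time)  # 6
```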
Assumptions in the proposed model:
All tasks are independent. There are no precedence constraints other than
those implied by the start and finish times.
Each PM is always available (i.e., each machine is continuously available
in [0, ∞)).
Each PM has an operating cost and a communication cost associated with it.
Each VM request has an electricity cost and a communication overhead
associated with it.
Each PM is linked with every other PM in the system.
Each communication link is unidirectional.
The capacities of VMs and PMs are strongly divisible. If (P, V) denote the
lists of capacities of PMs and VMs respectively, they are strongly
divisible if every item in list P exactly divides every item in list V;
that is, the capacity demanded by a VM request is a multiple of the
capacities of the PMs.
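The strong-divisibility assumption can be checked in one line; a small
sketch:

```python
def strongly_divisible(pm_capacities, vm_capacities):
    """True if every PM capacity exactly divides every VM capacity,
    i.e. each VM demand is a multiple of each PM capacity."""
    return all(v % p == 0 for p in pm_capacities for v in vm_capacities)

# Capacities from the report's example: PMs offering 2 CPU units,
# VM requests demanding 4 and 2 CPU units.
print(strongly_divisible([2], [4, 2]))  # True
print(strongly_divisible([2], [3]))     # False
```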
Proposed Algorithm:
The heuristic developed is based on the first-fit-decreasing algorithm
along with some cost optimisation techniques. The VM requests are sorted
in decreasing order of their processing times. Each physical machine has a
different operating cost at different hours, each communication link has a
communication overhead associated with it, and each VM request has an
electricity cost and a communication cost. The algorithm compares the
requested capacity with the capacity of the physical machines, finds the
physical machine with the lowest cost, and assigns it to the VM request.
If the capacity is not met, the communication costs for sending the
remainder of the request to other physical machines are considered, and
the physical machine with the minimum cost is found and assigned to the VM
request again.
The pseudo-code for the algorithm is as follows:
Input:
Number of VM requests
VM requests (indicated by their VM ID, start time, finish time, CPU
capacity, storage capacity)
Number of PMs
PMs (PM ID, CPU capacity, storage capacity)
Number of hours the whole system will run
Operating cost of each PM at every hour
Communication cost of each link
Output:
PM ID of the physical machine assigned to each VM request, along with the
cost incurred.
Pseudo Code:
n <- number of VM requests
m <- number of PMs
h <- number of hours
eij <- electricity cost of physical machine i at hour j
dij <- communication overhead of the link between PM i and PM j
Rei <- electricity cost of VM request i
Rbi <- communication cost of VM request i
ei <- average electricity cost of PM i
bi <- average communication cost of PM i
vij <- cost of allocating PM j to request i
v_min <- minimum cost
machinei <- physical machine selected for VM request i

for i = 1 to n do
    processing timei = finish timei - start timei
sort requests in decreasing order of processing time

for i = 1 to m do
    ei = find_average(ei1 ... eih)

for i = 1 to m do
    for j = 1 to m do
        if i = j then dij = 0
    bi = find_average(di1 ... dim)

for i = 1 to n do
    for j = 1 to m do
        vij = (ej * Rei) / (ej + bj)

for i = 1 to n do
    v_min = find_minimum(vi1 ... vim)
    machinei = the j attaining v_min
    if capacity_request <= capacity_machine then
        allocate machine j to request i
    else
        capacity_remaining = capacity_request - capacity_machine
        for j = 1 to m do
            vij = ((ej * Rei) / (ej + bj)) + ((bj * Rbi) / (ej + bj))
                  + ((ej+1 * Rei) / (ej+1 + bj+1))

This process is repeated until the requested capacity of the VM request is
met.
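The pseudo-code leaves several details open (how averages are taken, how a
split request is charged, and how ties are broken); the sketch below is
one possible reading in Python, not the report's exact implementation. The
splitting rule in the while loop, which folds the spill cost into two
terms and merely avoids reusing the machine just chosen, is my assumption.

```python
def place_vms(requests, pms, elec_cost, comm_overhead):
    """Cost-aware first-fit-decreasing placement sketch.

    requests: list of dicts {id, start, finish, cpu, Re, Rb}
    pms: list of dicts {id, cpu}; assumes at least two PMs
    elec_cost[i][j]: electricity cost of PM i at hour j
    comm_overhead[i][j]: overhead of the link between PM i and PM j
    """
    m = len(pms)
    # Average electricity cost e_i and average link overhead b_i per PM.
    e = [sum(row) / len(row) for row in elec_cost]
    b = [sum(comm_overhead[i][j] for j in range(m) if j != i) / (m - 1)
         for i in range(m)]

    # First fit decreasing: longest processing time first.
    order = sorted(requests, key=lambda r: r["finish"] - r["start"],
                   reverse=True)

    placement = {}
    for req in order:
        # Primary machine: lowest electricity-weighted cost.
        costs = [(e[j] * req["Re"]) / (e[j] + b[j]) for j in range(m)]
        j = min(range(m), key=lambda k: costs[k])
        chosen = [pms[j]["id"]]
        total_cost = costs[j]
        remaining = req["cpu"] - pms[j]["cpu"]
        # If capacity is not met, spill to further machines, now also
        # paying the communication term (assumed splitting rule).
        while remaining > 0:
            spill = [(e[k] * req["Re"] + b[k] * req["Rb"]) / (e[k] + b[k])
                     for k in range(m)]
            spill[j] = float("inf")  # do not reuse the machine just chosen
            j = min(range(m), key=lambda k: spill[k])
            chosen.append(pms[j]["id"])
            total_cost += spill[j]
            remaining -= pms[j]["cpu"]
        placement[req["id"]] = (chosen, total_cost)
    return placement

# Tiny worked example, loosely based on the report's tables:
pms = [{"id": "PM1", "cpu": 2}, {"id": "PM2", "cpu": 2}]
elec = [[6, 6], [5, 5]]   # PM1 averages 6, PM2 averages 5
links = [[0, 1], [1, 0]]
reqs = [{"id": "vm1", "start": 0, "finish": 2, "cpu": 4,
         "Re": 100, "Rb": 10}]
print(place_vms(reqs, pms, elec, links))
```

In this run the cheaper PM2 is chosen first, and the remaining 2 CPU units
spill to PM1 once the communication term is included.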
Results:
A small example has been taken to show the working of the algorithm. In
this example, the data center has 3 PMs with given capacities, i.e., 2
units of CPU and 1 unit of storage each. 3 VM requests are considered,
with varying start times, end times, and capacities. The goal is to
allocate PMs to the requests such that the cost is minimised. The costs
considered are the average operating cost of the PMs, the average
communication cost of the PMs, the electricity cost of the VM requests,
and the communication overhead of the VM requests. The output of the
implemented algorithm is as follows:

[Figure: output of the algorithm on the example]

In this algorithm, if there are three nodes and three VMs are to be
scheduled, each node is allocated one VM, provided all the nodes have
enough available resources to run the VMs. The main advantage of this
algorithm is that it utilizes all the resources in a balanced order.
Comparison with Traditional method:
The simplest approach to this problem does no sorting of the VM request
IDs. It is the traditional approach to the problem, based on Round Robin
scheduling: the requests are served in an FCFS manner, and the PM with the
lowest average operating cost is assigned to each request. If the capacity
is not met, the PM with the next lowest average operating cost is
assigned, and the process continues. No communication overhead is
introduced.
Taking the same values from the example above, this is the order and cost
of PMs being assigned to VM requests:

Request ID | Start time | End time | Capacity (CPU) | Capacity (Storage) | Re (electricity cost)
1          | 0          | 2        | 4              | 2                  | 100
2          | 0          | 6        | 2              | 1                  | 50
3          | 0          | 4        | 2              | 1                  | 50

Average operating costs: PM1 = 6, PM2 = 5, PM3 = 7.
Naturally, PM2 will be the first choice of every request, as its average
operating cost is the least.
Cost of running VM1 on PM2 is 500; the capacity is not met.
PM1 has the next lowest average operating cost: the cost of running VM1 on
PM1 is 600.
Cost of running VM2 on PM2 is 250.
Cost of running VM3 on PM2 is 250.
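The baseline arithmetic above can be reproduced in a few lines; a sketch
in which, matching the worked example, the cost of running a request on a
PM is taken as Re times the PM's average operating cost, and per-PM
capacity consumed across concurrent requests is not tracked:

```python
def round_robin_cost(requests, pm_costs):
    """Traditional baseline: serve requests FCFS, always picking the PM
    with the lowest average operating cost; if the capacity is not met,
    spill to the next-cheapest PM."""
    order = sorted(pm_costs.items(), key=lambda kv: kv[1])  # cheapest first
    totals = {}
    for req in requests:
        remaining, cost = req["cpu"], 0
        for pm, op_cost in order:
            if remaining <= 0:
                break
            cost += req["Re"] * op_cost
            remaining -= 2  # each PM in the example offers 2 CPU units
        totals[req["id"]] = cost
    return totals

pms = {"PM1": 6, "PM2": 5, "PM3": 7}
reqs = [{"id": 1, "cpu": 4, "Re": 100},
        {"id": 2, "cpu": 2, "Re": 50},
        {"id": 3, "cpu": 2, "Re": 50}]
print(round_robin_cost(reqs, pms))  # {1: 1100, 2: 250, 3: 250}
```

Request 1 pays 500 on PM2 plus 600 on PM1, reproducing the figures above.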
[Figure: bar chart of cost per request (scale 0 to 1200) comparing Our
Heuristic with Round Robin for Requests 1, 2, and 3.]
Advantages of the Resource Scheduling Algorithm:
Easy access to resources and better resource utilization.
In this report, the implementation of the optimized algorithm is compared
with the traditional task scheduling algorithm. The main goal of the
optimized algorithm is to reduce the cost compared to the traditional
ones.
The algorithm improves on the traditional cost-based scheduling algorithm
by making an appropriate mapping of tasks to resources.
The algorithm computes the priority of tasks on the basis of different
attributes of the tasks and then sorts the tasks onto a service that can
complete them.
Conclusion:
This report argued the need for cost-aware capacity provisioning for
geo-distributed data centers that can tolerate the failure of a single
data center. We proposed an MILP optimization model that reduces the total
cost of ownership (TCO), which includes capital and operating cost
factors, while provisioning servers across different locations with
varying running-cost factors.

This report also showed that scheduling is one of the most important tasks
in a cloud computing environment, and that priority is an important issue
in job scheduling. The heuristic developed using resource scheduling
techniques is thus helpful in minimising the cost incurred during VM
placement.