Probabilistic consolidation of virtual machines in self organizing cloud data... - WMLab, NCU
The document describes a probabilistic approach called ecoCloud for consolidating virtual machines (VMs) across physical servers in a cloud data center. EcoCloud uses two probabilistic procedures - assignment and migration - to autonomously distribute VMs among servers based on local resource utilization information, with the goal of improving utilization levels, reducing power consumption, and avoiding SLA violations. The assignment procedure determines whether an idle server should accept a new VM or not, while the migration procedure determines whether an underutilized VM should migrate to another server for better consolidation. Both procedures are based on simple Bernoulli trials using resource utilization-dependent probability functions.
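The assignment step described above can be sketched as a single Bernoulli trial. The probability shape below (zero for an idle server is not the point here; it rises with utilization to favor consolidation, then drops to zero near saturation to protect SLAs) is purely illustrative, not ecoCloud's exact function:

```python
import random

def accept_vm(cpu_utilization, threshold=0.9, shape=3.0):
    """Bernoulli trial for the assignment procedure: accept a new VM with a
    probability that grows with local utilization but vanishes near saturation.
    The probability function here is an illustrative stand-in, not the one
    defined in the ecoCloud paper."""
    if cpu_utilization >= threshold:
        return False
    # Normalize so the function's maximum maps to probability 1.0.
    u_max = threshold * shape / (shape + 1.0)
    peak = u_max ** shape * (threshold - u_max)
    p = (cpu_utilization ** shape) * (threshold - cpu_utilization) / peak
    return random.random() < p
```

A server near the sweet spot (here `u = 0.675`) almost always accepts, while idle and nearly saturated servers almost always refuse, which is what drives workload toward already well-utilized machines.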
Load Balancing In Cloud Computing (new ppt) - Utshab Saha
The document discusses various load balancing algorithms for cloud computing including round robin, first come first serve (FCFS), and simulated annealing. It provides implementations of each algorithm in CloudSim and compares the results. Round robin and FCFS showed similar overall response times, data center processing times, and maximum/minimum values. Simulated annealing had slightly lower average overall response time. The document proposes using a genetic algorithm for host-side optimization to select the best host for virtual machine requests.
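The round-robin policy compared in the deck amounts to dispatching requests to hosts in fixed rotation. A minimal sketch (names are made up; the deck's CloudSim implementation is not reproduced here):

```python
from itertools import cycle

def round_robin_dispatch(requests, hosts):
    """Assign each incoming request to the next host in a fixed rotation,
    ignoring host load -- the defining property of round robin."""
    rotation = cycle(hosts)
    return {req: next(rotation) for req in requests}
```

For example, `round_robin_dispatch(["r1", "r2", "r3"], ["h1", "h2"])` sends r1 to h1, r2 to h2, and wraps around to send r3 to h1.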
Load balancing in cloud - in a semi-distributed system - Achal Gupta
Load Balancing in Cloud
What load balancing in the cloud means in a semi-distributed system, and why it is better than a centralized system or a fully distributed system
Faraz Ahmad and T.N. Vijaykumar, Joint Optimization of Idle and Cooling Power in Data Centers While Maintaining Response Time
presented in Green Computing class, 05NOV2012
Virtual Machine Migration and Allocation in Cloud Computing: A Review - ijtsrd
Cloud computing is an emerging technology that maintains computational resources in large data centers accessed through the internet, rather than on local computers. VM migration provides the capability for load balancing, system maintenance, and more; virtualization technology is what gives cloud computing its power. The process of moving running applications or VMs from one physical machine to another is known as VM migration: the processor state, storage, memory, and network connections are moved from one host to another. Migration techniques can be divided into two categories, the pre-copy and the post-copy approach. Two important performance metrics are downtime and total migration time, which users care about most because they capture service degradation and the time during which the service is unavailable. This paper focuses on the analysis of live VM migration techniques in cloud computing. Khushbu Singh Chandel | Dr. Avinash Sharma, "Virtual Machine Migration and Allocation in Cloud Computing: A Review", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-1, December 2019. URL: https://www.ijtsrd.com/papers/ijtsrd29556.pdf Paper URL: https://www.ijtsrd.com/computer-science/computer-network/29556/virtual-machine-migration-and-allocation-in-cloud-computing-a-review/khushbu-singh-chandel
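The pre-copy approach mentioned above can be sketched as an iterative loop whose residue determines downtime: each round re-sends the pages dirtied while the previous round was being copied, until the remaining set is small enough for a brief stop-and-copy. This toy model (made-up names; real hypervisors track dirty pages in hardware) only illustrates the convergence behavior:

```python
def precopy_downtime_pages(total_pages, dirty_rate, copy_rate,
                           threshold=64, max_rounds=30):
    """Iterate the pre-copy rounds. Converges when dirty_rate < copy_rate;
    the return value is what stop-and-copy must still transfer, which is
    the main driver of downtime. Illustrative model only."""
    remaining = total_pages
    for _ in range(max_rounds):
        if remaining <= threshold:
            break
        copy_time = remaining / copy_rate          # seconds for this round
        remaining = int(dirty_rate * copy_time)    # pages dirtied meanwhile
    return remaining
```

With a copy rate ten times the dirty rate, the remaining set shrinks by roughly a factor of ten each round, which is why pre-copy works well for mostly-idle memory and poorly for write-intensive workloads.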
This document proposes a load balancing model for public clouds using cloud partitioning. It divides a large public cloud into partitions based on geographic location. When a job arrives, a main controller assigns it to the least loaded partition. Each partition uses algorithms like weighted round robin to further distribute jobs to nodes based on their calculated load degrees. The model aims to improve resource utilization and response times across the large, complex public cloud infrastructure.
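The main controller's routing step might look like the following sketch, assuming each partition reports a scalar load degree in [0, 1]; the names are hypothetical and the deck's load-degree formula is not reproduced here:

```python
def pick_partition(partitions):
    """Main-controller step: route an arriving job to the least loaded
    partition. `partitions` maps partition name -> current load degree."""
    return min(partitions, key=partitions.get)
```

Inside the chosen partition, a second-level balancer (e.g. weighted round robin) would then pick a concrete node by the same least-load idea applied to node load degrees.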
An Efficient Decentralized Load Balancing Algorithm in Cloud Computing - Aisha Kalsoom
This document proposes a new efficient decentralized load balancing algorithm for cloud computing. It consists of two phases: 1) a request sequencing phase where incoming user requests are sequenced to minimize wait times, and 2) a load transferring phase where a load balancer calculates resource utilization of each VM and transfers tasks to less utilized VMs. This algorithm aims to improve load balancing performance and achieve more efficient resource utilization in cloud computing environments.
Performance Comparison of Dynamic Load Balancing Algorithms in Cloud Computing - Eswar Publications
This document compares the performance of two dynamic load balancing algorithms - the Honey Bee algorithm and the Throttled Load Balancing algorithm - in a cloud computing environment. It first describes both algorithms and other related concepts. It then discusses results from simulations run using the CloudAnalyst tool. The simulations show that the Honey Bee algorithm has lower average, minimum, and maximum response times compared to the Throttled algorithm. Additionally, the Honey Bee algorithm results in lower data center processing times and costs. Therefore, the document concludes the Honey Bee algorithm performs better than the Throttled algorithm for load balancing in cloud computing.
Base paper ppt - A load balancing model based on cloud partitioning for the ... - Lavanya Vigrahala
A load balancing model based on cloud partitioning for the public cloud. -Load balancing in the cloud computing environment has an important impact on the performance. Good load balancing makes cloud computing more efficient and improves user satisfaction. This article introduces a better load balance model for the public cloud based on the cloud partitioning concept with a switch mechanism to choose different strategies for different situations. The algorithm applies the game theory to the load balancing strategy to improve the efficiency in the public cloud environment.
The document provides instructions for installing Xen Cloud Platform host software on a physical server. It describes selecting installation options such as keyboard layout, driver installation, clean vs upgrade install. It also covers configuring storage, networking and other setup steps. The goal is to install a Xen hypervisor and management tools to create a platform for hosting virtual machines.
This document presents an approach called ecoCloud for efficiently consolidating virtual machines (VMs) across physical servers in a cloud data center based on two key resources: CPU and RAM. EcoCloud uses probabilistic procedures driven by local information to assign and migrate VMs, with the goal of increasing server utilization and workload consolidation while reducing electrical costs and meeting service level agreements. Both mathematical modeling and real data center experiments show ecoCloud can rapidly consolidate VMs and balance CPU-bound and RAM-bound workloads to efficiently use resources.
This document discusses load balancing, which is a technique for distributing work across multiple computing resources like CPUs, disk drives, and network links. The goals of load balancing are to maximize resource utilization, throughput, and response time while avoiding overloads and crashes. Static load balancing involves preset mappings, while dynamic load balancing distributes workload in real-time. Common load balancing algorithms are round robin, least connections, and response time-based. Server load balancing distributes client requests to multiple backend servers and can operate in centralized or distributed architectures using network address translation or direct routing.
Iaetsd appliances of harmonizing model in cloud - Iaetsd
This document proposes and evaluates a load balancing model for cloud computing environments that aims to optimize resource utilization and minimize energy usage. Key points:
- It introduces a "skewness" metric to measure uneven resource utilization across servers and develops algorithms to minimize skewness to improve overall utilization.
- The algorithms dynamically allocate resources based on demand, detecting "hot spots" that are overloaded and migrating VMs to reduce overload, as well as detecting "cold spots" that are underutilized to power them off to save energy.
- It evaluates the algorithms through trace-driven simulation and experimentation, finding it achieves good performance in load balancing while saving energy.
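The skewness idea in the bullets above can be sketched directly: take each resource's utilization on a server, and measure how far the per-resource utilizations deviate from their mean. Function and variable names below are made up; the formula follows the summary's description of uneven resource utilization:

```python
from math import sqrt

def skewness(utilizations):
    """Skewness of one server's per-resource utilizations: the root of the
    summed squared relative deviations from the server's mean utilization.
    A balanced server (all resources equally used) scores 0."""
    mean = sum(utilizations) / len(utilizations)
    return sqrt(sum((u / mean - 1.0) ** 2 for u in utilizations))
```

Minimizing this value across servers steers the allocator away from placements that max out one resource (say, RAM) while leaving another (CPU) idle.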
An efficient approach for load balancing using dynamic ab algorithm in cloud ... - bhavikpooja
This document outlines a proposed approach for efficient load balancing using a dynamic Ant-Bee algorithm in cloud computing. It discusses limitations of existing ant colony and bee colony algorithms for load balancing. The author aims to develop a new AB algorithm approach that combines aspects of ant colony optimization and bee colony algorithms to improve load balancing optimization and overcome issues like slow convergence and tendency to stagnate in ant colony algorithms. The proposed approach would leverage both the dynamic path finding of ants and pheromone updating of bees for more effective load balancing in cloud environments.
IEEE standard base paper - Load balancing in the cloud computing environment has an important impact on the performance. Good load balancing makes cloud computing more efficient and improves user satisfaction. This article introduces a better load balance model for the public cloud based on the cloud partitioning concept with a switch mechanism to choose different strategies for different situations. The algorithm applies the game theory to the load balancing strategy to improve the efficiency in the public cloud environment.
This document discusses various SQL Server disaster recovery strategies including log shipping, database mirroring, replication, and maintaining disaster recovery plans and documentation. Log shipping uses transaction logs to copy changes from a primary to standby server. Database mirroring maintains an up-to-date copy of a database on a mirror server. Replication can be used to distribute data changes in near real-time. The document emphasizes the importance of regularly testing disaster recovery plans and keeping recovery documentation up-to-date.
This document discusses load balancing in cloud computing. It begins by defining cloud computing and some of its key characteristics like broad network access, rapid elasticity, and pay-as-you-go pricing. It then discusses how load balancing can improve performance in distributed cloud environments by redistributing load, improving response times, and better utilizing resources. The document outlines different load balancing techniques like virtual machine migration and throttled load balancing using a load balancer, virtual machines, and a data center controller. It also proposes a trust and reliability based algorithm that prioritizes data centers for load balancing based on calculated trust values that consider factors like initialization time, machine performance, and fault rates.
A load balancing model based on cloud partitioning for the public cloud (ppt) - Lavanya Vigrahala
Load balancing in the cloud computing environment has an important impact on the performance. Good load balancing makes cloud computing more efficient and improves user satisfaction. This article introduces a better load balance model for the public cloud based on the cloud partitioning concept with a switch mechanism to choose different strategies for different situations. The algorithm applies the game theory to the load balancing strategy to improve the efficiency in the public cloud environment.
Live virtual machine migration based on future prediction of resource require... - Tapender Yadav
This document gives a brief description of the work done during the Summer Internship at the Institute for Development and Research in Banking Technology (IDRBT), Hyderabad. The project was undertaken from May 2014 to July 2014 under the exemplary guidance of Dr. G. R. Gangadharan, Asst. Professor, IDRBT, Hyderabad.
In a FACTS-based transmission line, if the fault does not involve the FACTS device, the impedance calculation is the same as for an ordinary transmission line; when the fault involves the FACTS device, the calculation accounts for the impedances the device introduces.
The objectives of DVFS are:
- To optimize resource allotment for tasks and maximize power saving when those resources are not needed.
- To use the optimal operating frequency and voltage to allow a task to be performed in the required amount of time.
- To maximize battery life and longevity of devices while still maintaining ready compute performance availability.
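The second objective above, finishing a task on time at the lowest adequate frequency, reduces to a simple selection rule. A sketch under a naive runtime = cycles / frequency model, with hypothetical names (real DVFS governors also weigh voltage levels and leakage power):

```python
def pick_frequency(cycles_needed, deadline_s, freqs_hz):
    """Return the lowest available frequency that still completes
    `cycles_needed` CPU cycles within `deadline_s` seconds."""
    for f in sorted(freqs_hz):
        if cycles_needed / f <= deadline_s:
            return f
    return max(freqs_hz)  # deadline infeasible: run flat out
```

For a 1-gigacycle task with a 2 s deadline and steps of 0.4, 0.8, and 1.6 GHz, the rule settles on 0.8 GHz: fast enough to meet the deadline, slow enough to save power and extend battery life.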
The document describes an approach called ecoCloud for efficiently consolidating virtual machines (VMs) across physical servers in cloud data centers to reduce power consumption. EcoCloud uses probabilistic procedures driven by local information to dynamically assign and migrate VMs, allowing servers to autonomously increase utilization and consolidate workload over time. The goal is to maximize the number of servers switched off or in low-power modes through continuous optimization of VM placement in response to changing resource demands.
Load balancing is used to distribute workloads across multiple servers in cloud computing. It aims to optimize resource use and minimize response time. The document proposes using a round robin approach to distribute loads from virtual machines across servers periodically to reduce server workload and use networks efficiently. Key benefits outlined are high scalability, availability, and flexibility to balance various protocols and route traffic based on server health. The conclusion states that load balancing is important in cloud computing to distribute work evenly for high user satisfaction and resource utilization, though further research is still needed.
REGION BASED DATA CENTRE RESOURCE ANALYSIS FOR BUSINESSES - ijsrd.com
This document analyzes data center resource usage based on region. It examines trends in Latin America, the Middle East/Africa, and the United States from 2010-2016. In Latin America, all data center types increased except rack/computer rooms, which decreased in the last two years. In the Middle East/Africa, all types significantly increased. In the US, all types decreased except large data centers, which steadily grew. The document also discusses utilization rates, costs per server, trends towards cloud computing, and concerns with cloud security and data lock-in.
This document discusses Dynamic Voltage and Frequency Scaling (DVFS) and a novel DVS-EDF scheduling algorithm for multi-core embedded real-time systems. DVFS allows dynamic adjustment of processor frequency and voltage to conserve power. The proposed DVS-EDF algorithm uses Earliest Deadline First scheduling and DVFS to guarantee task deadlines while improving energy efficiency over simple power-aware scheduling by up to 12%. It considers task utilization, critical frequency, and leakage power to minimize energy consumption within deadline constraints.
This document discusses traditional monolithic applications and their lack of scalability, deployability, and flexibility compared to modern distributed applications. It describes how modern applications are built to be scalable, deployable, and resilient by separating different parts of the application and leveraging cloud infrastructure and platforms. The document provides an overview of VMware's vFabric suite for building, running, and managing modern applications.
VMware's vCloud Hybrid Service (vCHS) allows extending on-premise VMware environments to the cloud. It provides infrastructure resources through virtual private clouds (VPCs) or dedicated clouds. Customers can migrate or replicate existing and new applications between their data center and vCHS. vCHS uses VMware technologies like vSphere and vCloud Networking and Security to provide a consistent environment. It offers disaster recovery as a service through replication between on-premise and cloud environments. vCHS aims to simplify hybrid cloud deployments and provide flexible consumption models.
This document provides an overview of VMware NSX network virtualization. It discusses key functions of network virtualization and components of NSX including the management, control, and data planes. It also describes how NSX enables micro-segmentation through logical grouping of workloads into security groups and enforcing network policies based on these groups rather than physical topology. Examples of use cases for network segmentation, multi-tenancy, and VDI are also summarized.
Fusion-IO - Building a High Performance and Reliable VSAN Environment (VMUG IT)
This document discusses how Fusion-io products can improve virtual desktop infrastructure (VDI) and virtual SAN (VSAN) performance. It provides an overview of Fusion-io's flash storage acceleration technology and customer base. It then outlines how Fusion-io's ioMemory products can be used to greatly increase VDI density and improve VSAN performance and scalability by integrating flash as a caching tier compared to traditional spinning disk or SSD-based storage architectures. Sample configurations and cost comparisons are provided that demonstrate significant capital expenditure and operational savings when using ioMemory for VDI and VSAN deployments.
Zerto provides virtual replication software that allows for continuous data protection, disaster recovery automation, and workload mobility between private and public clouds. It protects virtual machines and applications across multiple hypervisors in a storage-agnostic manner. Key benefits include replication of just changes for low RPOs, application-consistent recovery to specific points in time, simple one-click failover and failback testing, and mobility between on-premises and cloud infrastructures.
Veeam® Endpoint Backup™ FREE:
in the end, the most precious data is your own
Gianluca Mazzotta
Veeam EMEA Presales Director
VMUG.IT Meeting in Verona (March 4, 2015)
VMware - OpenStack and VMware: the odd couple (VMUG IT)
VMware has integrated several of its virtualization technologies with OpenStack to provide customers more choice in how they deploy and manage OpenStack clouds. Key VMware technologies integrated with OpenStack include vSphere as the compute driver (Nova), NSX as the network driver (Neutron), and vSAN for block storage (Cinder). Using these VMware components can provide enterprises with the reliability, security, and management capabilities they have come to expect from VMware products. VMware also contributes code to OpenStack projects and aims to make OpenStack easier to deploy and manage for customers running it with VMware technologies. Tools like VOVA and hands-on labs allow users to test an OpenStack deployment on vSphere.
TrendMicro - Security Designed for the Software-Defined Data Center (VMUG IT)
This document discusses security solutions designed for the software-defined data center. It notes that traditional physical server security approaches no longer work in virtualized environments. A new software-defined approach is needed to automatically provision security as virtual machines are deployed, manage security efficiently as environments scale, and optimize data center resources. Trend Micro's Deep Security product is presented as a solution that provides workload-aware security across physical, virtual, private and public cloud environments through a single management console.
VMware is introducing new platforms to better support cloud-native applications, including containers. The Photon Platform is a lightweight, API-driven control plane optimized for massive scale container deployments. It includes Photon OS, a lightweight Linux distribution for containers. vSphere Integrated Containers allows running containers alongside VMs on vSphere infrastructure for a unified hybrid approach. Both aim to provide the portability and agility of containers while leveraging VMware's management capabilities.
VMware - Virtual SAN - IT Changes Everything (VMUG IT)
Virtual SAN is a hyper-converged storage platform that is built into the ESXi hypervisor. It aggregates locally attached flash and disk drives from each ESXi host in a cluster to provide a shared datastore. Virtual SAN provides dynamic capacity and performance scaling. It utilizes storage policies to provide per-VM storage service levels from the single shared datastore. Virtual SAN simplifies storage management by automating control of storage capacity, performance, and availability based on application needs.
From traditional SAN and NAS to VM-aware storage: how Clouditalia evolved... (VMUG IT)
VMUGIT Meeting in Naples - April 6, 2016
From traditional SAN and NAS to VM-aware storage: how Clouditalia evolved its infrastructure using Tintri (Raffaello Poltronieri, Clouditalia)
The document discusses energy efficiency in cloud computing. It notes that energy-related costs alone account for 31% of total data center costs. It then discusses current approaches to improving energy efficiency, including more efficient hardware, minimizing power usage in clusters and networks, and distributed energy-efficient schedulers. The document also discusses how cloud computing can become more energy efficient through virtualization, consolidation techniques, and cooling improvements. It concludes that the future lies in energy-aware data centers and green computing, and that current technologies already allow energy efficiency to be leveraged at different levels.
ENERGY-AWARE LOAD BALANCING AND APPLICATION SCALING FOR THE CLOUD ECOSYSTEM (Nexgen Technology)
The document discusses energy-aware load balancing and application scaling techniques for cloud computing. It proposes an approach that defines energy-optimal operating regimes for servers and aims to maximize the number of servers operating in this regime. Servers that are idle or lightly loaded are switched to low-power sleep states to save energy. Load balancing and scaling algorithms are introduced to improve energy efficiency based on predicting workloads and migrating virtual machines between servers. The techniques are evaluated through simulation using published workload data.
Simulation of Heterogeneous Cloud Infrastructures (CloudLightning)
In recent years, in addition to traditional CPU-based servers, hardware accelerators have been widely used in various HPC application areas. In particular, Graphics Processing Units (GPUs), Many Integrated Cores (MICs) and Field-Programmable Gate Arrays (FPGAs) have shown great potential in HPC and have been widely adopted in supercomputing and HPC clouds. This presentation focuses on the development of a cloud simulation framework that supports hardware accelerators. The design and implementation of the framework are also discussed.
This presentation was given by Dr. Konstantinos Giannoutakis (CERTH) at the CloudLightning Conference on 11th April 2017.
The document proposes an operation-zone-based load balancer to improve user responsiveness on multicore embedded systems. It aims to reduce the cost of the frequent task migrations caused by existing load balancers. The proposed approach divides the CPU utilization range into three zones: cold, warm and hot. The load balancer operates less frequently in the cold zone and more frequently in the hot zone, with intermediate behavior in the warm zone. Evaluation shows the approach reduces scheduling latency compared to CPU-affinity-based and non-affinity-based systems under stress tests.
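The cold/warm/hot zone idea can be sketched in a few lines; the thresholds and intervals below are invented for illustration and are not taken from the paper:

```python
# Sketch of a zone-based balancing policy: run the load balancer rarely when
# the CPU is lightly loaded (migrations rarely pay off) and often when it is
# saturated (rebalancing cuts latency). Thresholds are illustrative.

def balance_interval_ms(cpu_util, cold_max=0.3, warm_max=0.7):
    """Return how long to wait before the next load-balancing pass."""
    if cpu_util <= cold_max:
        return 1000   # cold zone: back off
    if cpu_util <= warm_max:
        return 250    # warm zone: intermediate behavior
    return 50         # hot zone: rebalance aggressively

print([balance_interval_ms(u) for u in (0.1, 0.5, 0.9)])  # [1000, 250, 50]
```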
Dynamic resource allocation using virtual machines (muhammed jassim k)
A Study on Task Scheduling in Cloud Data Centers for Energy Efficiency (Ehsan Sharifi)
Abstract: The increasing energy consumption of Physical Machines (PMs) in cloud data centers is a major problem: it has a negative impact on the environment while also increasing the operational costs of data centers. This fosters the development of more energy-efficient scheduling approaches. In this study, we examine the barriers of knowledge in energy efficiency for cloud data centers.
Energy aware load balancing and application scaling for the cloud ecosystem (Kamal Spring)
In this paper we introduce an energy-aware operation model used for load balancing and application scaling on a cloud. The basic philosophy of our approach is defining an energy-optimal operation regime and attempting to maximize the number of servers operating in this regime. Idle and lightly-loaded servers are switched to one of the sleep states to save energy. The load balancing and scaling algorithms also exploit some of the most desirable features of server consolidation mechanisms discussed in the literature.
Exploiting latency bounds for energy efficient load balancing (Michael May)
These slides are taken from a research paper the PSL group wrote under the direction of Dr. Vijay Garg at the University of Texas at Austin. The abstract is provided below.
In this paper we explore the exploitation of latency bounds in order to gain energy efficiency in load balancing applications. We propose an energy-aware job scheduler that uses vary-on, vary-off features in order to maximize time spent at peak utilization while maintaining bounded latency. Computing resources will either be at load saturation (highest work per joule) or off. The premise is that servers are most efficient at peak utilization, measured in terms of energy per calculation. We explore the efficiency gains achieved through this approach and compare our results to other methods.
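The vary-on/vary-off premise reduces to a simple capacity rule; this is a minimal sketch under the abstract's stated assumption that each server runs either at saturation or off (names are illustrative):

```python
# Sketch: keep just enough servers running at saturation to cover the
# offered load; every other server is varied off.
import math

def active_servers(offered_load, per_server_capacity):
    """Servers are either at peak utilization (best work per joule) or off."""
    if offered_load <= 0:
        return 0
    return math.ceil(offered_load / per_server_capacity)

# 230 req/s of load with 100 req/s per saturated server -> 3 servers on.
print(active_servers(230, 100))  # 3
```

A real scheduler would also have to respect the latency bound when deciding how much headroom to keep; this sketch shows only the energy side of the trade-off.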
On June 24th I presented to the Dependable Systems Engineering group here in the School of Computer Science, St Andrews. The group meets once a month for a presentation from one of its members over lunch. The presenter talks about their current research, providing a good opportunity to keep up to date with other work within the group.
This document summarizes a student report on optimizing virtual machine placement across geo-distributed data centers to minimize costs. It proposes using an optimization model to determine the optimal spare capacity allocation across data centers while considering electricity costs, demand variability, and other factors. It also describes using a heuristic algorithm to place VMs on physical machines across data centers in a way that minimizes operating costs like electricity and communication costs.
The document discusses several topics related to parallel and distributed computing models, including performance metrics, scalability dimensions, Amdahl's law, Gustafson's law, and energy efficiency in distributed systems. It provides details on:
1) Performance metrics such as system throughput measured in MIPS, Tflops, and TPS, as well as system overhead from factors like OS boot time.
2) Scalability dimensions including size, software, application, and technology scalability when upgrading computing resources.
3) Amdahl's law and Gustafson's law formulas for calculating speedup from parallel processing based on sequential and parallel fractions of a program.
4) Techniques for improving energy efficiency across application,
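The two speedup laws mentioned in point 3 can be written out directly; a small sketch with illustrative numbers:

```python
# Amdahl's law (fixed workload) for sequential fraction s, and Gustafson's
# law (scaled workload) for parallel fraction p, each with n processors.

def amdahl_speedup(s, n):
    """Fixed workload: speedup = 1 / (s + (1 - s)/n)."""
    return 1.0 / (s + (1.0 - s) / n)

def gustafson_speedup(p, n):
    """Scaled workload: speedup = (1 - p) + p * n."""
    return (1.0 - p) + p * n

# 5% sequential code caps Amdahl speedup far below n even for n = 100:
print(round(amdahl_speedup(0.05, 100), 2))       # 16.81
print(round(gustafson_speedup(0.95, 100), 2))    # 95.05
```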
dynamic resource allocation using virtual machines for cloud computing enviro... (Kumar Goud)
Abstract—Cloud computing allows business customers to scale their resource usage up and down based on need. We present a system that uses virtualization technology to allocate data center resources dynamically based on application demands and to support green computing by optimizing the number of servers in use. We introduce the concept of “skewness” to measure the unevenness in the multi-dimensional resource utilization of a server. By minimizing skewness, we can combine different types of workloads and improve the overall utilization of server resources. We develop a set of heuristics that effectively prevent overload in the system while saving energy. Many of the touted gains in the cloud model come from resource multiplexing through virtualization technology. Trace-driven simulation and experiment results demonstrate that our algorithm achieves good performance.
Index Terms—Cloud computing, resource management, virtualization, green computing.
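Assuming the common definition of the skewness metric in this line of work (the dispersion of a server's per-resource utilizations around their mean; this is an assumption, since the abstract does not give the formula), a minimal sketch:

```python
# Sketch of a per-server skewness metric: zero when all resources are used
# evenly, growing as one resource dominates. Utilizations are assumed nonzero.
import math

def skewness(utilizations):
    """utilizations: e.g. [cpu, memory, network] fractions for one server."""
    mean = sum(utilizations) / len(utilizations)
    return math.sqrt(sum((u / mean - 1.0) ** 2 for u in utilizations))

# Even use gives zero skewness; uneven use raises it.
print(skewness([0.5, 0.5, 0.5]))            # 0.0
print(round(skewness([0.9, 0.1, 0.5]), 3))  # 1.131
```

Minimizing this quantity when placing VMs favors servers whose resource mix complements the incoming workload.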
A Host Selection Algorithm for Dynamic Container Consolidation in Cloud Data ... (IRJET Journal)
This document proposes a novel host selection algorithm called Energy-Efficient Particle Swarm Optimization (EE-PSO) for dynamic container consolidation in cloud data centers. The goal of the algorithm is to reduce energy consumption while maintaining quality of service levels. It was tested using the ContainerCloudSim toolkit on real-world workloads and was found to outperform existing algorithms in terms of energy savings, quality of service guarantees, number of new virtual machines created, and number of container migrations.
07 vmugit aprile_2018_massimiliano_moschini (VMUG IT)
VMware Hyper-Converged Software provides Virtual SAN, which allows for storage to be pooled and shared across servers. Virtual SAN enables the creation of a shared datastore that can be accessed by any VM running on the servers in the Virtual SAN cluster. It provides a simple, efficient and resilient way to store and protect VM data without the need for external shared storage.
07 - VMUGIT - Lecce 2018 - Antonio Gentile, Fortinet (VMUG IT)
VMUGIT Meeting - Lecce, April 5, 2018
Antonio Gentile - System Engineer Fortinet Italy - Fortinet Security Fabric - The new cyber security challenges on software-defined infrastructures
VMUGIT Meeting - Lecce, April 5, 2018
Rodolfo Rotondo, VMware Sr. Business Solution Strategist, SEMEA - Defend everything... defend nothing! How to develop a strategic approach to cyber security in the era of mobile-cloud and interconnected objects
Rubrik offers a software-defined data management platform that can help organizations accelerate their GDPR compliance efforts. The platform provides centralized management of data across on-premises, edge, and cloud environments. It employs security measures like encryption and immutable storage that are designed with privacy and compliance in mind. Rubrik also simplifies compliance through policy-driven automation that enforces data protection, retention, and deletion policies. Reporting tools give insights into policy effectiveness. The unified platform streamlines compliance processes around identifying, managing, and securing personal data.
This document discusses blockchain and enterprise IT, dispelling myths around distributed ledgers. It provides an overview of blockchain concepts like data integrity, actors, and public vs private blockchains. It also includes decision diagrams to help determine if a blockchain is needed and compares databases to blockchains. Example use cases for blockchains are listed such as supply chain management. Considerations for blockchain projects like requirements and limitations are also covered.
VMUGIT Meeting - Lecce, April 5, 2018
Enrico Signoretti, Head of Product Strategy at OpenIO, blogger at Juku - IIoT: the future lies in Cloud-Edge integration
This document describes various "rebels" or non-virtualized applications in a datacenter that need to be managed. It discusses "Filerix", an old file server that has grown significantly in size and files. It also mentions "Maniscalchix", an application installed long ago whose purpose is unknown, and "Nonmifotografarix", which produces a lot of I/O and could crash during snapshots. The document provides information on how to back up these different applications using Veeam solutions like NAS shares, agents, I/O filtering, and archive tier despite their non-virtualized nature or other challenges.
The document provides an agenda for a PowerCLI session that will cover topics like getting started with PowerCLI, common errors and pitfalls, advanced functionality, and the PowerCLI community. It includes code snippets and examples for working with PowerCLI to retrieve and report on VMware vSphere infrastructure information using PowerShell. The session aims to help attendees become more proficient PowerCLI users.
Storage Policy Based Management (SPBM) allows data services like replication, encryption, and performance policies to be applied on a per-VM or per-VMDK level through configurable storage policies. The presenter discusses how SPBM is central to VMware's software-defined storage vision and allows administrators to take an application-centric approach to assigning storage services and service level agreements. Administrators can define storage policies, apply them dynamically to VMs, and change policies without disrupting services.
VMware Cloud on AWS allows customers to run VMware workloads on AWS infrastructure providing operational consistency, existing skillsets and tools, and control and security. It introduces VMware's software-defined data center (SDDC) technologies like vSphere, vSAN, and NSX running on AWS. This provides enterprises hybrid cloud capabilities with elasticity, portability of applications between on-premises and cloud, and access to AWS native services. Customers can easily deploy and manage their VMware environments on AWS.
Security groups and security policies were created to microsegment the network and restrict traffic flows based on the new segmentation. This was done using vRNI to visualize traffic before and after the changes. Security groups were defined using dynamic membership based on VM name, security tag, or other attributes. A shared services security policy template was also created to securely allow access to common management and services resources from different security groups.
1. Innovative algorithms for dynamic workload consolidation in data centers
Agostino Forestiero
CNR Researcher | Eco4Cloud – Chief Architect
forestiero@eco4cloud.com
Mar 04, 2015
2. Global Energy Problem: the contribution of ICT
The ICT sector:
accounts for ~3% of total energy consumption worldwide, and is expected to double every 5 years
produces between 2% and 3% of total emissions of greenhouse gases
Source: Greenpeace Report “How Clean is Your Cloud?”, April 2012
Source: Pickavet et al. (IBBT, 2011)
3. Contribution of data centers is increasing
Source: “Smart 2020: Enabling the Low-Carbon Economy in the Information Age”, The Climate Group, June 2008.
4. Energy/cost savings opportunities
1. Improve infrastructure: use liquid cooling, improve the efficiency of chillers and power supplies. This improves the PUE index (Power Usage Effectiveness) but does not increase computational efficiency
2. Adopt more energy-efficient hardware: feasible for the CPU (DVFS); efforts are ongoing for more efficient network utilization; difficult for other components
3. Consolidate VMs on fewer servers: unneeded servers can be hibernated or used to accommodate more load; consolidation should follow workload fluctuations (daily, weekly)
(Chart: Power Usage Effectiveness, PUE)
5. Energy/cost savings opportunities
Use of energy-efficient servers
Source: Winston Saunders, Intel: “Server Efficiency: Aligning Energy Use With Workloads”
6. Energy/cost savings opportunities
Example: if the workload of 3 servers utilized at 20% is consolidated on one server utilized at 60%, the power decreases from 3 x 85.3 W = 255.9 W to only 134 W, an energy saving of about 48%.
(Chart: Intel Xeon E5-2600, power vs. utilization)
Source: Winston Saunders, Intel: “Server Efficiency: Aligning Energy Use With Workloads”
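The saving in the example above can be checked with a short script. The power figures (85.3 W at 20% load, 134 W at 60% load) are the ones read off the slide's Intel Xeon E5-2600 curve; the helper function is purely illustrative:

```python
def consolidation_saving(power_before_each, n_servers, power_after):
    """Percent energy saved by consolidating n lightly loaded servers
    onto a single, better utilized server."""
    before = n_servers * power_before_each
    return 100.0 * (before - power_after) / before

# Three servers at 20% load (85.3 W each) consolidated onto one at 60% (134 W)
saving = consolidation_saving(85.3, 3, 134.0)
print(f"{saving:.1f}% energy saved")  # roughly 47.6%
```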
7. Inefficient utilization of servers
Two sources of inefficiency:
Servers are underutilized (typically between 15% and 40%)
An idle server consumes more than 50% of the energy it consumes when fully utilized
This means that it is generally possible to consolidate the load on fewer, better-utilized servers!
(Chart: typical utilization of servers)
Source: L. Barroso, U. Hölzle, “The Case for Energy-Proportional Computing”, IEEE Computer, vol. 40, no. 12, 2007.
8. Improving efficiency through consolidation
Energy efficiency is utilization divided by power consumption (useful workload per watt)
Energy efficiency is low in the typical operating region
Consolidating the workload shifts the typical operating region to the right, thereby increasing energy efficiency
Source: L. Barroso, U. Hölzle, “The Case for Energy-Proportional Computing”, IEEE Computer, vol. 40, no. 12, 2007.
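The effect of the high idle draw on efficiency can be sketched numerically. The linear power model and the 200 W peak are illustrative assumptions; the only figure taken from the slides is that an idle server draws more than 50% of its full-load power:

```python
# Why consolidation raises energy efficiency, assuming a linear power
# model P(u) = P_idle + (P_max - P_idle) * u for utilization u in [0, 1].
P_MAX = 200.0          # W at full load (illustrative value)
P_IDLE = 0.5 * P_MAX   # idle draw: >50% of full-load power, per the slides

def efficiency(u):
    """Useful workload per watt at utilization u."""
    power = P_IDLE + (P_MAX - P_IDLE) * u
    return u / power

# Typical operating region (~30%) vs. after consolidation (~80%):
# shifting the operating point to the right nearly doubles efficiency.
print(f"at 30%: {efficiency(0.3):.4f}  at 80%: {efficiency(0.8):.4f}")
```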
9. Approaching the consolidation problem
The consolidation problem is a form of Bin Packing Problem.
Goal: pack a collection of VMs into the minimum number of servers, so as to hibernate the remaining servers and save energy.
Issues:
• NP-Hard problem: heuristics exist, but their scalability is limited
• In data centers, this is a multi-dimensional problem (CPU, disk, memory, network)
• Load requirements are highly dynamic: VMs must be repacked with few, asynchronous migrations
• Maximize QoS: overload events must be prevented even when resource utilization is increased
10. Known solutions for consolidation
o Best Fit: each VM is assigned to the server whose load is closest to a target (e.g. 90%). This only guarantees a performance ratio of 17/10: up to 17 servers may be used when the minimum is 10
o Best Fit Decreasing: VMs are sorted in decreasing order of load, then assigned with Best Fit. The performance ratio improves to 11/9, but sorting VMs may not be easy in large data centers, and many concurrent migrations are needed
o VMware DPM adopts a greedy algorithm: servers are sorted according to numerous parameters (capacity, power consumption, etc.), and DPM scans the list checking whether servers can be unloaded
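For reference, Best Fit Decreasing can be sketched in a few lines for the one-dimensional case (real data centers are multi-dimensional, and the capacity cap and integer loads below are illustrative):

```python
def best_fit_decreasing(vm_loads, capacity=90):
    """Sort VMs by decreasing load, then place each on the server whose
    residual capacity is smallest but still sufficient; open a new
    server otherwise. Returns the per-server load sums."""
    servers = []
    for load in sorted(vm_loads, reverse=True):
        best = None
        for i, used in enumerate(servers):
            # candidate must fit, and be the fullest server that fits
            if used + load <= capacity and (best is None or used > servers[best]):
                best = i
        if best is None:
            servers.append(load)   # no server fits: open a new one
        else:
            servers[best] += load
    return servers

# Six VMs (loads in % of one server) packed under a 90% utilization cap
print(best_fit_decreasing([50, 40, 30, 30, 20, 10]))  # → [90, 90]
```

Note how sorting first is what buys the 11/9 guarantee mentioned above; the drawback in a live data center is that re-sorting implies many concurrent migrations.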
11. Eco4Cloud algorithm
PROBLEM: inefficiency of consolidation algorithms. The solutions available today are semi-manual, extremely complex, poorly adaptive, and not scalable.
SOLUTION: an innovative bio-inspired approach. ICAR-CNR researchers have devised a very effective and scalable solution, based on the swarm intelligence paradigm, that uses a probabilistic approach to assign Virtual Machines to servers. The solution is automatic, simple, adaptive, and highly scalable.
• C. Mastroianni, M. Meo, G. Papuzzo, “Probabilistic Consolidation of Virtual Machines in Self-Organizing Cloud Data Centers”, IEEE Transactions on Cloud Computing, vol. 1, no. 2, pp. 215-228, 2013.
• PCT Patent “System for Energy Saving in Company Data Centers”
12. Eco4Cloud algorithm in action
The data center manager assigns and migrates VMs to servers based on local probabilistic trials:
Lightly loaded servers tend to reject VMs
Highly loaded servers tend to reject VMs
Servers with intermediate load tend to accept VMs
Eventually, the workload is concentrated on a small number of highly utilized servers
(Diagram: the data center manager and the servers)
13. VM assignment/migration
Assignment procedure:
1. The manager sends an invitation to a subset of servers
2. Each server evaluates the assignment probability function (a Bernoulli trial) based on the utilization of local resources (e.g. CPU, RAM) and sends a positive ack if it is available
3. The manager collects the positive replies and selects the server that will execute the VM
Migration procedure:
1. A server checks whether its load is in the range between a low and a high threshold
2. When utilization is too low or too high, the server performs a Bernoulli trial based on the migration probability function
3. If the trial succeeds, some VMs are migrated
4. Destination servers are determined with a new assignment procedure
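The Bernoulli trial at the heart of the assignment step can be sketched as follows. The exact probability function is defined in the cited IEEE TCC 2013 paper; the shape below (acceptance probability growing with utilization up to a threshold, then dropping to zero) is only one plausible choice that reproduces the behavior described above, and the threshold T and exponent P_EXP are assumed parameters:

```python
import random

T = 0.85   # utilization threshold protecting SLAs (assumed value)
P_EXP = 3  # shape exponent: steers VMs toward well-loaded servers

def assignment_probability(u):
    """Probability that a server with resource utilization u accepts a VM:
    near zero when lightly loaded, high at intermediate load, zero above T."""
    if u >= T:
        return 0.0            # nearly full servers always refuse
    return (u / T) ** P_EXP   # grows with utilization below the threshold

def server_accepts(u, rng=random):
    """Bernoulli trial: reply with a positive ack with probability f(u)."""
    return rng.random() < assignment_probability(u)
```

Because each server evaluates only its own utilization, the procedure needs no global state, which is what makes the approach scale with the size of the data center.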
14. Consolidation snapshot (400 servers and 6000 VMs)
• Energy savings: before consolidation, servers run at 20-40% utilization. After 15 hours, all servers are either close to the optimal value (80% utilization) or hibernated
• SLAs: utilization is not allowed to exceed 85%, providing complete protection of the physical resources and adherence to SLAs
140 servers take all the load; 260 servers are hibernated
(Chart: CPU utilization of each server vs. time, 0-30 hours)
15. CPU utilization in steady conditions (48 hours; overall load shown as a reference)
• CPU utilization of active servers is always between 0.5 and 0.9
• Many servers are hibernated (bottom line)
(Chart: CPU utilization vs. time in hours)
16. Active servers and consumed power
• The number of active servers follows the overall workload, and so does the consumed power
• Many servers are never activated: they can safely be devoted to other applications
• Power savings of up to 60%!
• Further savings are obtained thanks to decreased cooling needs
(Charts: number of active servers, and consumed power in kW, vs. time in hours)
17. Multi-resource consolidation
The workload is consolidated on the most utilized resource (RAM, in this case)
VMs with different characteristics (here, CPU-bound and RAM-bound) are balanced, so hardware resources are exploited efficiently
(Chart: RAM and CPU utilization of 28 servers, shown separately for CPU-bound (C-type) and RAM-bound (M-type) VMs)
18. Benefits of the Eco4Cloud solution
Energy saving: power consumption reduced by between 20% and 50%!
Highly scalable: thanks to its adaptive, self-organizing distributed algorithm, the approach is extremely scalable
Capacity planning: optimal occupancy of physical resources and adaptive optimization of inherently variable workloads
Minimal impact on operations: migrations are gradual and asynchronous
Efficient balancing of heterogeneous applications
Meets data center SLAs: thanks to the insights and real-time monitoring analytics provided by Eco4Cloud, data center managers can proactively and predictively prevent SLA violations and increase overall data center reliability
Independent of the virtualization environment: VMware vSphere, Microsoft Hyper-V, KVM, …
19. www.eco4cloud.com
Spin off of ICAR-CNR
Institute for High Performance Computing and Networks
National Research Council of Italy