Probabilistic consolidation of virtual machines in self-organizing cloud data centers
1. Probabilistic Consolidation
of Virtual Machines
in Self-Organizing Cloud Data Centers
IEEE TRANSACTIONS ON CLOUD
COMPUTING, JULY-DECEMBER 2013
Speaker: Caroline
2. Outline
• Introduction
• Related Works
• Scenario & Performance Metrics
• ecoCloud: Assignment & Migration Procedures
• Mathematical Analysis & Experiment on Real DC
• Comparison ecoCloud & Best Fit Decreasing
• Results with Different Data Center Sizes
• Conclusion
2
3. Outline
• Introduction
• Related Works
• Scenario & Performance Metrics
• ecoCloud: Assignment & Migration Procedures
• Mathematical Analysis & Experiment on Real DC
• Comparison ecoCloud & Best Fit Decreasing
• Results with Different Data Center Sizes
• Conclusion
3
4. Outline
• Introduction
• Related Works
• Scenario & Performance Metrics
• ecoCloud: Assignment & Migration Procedures
• Mathematical Analysis & Experiment on Real DC
• Comparison ecoCloud & Best Fit Decreasing
• Results with Different Data Center Sizes
• Conclusion
4
6. • Cloud Computing, Big Data, IoT
• They all need large and powerful computing and storage infrastructures to support them.
6
Trends in Information Technology
8. Data Center (Server Farm)
• It generally includes
• Computation or storage resource
• Redundant or backup power supplies
• Redundant data communications connections
• Environmental controls
• Various security devices
8
9. Power Consumption
• In 2006
• The energy consumed by IT infrastructures was
about 61 billion kWh, corresponding to 1.5% of
all the produced electricity.
• 2% of the global carbon emissions, equal to the
aviation industry.
• These figures are expected to double every 5
years. [1]
9
11. Power usage effectiveness (PUE)
• A measure of how efficiently a computer data
center uses energy.
• In the past few years, typical values have decreased from 2-3 to lower than 1.1.
11
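Written out (using the definition given later in the speaker notes, i.e. the ratio between the overall power entering the data center and the power devoted to the computing equipment):

PUE = (total power entering the data center) / (power devoted to the computing equipment)

so a PUE close to 1 means that almost all of the power reaches the computing facilities themselves.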
12. • Most of the time servers
operate at 10-50% of their
full capacity. [2], [3]
• Caused by the changing
variability of VMs’ workload.
[4],[5]
• The DC is planned to sustain
the peaks of load, while for
long periods of time the load
is much lower.
12
Utilization of each server
13. • An active but idle server
consumes 50-70% of the
power consumed when fully
utilized. [6]
• Even though as much of the power as possible goes to computing, the utilization of each individual server is still far from optimal.
13
Utilization of each server
15. Consolidation
• Allocate the max number of VMs on the min number of physical machines [7].
• Allows unneeded servers to be put into
• A low-power state or switched off
• Devoted to the execution of incremental workload.
15
16. The complexity of the problem
• The optimal assignment of VMs to PMs is analogous
to the NP-hard “Bin Packing Problem” [17], [1], [28]
• Assigning a given set of items of variable size to the min
number of bins taken from a given set.
16
17. The complexity of the problem
• The assignment should take into account
multiple server resources, it becomes a “multi-
dimensional bin packing problem”
• The VMs continuously modify their hardware
requirements.
17
18. In this paper
• Proposed ecoCloud, inspired by ant algorithm.
• Using two types of probabilistic procedures:
Assignment and Migration.
• Key decisions are made by single servers,
• Increasing the utilization of servers
• Consolidating VMs dynamically and locally.
18
19. In this paper
• Extended to the Multi-dimension problem (CPU
and memory)
• Save electrical costs while respecting the Service Level Agreements.
19
20. Outline
• Introduction
• Related Works
• Scenario & Performance Metrics
• ecoCloud: Assignment & Migration Procedures
• Mathematical Analysis & Experiment on Real DC
• Comparison ecoCloud & Best Fit Decreasing
• Results with Different Data Center Sizes
• Conclusion
20
21. Forecast the load?
• [27] and [13] try to forecast the processing
load and aim at determining the min number of
servers that should be switched on to satisfy the
demand.
• How to correctly set the server’s number?
• How to predict the processing load precisely?
• How the VMs map to servers in a dynamic
environment?
21
22. Heuristic approaches?
• Optimally mapping VMs to PMs
• = Bin packing problem
• = NP-hard problem
• The heuristic approaches can only lead to
suboptimal solutions.
22
23. Heuristic approaches?
• The heuristic approaches presented use
• the Best Fit Decreasing algorithms. [1]
• the First Fit Decreasing algorithms. [28]
• the Constraint Programming paradigm. [30]
• They use lower and upper utilization thresholds
to decide when to execute migration. [29]
23
24. Heuristic approaches?
• Deterministic and centralized algorithms,
• Efficiency degrades as the size of the data center grows.
• Mapping strategies may require the concurrent
migration of many VMs
• Cause considerable performance degradation
during the reassignment process.
24
25. P2P Model?
• The data center is modeled as a P2P network. [33]
• Servers explore the network to collect information that can later be used to migrate VMs.
• In the V-MAN system [34], servers use a gossip protocol to communicate their state to each other.
• The complete absence of centralized control can be
seen as an obstacle by the data center administrator.
25
26. In the multi-resource problem
• Based on the first-fit approximation. [38]
• Using an LP formulation[39].
• Performs dynamic consolidation based on
constraint programming. [41]
• But they all rely on a complex centralized algorithm.
26
27. In this paper
• Adopts a probabilistic approach
• naturally scalable
• an asynchronous and smooth migration process
• Servers can autonomously decide whether or
not to migrate or accept a VM
• The final decisions are still granted to the central
manager
27
28. Outline
• Introduction
• Related Works
• Scenario & Performance Metrics
• ecoCloud: Assignment & Migration Procedures
• Mathematical Analysis & Experiment on Real DC
• Comparison ecoCloud & Best Fit Decreasing
• Results with Different Data Center Sizes
• Conclusion
28
29. Scenario - request comes
29
• The data center manager selects a VM that is appropriate for the application.
• Application characteristics & Client demand
30. Scenario - assignment procedure
30
• Single servers decide whether they should accept or reject a VM.
• Decisions are based on information available locally (CPU/RAM utilization)
31. Scenario - migration procedure
31
• A VM is migrated when its server is highly underutilized or possibly causing overload situations.
• A server requests a VM migration
• Then the server that will host the migrating VM is chosen
32. Performance metrics
• Resource utilization
• Number of active servers
• Consumed power
• Frequency of migrations and server switches
• SLA violations
32
33. Outline
• Introduction
• Related Works
• Scenario & Performance Metrics
• ecoCloud: Assignment & Migration Procedures
• Mathematical Analysis & Experiment on Real DC
• Comparison ecoCloud & Best Fit Decreasing
• Results with Different Data Center Sizes
• Conclusion
33
34. Scenario - assignment procedure
34
• Performed when a client asks the data center to execute a new application.
• The manager delegates the main part of the procedure to the single servers
35. Scenario - assignment procedure
35
Reject?
Accept?
Depends on the server’s utilization
36. Scenario - assignment procedure
36
Overutilization might cause overload situations
With underutilization, the objective is to put the server into a sleep mode and save energy
37. Scenario - assignment procedure
• The decision is taken by performing a Bernoulli trial.
• The success probability of this trial is equal to the value of the overall assignment function.
37
38. Scenario - assignment procedure
• X (0-1): the relative utilization of a resource.
• T: the maximum allowed utilization.
• p: the shape parameter
• Mp: the factor used to normalize the max value to 1
38
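A minimal sketch (Python) of the Bernoulli-trial assignment decision described on slides 37-41. The single-resource function is shown in the deck only as a figure, so the form x^p * (T - x), normalized so that its maximum on [0, T] equals 1, and the combination of the CPU and RAM functions as a product are assumptions chosen to match the listed parameters (X, T, p, Mp):

import random

def assignment_prob(x, T=0.9, p=3):
    # Illustrative single-resource assignment function: 0 at x = 0 and for
    # x >= T, with a peak (value 1) that moves toward T as p grows.
    # The exact expression used by ecoCloud is given in the paper; this
    # form is an assumed stand-in.
    if x <= 0 or x >= T:
        return 0.0
    x_peak = p * T / (p + 1)          # argmax of x**p * (T - x)
    m_p = x_peak**p * (T - x_peak)    # normalization factor, plays the role of Mp
    return (x**p * (T - x)) / m_p

def accept_vm(cpu_util, ram_util, T_u=0.9, T_m=0.9, p_u=3, p_m=3):
    # Bernoulli trial run locally by a server when the manager issues an
    # assignment request; combining CPU and RAM as a product is an assumption.
    prob = assignment_prob(cpu_util, T_u, p_u) * assignment_prob(ram_util, T_m, p_m)
    return random.random() < prob

# Example: a server at 40% CPU and 35% RAM decides whether to offer itself.
print(accept_vm(0.40, 0.35))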
40. Scenario - assignment procedure
• This figure shows the graph of the single-resource assignment function
• for some values of the parameter p, and T = 0.9.
40
41. Scenario - assignment procedure
• us, ms: the current CPU and RAM utilization at server s.
• pu, pm: the shape parameters.
• Tu, Tm: the respective maximum utilizations
41
42. Scenario - assignment procedure
• If the Bernoulli trial is successful
• the server communicates its availability to the data center manager.
• the manager selects one of the available servers, and assigns the new VM to it.
42
Yes
43. Scenario - assignment procedure
• If the Bernoulli trial is unsuccessful
• the current number of active servers is not sufficient.
• The manager wakes up an inactive server and requests it to run the new VM
43
No
44. Scenario - migration procedure
44
• Application workload changes with time
• VMs terminate or reduce demand → underutilized
• VMs increase their requirements → overutilized
45. Scenario - migration procedure
• Each server monitors its CPU and RAM utilization
• using the libraries provided by the virtualization infrastructure (e.g., VMware or Hyper-V)
• Tl: the lower threshold
• Th: the upper threshold
45
46. Scenario - migration procedure
• Each server evaluates the corresponding probability function, fl^migrate or fh^migrate
• X: the utilization of a given resource
46
47. Scenario - migration procedure
• This figure shows the graph of the single-resource migration function
• for some values of the parameters α and β, with Tl = 0.3, Th = 0.8
47
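A minimal sketch (Python) of the migration trigger described on slides 45-47. The shapes of fl^migrate and fh^migrate appear only in the figure, so the power-law forms below, rising from 0 at the threshold toward 1 as utilization becomes more extreme, are assumptions; the default thresholds match the figure on slide 47 (Tl = 0.3, Th = 0.8) and the shape parameters match those used later in the experiments (α = β = 0.25):

import random

def migrate_prob_low(x, T_l=0.3, alpha=0.25):
    # Probability of requesting a migration away from an underutilized server.
    return ((T_l - x) / T_l) ** alpha if x < T_l else 0.0

def migrate_prob_high(x, T_h=0.8, beta=0.25):
    # Probability of requesting a migration away from an overutilized server.
    return ((x - T_h) / (1.0 - T_h)) ** beta if x > T_h else 0.0

def should_request_migration(util, T_l=0.3, T_h=0.8, alpha=0.25, beta=0.25):
    # Bernoulli trial run periodically by each server for each resource.
    p = migrate_prob_low(util, T_l, alpha) + migrate_prob_high(util, T_h, beta)
    return random.random() < p

print(should_request_migration(0.92))   # likely True: close to overload
print(should_request_migration(0.60))   # always False: between the thresholds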
48. Scenario - migration procedure
• Whenever a Bernoulli trial is successful
• the server will choose a VM to migrate whose
• utilization of the resource > current server's utilization - Th
48
• Current server's utilization: 0.9
• Th: 0.8
• VM1 utilization: 0.05
• VM2 utilization: 0.2
• VM3 utilization: 0.01
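Applying the rule above to the numbers on this slide: the excess is 0.9 - 0.8 = 0.1, so only VM2 (utilization 0.2) qualifies for migration. A small sketch; preferring the least loaded qualifying VM, to keep the migration cheap, is an assumption:

def pick_vm_to_migrate(server_util, vm_utils, T_h=0.8):
    # Pick a VM whose resource utilization exceeds (server utilization - Th),
    # i.e. one large enough to bring the server back under the upper threshold.
    excess = server_util - T_h
    candidates = {name: u for name, u in vm_utils.items() if u > excess}
    return min(candidates, key=candidates.get) if candidates else None

print(pick_vm_to_migrate(0.9, {"VM1": 0.05, "VM2": 0.2, "VM3": 0.01}))   # -> VM2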
49. Scenario - migration procedure
• The choice of the new server is made by the assignment procedure, with 2 differences:
• The threshold T of the assignment function is set to 0.9 times the resource utilization of the source server.
• This ensures the VM migrates to a less loaded server, and avoids multiple migrations of the same VM.
49
50. Scenario - migration procedure
• The second difference concerns the migration
from a lightly loaded server.
• When no server is available to run a migrating
VM, it would not be acceptable to switch on a
new server.
50
51. Scenario - migration procedure
• This paper’s approach ensures a gradual and
continuous migration process.
• The data center administrator can set
• threshold values & shape parameters
• To choose different consolidation strategies (e.g.
conservative, intermediate, aggressive)
51
52. Outline
• Introduction
• Related Works
• Scenario & Performance Metrics
• ecoCloud: Assignment & Migration Procedures
• Mathematical Analysis & Experiment on Real DC
• Comparison ecoCloud & Best Fit Decreasing
• Results with Different Data Center Sizes
• Conclusion
52
53. Mathematical Analysis
• Ns: the number of servers in a data center
• Nc: the number of cores in each server
• Nv: the number of VMs that can be executed in each core.
53
54. Mathematical Analysis
• It is assumed that two types of VMs are executed on the data center:
• CPU-bound (C-type)
• RAM-bound (M-type)
• C-type VMs demand more CPU than M-type VMs, by a factor γC > 1
• M-type VMs demand more RAM than C-type VMs, by a factor γM > 1
54
55. Power Consumption
• As the CPU utilization increases, the consumed power can be assumed to increase linearly. [13][14]
• In the analytical and simulation experiments presented in this study, the power consumed by a single server is expressed as:
55
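The formula itself appears on the slide only as an image. A standard linear model consistent with the statement above and with the Pmax and Pidle figures given on slide 57 is (the exact expression used in the paper is assumed to be of this form):

P(u) = Pidle + (Pmax - Pidle) * u,   0 <= u <= 1

where u is the CPU utilization of the server, so an idle server draws Pidle and a fully loaded one draws Pmax.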
56. Mathematical Analysis
• To analyze the behavior of the system, an experiment is run with parameters as follows:
• Ns = 100 (servers)
• Nc = 6 (cores)
• CPU frequency = 2 GHz
• RAM = 4GB
• VMs‘ CPU frequency use = 500 MHz. → Nv = 4
56
57. Mathematical Analysis
• Power consumption
• Pmax = 250 W
• Pidle = 0.7 * Pmax = 175 W
• The average CPU (memory) load of the DC is
defined as the ratio between
• Total amount of CPU (RAM) required by VMs
• Corresponding CPU (RAM) capacity of the DC
• Denoted as ρC (ρM)
57
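For instance, with the parameters of slide 56 (100 servers, 6 cores of 2 GHz each, VMs demanding 500 MHz), a CPU load of ρC = 0.4 corresponds to 100 * 6 * 2 GHz * 0.4 = 480 GHz of demanded CPU, i.e. roughly 960 running VMs.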
58. Mathematical Analysis
• Initial CPU and RAM utilizations = 40% of the server capacity. T = 0.9, and p = 3.
• Without ecoCloud
• 100 active servers
• with CPU/RAM utilization around 40%
• With ecoCloud
• 45 active servers
• nearly halving the consumed power
58
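A quick check with the linear power model sketched after slide 55 (itself an assumption, since the formula on the slide is an image): 100 servers at 40% utilization draw about 100 * (175 W + 75 W * 0.40) = 20,500 W, while 45 servers carrying the same total load run at roughly 89% utilization and draw about 45 * (175 W + 75 W * 0.89) ≈ 10,900 W, which is indeed close to half the power.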
59. Mathematical Analysis
• Next we considered the values of γC and γM
• Different ratios between the CPU and RAM
demanded by the two types of VMs.
• In the test cases:
• 1.0 (the two kinds of applications coincide)
• 1.5 (C-type VMs need 50% more CPU than M-type)
• 2.0 (C-type VMs need 100% more CPU than M-type)
• 4.0 (C-type VMs need 300% more CPU than M-type)
59
61. Mathematical Analysis
• Such an efficient consolidation is possible
• when the overall loads of CPU and RAM are comparable (ρC = ρM = 0.4)
• In the next experiment
• ρC = 0.4
• ρM = 0.2-0.6
• γC and γM are set to 4.0
61
64. Experiment on Real Data Center
• The experiments were performed in May 2013 on a live DC owned by a major telecommunications operator.
• The experiment was run on 28 servers virtualized
with the platform VMWare vSphere 4.0.
• 2 with CPU Xeon 32 cores and 256-GB RAM
• 8 with CPU Xeon 24 cores and 100-GB RAM
• 11 with CPU Xeon 16 cores and 64-GB RAM
• 7 with CPU Xeon 8 cores and 32-GB RAM.
64
65. Experiment on Real Data Center
• The servers hosted 447 VMs assigned
• a number of virtual cores varying between 1 - 4
• an amount of RAM varying between 1 - 16 GB.
• M-type: 358 (80%) & C-type: 88 (20%)
• M-type VMs accounted for
• 49.44% of the overall CPU load
• 92.15% of the overall memory load.
65
66. Experiment on Real Data Center
• Network adapters with bandwidth of 10 Gbps.
• Assignment procedure
• T = 0.8 (imposed by the data center administrator)
• p = 3.
• Migration procedure
• Th = 0.95, Tl = 0.5
• Shape parameters α and β = 0.25
66
72. Outline
• Introduction
• Related Works
• Scenario & Performance Metrics
• ecoCloud: Assignment & Migration Procedures
• Mathematical Analysis & Experiment on Real DC
• Comparison ecoCloud & Best Fit Decreasing
• Results with Different Data Center Sizes
• Conclusion
72
73. Comparison Between ecoCloud & BFD
• A variant of the classical Best Fit Decreasing algorithm described and analyzed in [1] is implemented.
• It was proved in [18] that the BFD algorithm is the polynomial algorithm that gives the best results in terms of effectiveness.
73
75. Comparison Between ecoCloud & BFD
75
And then the VMs are sorted in decreasing order of CPU utilization.
100MHz 80MHz 60MHz 55MHz 40MHz
76. Comparison Between ecoCloud & BFD
• Each VM is allocated to the server that provides
the smallest increase of the power consumption.
76
100MHz 80MHz 60MHz 55MHz 40MHz
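A minimal sketch (Python) of the Best Fit Decreasing variant described on slides 75-76, reusing the linear power model assumed earlier; the server capacities and helper names are illustrative, not taken from [1]:

P_IDLE, P_MAX = 175.0, 250.0

def power(used_mhz, capacity_mhz):
    # Linear power model (assumed): idle power plus a load-proportional part.
    return P_IDLE + (P_MAX - P_IDLE) * (used_mhz / capacity_mhz)

def bfd_assign(vm_demands_mhz, server_capacities_mhz):
    used = [0.0] * len(server_capacities_mhz)
    placement = {}
    # Sort VMs in decreasing order of CPU demand (slide 75).
    for i, demand in sorted(enumerate(vm_demands_mhz), key=lambda x: -x[1]):
        best, best_increase = None, None
        for s, cap in enumerate(server_capacities_mhz):
            if used[s] + demand > cap:
                continue
            # Power increase caused by hosting this VM on server s (slide 76);
            # a switched-off server (used == 0) also pays its idle power.
            increase = power(used[s] + demand, cap) - (power(used[s], cap) if used[s] > 0 else 0.0)
            if best is None or increase < best_increase:
                best, best_increase = s, increase
        placement[i] = best
        if best is not None:
            used[best] += demand
    return placement

print(bfd_assign([100, 80, 60, 55, 40], [2000, 2000]))   # all five VMs fit on one server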
77. Comparison Between ecoCloud & BFD
• A key parameter is the interval of time between two executions of the algorithm.
• Experiments with four different values of the
interval: 1, 5, 15, and 60 minutes.
• Use a home-made Java simulator fed with the
logs of real VMs to compare ecoCloud and BFD
in a data center with 400 servers.
77
78. Comparison Between ecoCloud & BFD
• The traces represent the CPU utilization of 6,000
VMs, monitored in March/April 2012 and
updated every 5 minutes.
• Since the CPU is the only resource considered in
[1], we also consider this resource only for the
experiments reported below.
78
79. Comparison Between ecoCloud & BFD
• The VMs are assigned to 400 servers, using the ecoCloud and BFD algorithms for assignment and migration of VMs.
• Servers are all equipped with 2-GHz cores.
• 1/3 (4 cores), 1/3 (6 cores), 1/3 (8 cores)
• Ta = 0.90, Tl = 0.50, Th = 0.95
• α = 0.25, and β = 0.25.
79
84. Outline
• Introduction
• Related Works
• Scenario & Performance Metrics
• ecoCloud: Assignment & Migration Procedures
• Mathematical Analysis & Experiment on Real DC
• Comparison ecoCloud & Best Fit Decreasing
• Results with Different Data Center Sizes
• Conclusion
84
85. Results with Different Data Center Sizes
• In small systems, it can happen that all the
servers reject the VM.
• even when some of them have enough spare
CPU to accommodate the VM.
• The probability of this event becomes negligible
in large data centers.
• a server is activated only when strictly needed.
85
86. Results with Different Data Center Sizes
• Simulations with data centers of different size
• 100, 200, 400, and 3,000 servers
• Using the VM traces described in the previous
section
86
87. Outline
• Introduction
• Related Works
• Scenario & Performance Metrics
• ecoCloud: Assignment & Migration Procedures
• Mathematical Analysis & Experiment on Real DC
• Comparison ecoCloud & Best Fit Decreasing
• Results with Different Data Center Sizes
• Conclusion
87
88. Conclusion
• This paper tackles the issue of energy-
related costs in data centers and Cloud
infrastructures.
• The aim is to consolidate the VMs on as
few PMs as possible
• Minimize power consumption and carbon
emissions.
• Ensuring a good level of the QoS experienced.
88
89. Conclusion
• Proposed mapping of VMs based on Bernoulli trials.
• Single servers decide, on the basis of local information.
• ecoCloud is particularly efficient in large data centers.
89
90. Conclusion
• Mathematical analysis and experiments in a real DC
prove that ecoCloud can
• Reduce power consumption
• Avoid overload events that cause SLA violations
• Limit the number of VM migrations and server
switches
• Balance CPU-bound and RAM-bound applications.
90
Editor's Notes
Hi everyone, I'm today's first speaker, Caroline.
The paper I want to present is Probabilistic Consolidation of Virtual Machines in Self-Organizing Cloud Data Centers.
It appeared in IEEE Transactions on Cloud Computing, July-December 2013.
This is the outline:
First is the introduction, and then the related work.
Next is a general description of the scenario and performance metrics.
And then we define the ecoCloud’s two components : assignment and migration procedure.
And next I will introduce the evaluation of the performance of ecoCloud.
There are mathematical analysis of the assignment procedure,
and a real experiment performs in the data center.
Next, I will talk about the comparison between ecoCloud and one of the best deterministic algorithms, Best Fit Decreasing.
Finally, the focus is on the scalability properties of ecoCloud with different data center sizes,
and the conclusion.
Ok Let’s begin at the introduction
In the Introduction I will go through these parts:
First talk about the Datacenter and the Power consumption issue
Next is how to use Virtualization & Consolidation to increase the utilization.
Then is this paper’s contribution.
All the main trends in information technology,
for example Cloud Computing, Big Data, or the Internet of Things,
need large and powerful computing infrastructures to support them.
The increasing demand for computing resources has led companies and resource providers to build large warehouse-sized data centers,
A data center, or server farm, like this one
is a facility used to house computer systems and associated components
and provide computation or storage resources.
It generally includes
computation or storage resource
redundant or backup power supplies
redundant data communications connections
environmental controls (e.g., air conditioning, fire suppression)
various security devices.
But with the increasing demand, data centers require a significant amount of power to operate and consume a lot of energy.
In 2006, the energy consumed by IT infrastructures in the USA was about 61 billion kWh, corresponding to 1.5% of all the produced electricity
And 2% of the global carbon emissions, which is equal to the aviation industry
and these figures are expected to double every 5 years [1].
In google’s data center, it has the cooling towers to keep the server’ temperature, and enhance the power’s utilization rate.
In the past few years, the reduction of energy consumption
is by improving the efficiency of cooling and power supplying facilities in data centers.
And they use Power usage effectiveness(PUE) , this metrics to measure how efficiently a computer data center uses energy.
It defined as the ratio of the overall power entering the data center and the power devoted to computing facilities
In past few year, the typical values have decreased from 2 and 3 to lower than 1.1.
Means when increase the power to the computing facilities, the power which other facilities need will be less,
and make the power usage more effectiveness.
However, there are lots of space remains for the optimization of the computing facilities themselves.
In this picture we can see most of the time servers operate at 10-50% of their full capacity [2], [3].
This low utilization is caused by the changing variability of VMs’ workload
Since the data center is planned to sustain the peaks of load, but for long periods of time the load is much lower, power is wasted. [4], [5]
And since an active but idle server consumes between 50-70% of the power consumed when it is fully utilized [6], a large amount of energy is used even at low utilization.
Although as much of the power as possible goes to computing, the utilization of each individual server is still far from optimal.
So virtualization was proposed, and it can be exploited to alleviate the problem.
Many Virtual Machines (VMs) can be executed on the same physical server to increase utilization.
In this example, there is no need to have 9 servers to run each application; instead, the applications run on VMs and several VMs are placed on one physical server.
This enables the consolidation of the workload.
Consolidation means allocating the maximum number of VMs on the minimum number of physical machines [7].
Consolidation allows unneeded servers to be put into a low-power state or switched off,
or devoted to the execution of incremental workload.
This increases server utilization.
Unfortunately, efficient VM consolidation is prevented by its complexity.
The optimal assignment of VMs to the PMs of a data center is analogous to the NP-hard “Bin Packing Problem,”
The problem of assigning a given set of items of variable size to the minimum number of bins.
The problem is more complicated due to these two circumstances:
the assignment of VMs should take into account multiple server resources at the same time, for example, CPU and memory,
So it becomes a “multi-dimensional bin packing problem,” much more difficult than the single dimension problem;
2) even when a good assignment has been achieved
the VMs continuously modify their hardware requirements.
In this paper
the authors proposed ecoCloud, an approach that is partly inspired by ant algorithms.
They use two types of probabilistic procedures, "Assignment" and "Migration", and I will explain how they work later.
The key decisions are made by single servers, which allows a complex problem to be solved by combining simple operations,
not only increasing the utilization of servers but also consolidating VMs dynamically and locally.
This approach is also extended to the multi-dimension problem (CPU and memory)
in the following experiments.
In this way, we can save electrical costs and ensure the quality of service
with respect to the Service Level Agreements stipulated with users.
Because Consolidation is a powerful means to improve IT efficiency and reduce power consumption [7], [25], [26].
There is a lot of related work, and I will briefly introduce some of it:
how it deals with consolidation, and what its drawbacks are.
First, in [27] and [13]
They try to forecast the processing load and aim at determining the minimum number of servers that should be switched on to satisfy the demand.
However, in the author’s opinion
the problem is how to correctly set the server’s number?
How to predict the processing load precisely?
And how the VMs map to servers in a dynamic environment?
Since the problem of optimally mapping VMs to PMs can be reduced to the bin packing problem.
And it is also known as a NP-hard problem,
the heuristic approaches can only lead to suboptimal solutions.
There are lots of heuristic approaches presented to decide how to do the consolidation.
Like the Best Fit Decreasing algorithm in [1], and the First Fit Decreasing algorithm in [28].
And [30] tackles the problem by exploiting the Constraint Programming paradigm.
Based on some constraints, a priority order is made to guide the solver to a good solution.
These algorithms use lower and upper utilization thresholds to enact migration procedures [29].
All these approaches represent important steps toward the deployment of green-aware data centers,
but they still share a couple of notable drawbacks.
First, they use deterministic and centralized algorithms,
whose efficiency degrades as the size of the data center grows.
Second,
their mapping strategies may require the concurrent migration of many VMs,
which can cause considerable performance degradation during the reassignment process.
There is also another line of work with no central control.
In [33], the data center is modeled as a P2P network,
and servers explore the network to collect information that can later be used to migrate VMs.
The V-MAN system, proposed in [34],
uses a gossip protocol through which servers communicate their state to each other.
However, the complete absence of centralized control can be seen as an obstacle by the data center administrator.
Another issue is the need to consider more than one resource, which makes the problem even more complicated.
Some algorithms have been presented for this case: [38] is based on the first-fit approximation for the bin packing problem,
[39] tackles the problem with an LP formulation that gives higher priority to virtual machines with more stable workloads,
and [41] performs dynamic consolidation based on constraint programming, where constraints are defined both on CPU and on RAM utilization.
The problem is that they all rely on a complex centralized algorithm.
Conversely, the approach presented here has the advantage that
it adopts a probabilistic approach,
is naturally scalable, and uses an asynchronous and smooth migration process,
which ensures that VMs are relocated gradually.
With ecoCloud, even though servers autonomously decide whether or not to migrate or accept a VM,
the final decisions are still granted to the central manager of the data center, which ensures better control of the operations.
Next, the scenario and the performance metrics are introduced.
The objective of ecoCloud is to dynamically map VMs to PMs so as to save electrical costs
while respecting the Service Level Agreements.
In this scenario,
when an application request is transmitted from a client to the data center manager,
the data center manager selects a VM that is appropriate for the application,
on the basis of application characteristics such as the amount of required resources (CPU, memory, storage space)
and the type of operating system specified by the client.
Then, the VM is assigned to one of the available servers through the assignment procedure.
The main idea underlying the whole approach is that
it is up to the single servers to decide whether they should accept or reject a VM.
These decisions are based on information available locally, for example on the local CPU and RAM utilization.
The data center manager has only a coordinating role, and it does not need to execute any complex centralized algorithm to optimize the mapping of VMs.
The workload of each application is dynamic, so its demand for computational resources varies with time;
therefore a migration procedure is needed to keep the placement adapted to the workload.
Migrating a VM can be advantageous either when resource utilization is too low, meaning that the server is highly underutilized,
or when it is too high, possibly causing overload situations and QoS violations.
The migration procedure consists of two steps: first, a server requests the migration of one of its VMs;
then, the server that will host the migrating VM is chosen, with a technique similar to the assignment procedure.
Several performance metrics can be used to measure the performance:
Resource utilization.
Number of active servers (VMs should be clustered onto as few servers as possible).
Consumed power.
Frequency of migrations and server switches.
SLA violations, with respect to the agreements signed with the users.
In the next section, we describe how the two main probabilistic procedures work.
In the previous slide, we said that the assignment procedure is performed when a client asks the data center to execute a new application.
Once the application is associated with a compatible VM, the data center manager assigns the VM to one of the servers for execution.
Instead of taking the decision on its own, the manager delegates the main part of the procedure to the single servers.
Specifically, it sends an invitation to all the active servers, or to a subset of them
depending on the data center size and architecture, to check whether they are available to accept the new VM.
Whether the invitation is accepted or rejected depends on the server's utilization.
If the server is over-utilized or under-utilized on either of the two considered resources, the invitation should be rejected,
because overutilization might cause overload situations and penalize the quality of service,
while in the case of underutilization the objective is to put the server into a sleep mode and save energy,
so the server should refuse new VMs and try to get rid of those that are currently running.
Conversely, a server with intermediate utilization should accept new VMs to foster consolidation.
And how does a server decide whether to accept or reject?
The server's decision is taken by performing a Bernoulli trial.
The success probability of this trial is equal to the value of the overall assignment function;
with two resources, the assignment function is evaluated for each of them.
The following parameters are used in the Bernoulli trial:
x (valued between 0 and 1) is the relative utilization of a resource, CPU or RAM;
T is the maximum allowed utilization (e.g., T = 0.8 means that the resource utilization cannot exceed 80 percent of the server capacity);
p is a shape parameter;
the factor Mp is used to normalize the maximum value of the function to 1.
This slide shows the single-resource assignment function and its normalization factor.
The assignment function is equal to zero when x > T.
This figure shows the resulting assignment probability.
The x-axis is the resource utilization, and the y-axis is the value of the assignment probability function.
The figure shows the graph of the single-resource assignment function for some values of the parameter p, with T = 0.9.
The value of p can be used to modulate the shape of the function.
The value of x at which the function reaches its maximum,
i.e., the value at which assignment attempts succeed with the highest probability,
increases and approaches T as the value of p increases.
The value of the function is zero or very low when the resource is over-utilized or under-utilized.
Let us and ms be, respectively, the current CPU and RAM utilization at server s,
pu and pm the shape parameters defined for the two resources,
and Tu and Tm the respective maximum utilizations.
The overall assignment function for server s, denoted as fs, is defined as shown in this equation,
i.e., as the product of the two single-resource assignment functions.
Because it takes both resources into account,
this ensures that servers tend to respond positively when they have intermediate utilization values for both CPU and RAM;
if one of the resources is under- or over-utilized, the success probability of the Bernoulli trial is low.
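The slides only reference the formula, so here is a minimal sketch of the server-side decision in Python, assuming a single-resource function of the form f(x) = x^p * (T - x) / Mp for 0 <= x <= T (and 0 otherwise); this shape matches the properties described above (zero beyond T, maximum normalized to 1 by Mp, peak moving toward T as p grows), but the exact expression should be checked against the paper:

import random

def single_resource_assignment(x, p, T):
    # Assumed single-resource assignment function: x^p * (T - x), normalized so
    # that its maximum value is 1, and equal to zero when x exceeds the threshold T.
    if x < 0.0 or x > T:
        return 0.0
    x_peak = p * T / (p + 1)                 # utilization at which the function peaks
    Mp = (x_peak ** p) * (T - x_peak)        # normalization factor
    return (x ** p) * (T - x) / Mp

def accept_new_vm(cpu_util, ram_util, p_u=3, p_m=3, T_u=0.9, T_m=0.9):
    # Overall assignment function fs: product of the two single-resource functions.
    # The server accepts the invitation if a Bernoulli trial with probability fs succeeds.
    f_s = (single_resource_assignment(cpu_util, p_u, T_u) *
           single_resource_assignment(ram_util, p_m, T_m))
    return random.random() < f_s

With p = 3 and T = 0.9, for example, the success probability of this sketch peaks at a utilization of about 0.68 and drops to zero at the threshold, so servers with intermediate utilization on both resources are the most likely to accept.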
If the Bernoulli trial is successful, the server communicates its availability to the data center manager.
Then, the manager selects one of the available servers, and assigns the new VM to it.
If none of the contacted servers is available, i.e., all the Bernoulli trials are unsuccessful,
it means that in all the servers one of the two resources (CPU or RAM) is close to the utilization threshold.
This usually happens when the overall workload is increasing, so that the current number of active servers cannot sustain the load.
In such a case, the manager wakes up an inactive server and requests it to run the new VM.
If there is no server to wake up, i.e., all the servers are already active,
it is a sign that the servers are unable to sustain the load even when the workload is consolidated,
which means the provider should consider buying new servers.
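Putting these steps together, a rough sketch of the manager-side flow just described (invite the active servers, pick one that accepts, otherwise wake a spare server) could look as follows; the server objects, their attributes, and the selection policy are illustrative assumptions rather than details from the paper, and accept_new_vm is the function sketched above:

import random

def assign_vm(vm, active_servers, inactive_servers):
    # Collect the active servers whose Bernoulli trial succeeds.
    available = [s for s in active_servers
                 if accept_new_vm(s.cpu_util, s.ram_util)]
    if available:
        host = random.choice(available)      # the manager picks one available server
    elif inactive_servers:
        host = inactive_servers.pop()        # no server accepted: wake an inactive one
        active_servers.append(host)
    else:
        # Every server is already active and none accepted the VM:
        # the data center cannot sustain the load even when consolidated.
        raise RuntimeError("capacity exhausted: consider adding servers")
    host.run(vm)
    return host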
The assignment process efficiently consolidates the VMs,
but the application workload changes with time.
When some VMs terminate or reduce their demand for server resources, the server may become underutilized, which lowers energy efficiency.
On the other hand, when the VMs increase their requirements, a server may become overloaded.
In both situations, some VMs should be profitably migrated to other servers.
Like the assignment procedure, the migration procedure is defined as follows:
each server monitors its CPU and RAM utilization using the libraries
provided by the virtualization infrastructure (e.g., VMware or Hyper-V)
and checks whether it lies between two specified thresholds,
the lower threshold Tl and the upper threshold Th.
When the utilization is below the threshold Tl, or above the threshold Th,
the server evaluates the corresponding migration probability function
and performs a Bernoulli trial whose success probability is set to the value of this function.
If the trial is successful, the server requests the migration of one of its local VMs.
If x is the utilization of a given resource, CPU or RAM, two migration probability functions are defined, one for each case.
The corresponding migrations are called "low migrations" and "high migrations".
This graph shows the functions used in these Bernoulli trials.
The x-axis is the resource utilization, and the y-axis is the value of the migration probability function.
The figure shows the graph of the single-resource migration functions for some values of the parameters, with Tl set to 0.3 and Th set to 0.8.
The shape of the functions can be modulated by tuning the parameters α and β, which can be used to foster or hinder migrations.
The same functions are applied to CPU and RAM, and the parameters Tl, Th, α, and β can have different values for the two resources.
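The formulas themselves appear only on the slide; as a minimal sketch, assuming two simple shapes with the behavior described (the low-migration probability grows as the utilization falls below Tl, the high-migration probability grows as it rises above Th, with α and β as shape exponents), the per-server monitoring step could be written like this, with the exact expressions to be checked against the paper:

import random

def low_migration_probability(x, T_l=0.3, alpha=0.25):
    # Assumed shape: zero at x = T_l, growing toward 1 as the utilization drops to 0.
    return 0.0 if x >= T_l else (1.0 - x / T_l) ** alpha

def high_migration_probability(x, T_h=0.8, beta=0.25):
    # Assumed shape: zero at x = T_h, growing toward 1 as the utilization approaches 1.
    return 0.0 if x <= T_h else ((x - T_h) / (1.0 - T_h)) ** beta

def should_request_migration(cpu_util, ram_util):
    # Each server checks both resources and runs a Bernoulli trial with the
    # probability given by the low- or high-migration function, whichever applies.
    for x in (cpu_util, ram_util):
        prob = low_migration_probability(x) + high_migration_probability(x)
        if random.random() < prob:
            return True          # the server asks to migrate one of its local VMs
    return False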
Whenever a Bernoulli trial succeeds, the server must choose the VM to consider for migration.
In the case of a high migration, the server focuses on the over-utilized resource (CPU or RAM).
The server chooses the VMs for which
the utilization of that resource is larger than the difference
between the current server utilization and the threshold Th,
so that a single migration is enough to bring the server back below the threshold.
For example, if the current server utilization is 0.9 and the threshold is 0.8,
the difference is 0.1, and in the slide example VM2, which uses more than 0.1 of that resource, is chosen for transfer.
If more than one VM satisfies the condition, one of them is randomly selected for migration.
In the low migration case, the choice of the VM to migrate is made randomly.
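A minimal sketch of this high-migration selection rule, with illustrative VM and server objects (the attribute and method names are my own, not the paper's):

import random

def choose_vm_for_high_migration(server, resource, T_h):
    # Candidates are the VMs whose usage of the over-utilized resource exceeds the
    # gap between the server's current utilization and Th, so that migrating a
    # single VM brings the server back below the threshold.
    gap = server.utilization(resource) - T_h
    candidates = [vm for vm in server.vms if vm.utilization(resource) > gap]
    return random.choice(candidates) if candidates else None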
The choice of the new server that will host the migrating VM
is made by using a variant of the assignment procedure, with two main differences.
First, in the high migration case,
the threshold T of the assignment function is set to 0.9 times the resource utilization of the source server,
and this value is sent to the servers along with the invitation.
This ensures that the VM migrates to a less loaded server, and helps avoid multiple migrations of the same VM.
The second difference concerns the low migration:
when no server is available to run a migrating VM, it would not be acceptable to switch on a new server to accommodate it,
because that would activate one server only to put another one to sleep.
This approach ensures a gradual and continuous migration process that does not require the simultaneous migration of many VMs.
The data center administrator can set the threshold values and shape parameters to choose different consolidation strategies
(e.g., conservative, intermediate, aggressive).
The parameter values used in the following experiments correspond to the intermediate strategy.
The next section is devoted to a mathematical analysis of the ecoCloud assignment procedure,
followed by an experiment on a real data center.
Some symbols are defined first.
Let Ns be the number of servers in the data center,
Nc the number of cores in each server,
and Nv the number of VMs that can be executed on each core.
It is assumed that two types of VMs are executed on the data center:
CPU-bound and RAM-bound VMs, denoted as C-type and M-type.
C-type VMs need more CPU than M-type VMs by a factor γC > 1;
conversely, M-type VMs need more RAM than C-type VMs by a factor γM > 1.
As mentioned earlier, an active but idle server consumes 50-70 percent of the power consumed when fully utilized [6].
As the CPU utilization increases, the consumed power can be assumed to increase linearly
from the power corresponding to the idle state to the power corresponding to full utilization [13], [14],
where Pmax is the power consumed at maximum CPU utilization (u = 1)
and Pidle is the power consumed when the server is active but idle (u = 0).
In the analytical and simulation experiments presented in this study, the power consumed by a single server is expressed with this linear model.
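The equation itself appears only on the slide; given the definitions above, the standard linear form would be:
P(u) = Pidle + (Pmax - Pidle) * u, for a CPU utilization u between 0 and 1.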
In experiments on real data centers, the consumed power is directly monitored and measured.
Before the experiments begin, some parameters must be set.
To analyze the behavior of the system, an experiment was performed for a data center with 100 servers,
each with 6 cores at a CPU frequency of 2 GHz and 4 GB of RAM.
In the experiment, each VM uses 500 MHz of CPU,
which means that one core can support 4 VMs.
The power consumed by each server at maximum utilization is set to 250 W, a typical value for the servers of a data center,
and the idle power is set to 70 percent of the maximum power.
The average CPU (memory) load of the data center,
denoted as ρC (ρM),
is defined as the ratio between
the total amount of CPU (memory) required by the VMs
and the corresponding capacity of the data center.
For each server in this experiment, the initial CPU and RAM utilizations are set to 40 percent of the server capacity,
with maximum utilization threshold T = 0.9 and p = 3.
Under normal operation, without using ecoCloud,
the data center would tend to a steady condition in which all the servers remain active with CPU and RAM utilization around 40 percent.
With ecoCloud,
the workload consolidates to only 45 servers, while 55 are switched off.
which allows the data center to nearly halve the consumed power.
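This figure is consistent with a back-of-envelope check (my own, not from the paper): packing a 40 percent overall load into servers filled up to the T = 0.9 threshold requires roughly
100 servers * 0.40 / 0.9 ≈ 44.4, i.e., about 45 active servers.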
Next, different values of γC and γM were considered,
i.e., different ratios between the CPU and RAM demanded by the two types of VMs.
The values of the two parameters were kept equal to one another and, in different tests, were set to
1.0 (the two kinds of applications coincide),
1.5 (C-type applications need 50 percent more CPU than M-type ones),
2.0, and 4.0 as the most extreme case.
At the end of the consolidation process,
the 45 active servers show nearly the same distribution of their hardware resources between the two types of applications.
The distribution is shown in this figure
for one of the active servers and for the above-mentioned values of γC and γM.
The outcome of this experiment shows that the probabilistic assignment process
balances the two kinds of VMs so that neither the CPU nor the RAM becomes a bottleneck.
For example, in the most imbalanced scenario (γC and γM equal to 4.0),
about 71 percent of the CPU is assigned to C-type VMs while about 18 percent is given to M-type VMs,
and the opposite occurs for memory.
Both CPU and RAM are utilized up to the permitted threshold (90 percent) and the workload is consolidated efficiently,
which allows 55 servers to be hibernated and the consumed power to be almost halved.
Such an efficient consolidation is possible when the relative overall loads of CPU and RAM are comparable (both equal to 40 percent in this case).
If one of the two resources undergoes a heavier demand, that resource inevitably limits the consolidation degree.
To this purpose,
experiments were run in which the overall CPU load, ρC, is set to 40 percent of the total CPU capacity of the servers,
while the overall RAM load, ρM, is varied between 20 and 60 percent.
For this set of experiments, the values of γC and γM are set to 4.0.
When the overall memory load is lower than 0.4 (cases ρM = 0.2 and ρM = 0.3)
the CPU is the critical resource and is the one that drives the consolidation process.
When the most critical resource is the memory, as happens in the cases ρM = 0.5 and ρM = 0.6,
the consolidation process is driven by the allocation of RAM to the VMs.
In this figure we can see that,
when the overall memory load is lower than 0.4,
the number of active servers and the consumed power are the same as in the previous experiment.
When the most critical resource is the memory, more active servers and more power are needed
to satisfy the increased demand for memory:
in the cases where the memory load is equal to 50 and 60 percent of the data center capacity,
56 and 67 servers are kept active, and the corresponding values of consumed power are about 13 kW and about 15 kW.
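These numbers again match a rough estimate (mine, not the paper's) based on filling the active servers up to the 0.9 threshold with the dominant resource:
100 * 0.5 / 0.9 ≈ 55.6, i.e., about 56 servers, and 100 * 0.6 / 0.9 ≈ 66.7, i.e., about 67 servers.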
Overall, it may be concluded that the approach is always able to consolidate the load as much as is allowed by the most critical hardware resource.
In the previous section, we showed the effectiveness of ecoCloud in consolidating the load under various scenarios.
However, the model relies on some simplifying assumptions.
To validate the model and prove that ecoCloud is effective in real scenarios,
the authors ran experiments in May 2013 on a live data center owned by a major telecommunications operator.
The experiment was run on 28 servers virtualized with the VMware vSphere 4.0 platform.
The slide lists the servers' CPU and memory equipment.
The servers hosted 447 VMs which were assigned a number of virtual cores varying between 1 and 4
and an amount of RAM varying between 1 GB and 16 GB.
According to their usage of the two resources, the VMs were categorized into CPU-bound (C-type) and memory-bound (M-type).
In this data center, 80 percent of the VMs were memory-bound.
The remaining 20 percent were CPU-bound.
The M-type VMs contributed 49.44 percent of the overall CPU load and 92.15 percent of the overall memory load.
All the servers have network adapters with bandwidth of 10 Gbps.
In the real experiments, both the assignment and the migration procedures were activated.
The parameters of the assignment function were set as follows
T = 0.8 (this value was imposed by the data center administrator), p = 3.
VMs are migrated
when the CPU or memory load exceeds the high threshold Th, set to 0.95,
or when the most utilized resource (the RAM in this case) goes below the low threshold Tl, set to 0.5.
The shape parameters α and β were set to 0.25.
This figure shows the number of active servers
starting from the time at which ecoCloud is activated and for the following 7 days.
Within the first day 11 servers are hibernated.
In the following days, the number of active servers stabilizes,
but daily workload variations allow one or two additional servers to be hibernated during the night.
This figure shows that the consumed power decreases, following the trend of the previous figure.
This figure reports the number of high and low migrations performed during each hour of the analyzed period on the whole data center.
In the first day, migrations are mostly from lightly utilized servers, which are first unloaded and then hibernated.
As the consolidation process proceeds, active servers tend to be well utilized, and some high migrations are needed to prevent overload events,
while low migrations allow the consolidation to be further improved during the night.
After the first day, only a few migrations per day are performed.
This figure offers a snapshot of the data center at the end of the seventh day of ecoCloud operation, when only 17 servers are active.
This figure reports for each of the 28 servers, the amount of CPU and RAM utilized by C-type and M-type VMs.
Since in this scenario most VMs are memory-bound, the consolidation is driven by RAM:
in all the active servers the RAM utilization is about 70 percent.
This figure reports the numbers of VMs of the two types that run on each server.
With the exception of servers 2 and 3, on which no C-type VM is running,
the proportion between the two types of VMs is comparable to the overall 80-20 proportion.
Next, a set of experiments was performed to compare ecoCloud with one of the deterministic and centralized algorithms, BFD.
Here we implemented a variant of the classical Best Fit Decreasing algorithm
described and analyzed in [1], referred to as BFD in the following.
This choice was made because it was proved in [18] that the BFD
is the polynomial algorithm that gives the best results in terms of effectiveness.
At each execution of BFD, the VMs of over-utilized and under-utilized servers are collected
and sorted in decreasing order of CPU utilization.
Respecting this order, each VM is allocated to the server that provides
the smallest increase in power consumption caused by the allocation.
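A minimal sketch of this BFD variant in Python, assuming the linear power model recalled earlier; the server and VM objects, with methods such as can_host, power, and power_with, are illustrative assumptions rather than details from the paper:

def bfd_reassign(vms, servers):
    # Sort the collected VMs in decreasing order of CPU utilization.
    ordered = sorted(vms, key=lambda vm: vm.cpu_demand, reverse=True)
    for vm in ordered:
        best_server, best_increase = None, float("inf")
        for s in servers:
            if not s.can_host(vm):           # skip servers without spare capacity
                continue
            # Power increase caused by placing the VM on this server,
            # e.g., computed with P(u) = Pidle + (Pmax - Pidle) * u.
            increase = s.power_with(vm) - s.power()
            if increase < best_increase:
                best_server, best_increase = s, increase
        if best_server is not None:
            best_server.host(vm)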
A key parameter of BFD is the interval of time between two successive executions of the algorithm
therefore, we performed experiments with four different values of the interval: 1, 5, 15, and 60 minutes.
Because ecoCloud could not be installed on real data centers with more than 100 servers,
the authors used a home-made Java simulator fed with the logs of real VMs to compare ecoCloud and BFD in a data center with 400 servers.
The traces represent the CPU utilization of 6,000 VMs, monitored in March/April 2012 and updated every 5 minutes.
Since the CPU is the only resource considered in [1], we also consider this resource only for the experiments reported below.
We assigned the VMs to 400 servers, using the ecoCloud and BFD algorithms for assignment and migration of VMs.
These servers are all equipped with 2-GHz cores.
One third of the servers have four cores, one third have six cores and the remaining third have eight cores.
The parameters of the assignment and migration functions were set as follows.
This figure reports the distribution of the average CPU utilization of the VMs
measured as a percentage of the total CPU capacity of the hosting physical machine.
The graph shows that the average CPU utilization is under 20 percent for most VMs.
It is clear that this kind of distribution leaves much room for consolidation algorithms.
Fig. 16 reports the average number of active servers versus the overall load in ecoCloud and BFD.
The curves are close to each other, and also close to the optimal value of the associated bin packing problem.
ecoCloud requires a slightly larger number of active servers,
mostly because the CPU utilization of servers is allowed to decrease by a certain amount before low migrations are triggered,
to avoid migrations that are not strictly necessary.
Similar observations can be made by analyzing the average consumed power of the two algorithms, shown in Fig. 17.
BFD achieves a slightly better consolidation degree;
however, this comes at a considerable cost in terms of the number of migrations and the probability of overload events.
Fig. 18 shows that the number of migrations is much higher with BFD than with ecoCloud.
For example, with a load equal to 0.3, fewer than 400 migrations per hour are needed by ecoCloud,
while about 10,000 migrations per hour are needed by BFD when the time interval is set to 1 minute.
Since BFD runs 60 times per hour in that case, this corresponds to more than 150 simultaneous migrations at each algorithm execution.
If the BFD time interval is enlarged, the frequency of migrations can be reduced,
but the number of required simultaneous migrations increases.
A large number of simultaneous migrations can in turn cause other problems, for example on the network bandwidth.
This figure reports the percentage of time of CPU overload.
The value of this index is remarkably lower in ecoCloud,
due to its capacity of immediately reacting with high migrations each time the CPU utilization exceeds the upper threshold.
The probability of overload in BFD comes from the combination of two contrasting phenomena:
if the algorithm is executed frequently, the consolidation effort is stronger (cf. Fig. 16),
which brings the servers closer to their CPU limits and increases the overload probability;
when the time interval is larger, the consolidation effort is lower, but VM workload variations are left uncontrolled for a longer time, which can also cause overload events.
Thus, overload events are present under any load condition. With ecoCloud, the index is hardly affected by the value of the overall load.
ecoCloud is scalable, a property inherited from the probabilistic, self-organizing, and partially distributed nature of the algorithm.
This section focuses on the scalability properties of ecoCloud with different data center sizes.
In small systems, it can happen that all the servers,
after the execution of their Bernoulli trials,
reject the VM even when some of them have enough spare CPU to accommodate it.
The probability of this event is negligible in large data centers,
where a server is activated only when strictly needed.
To assess the scalability of ecoCloud,
the paper reports simulations with data centers of different sizes (100, 200, 400, and 3,000 servers),
using the VM traces described in the previous section.
This figure reports the fraction of active servers versus the overall load and shows that this fraction is nearly independent of the system size.
Tests were also performed for a data center with 3,000 servers in which invitations are forwarded to varying numbers of servers.
These tests confirm that there is no advantage in sending invitations to more than about 100 servers.
Finally, the conclusion.
This paper tackles the issue of energy-related costs in data centers and Cloud infrastructures.
The aim is to consolidate the VMs on as few PMs as possible,
so as to minimize power consumption and carbon emissions
while ensuring a good level of QoS for the users.
With ecoCloud, the approach proposed in the paper, the mapping of Virtual Machines is based on Bernoulli trials,
and the decisions made by single servers on the basis of local information keep the complexity low.
The self-organizing and probabilistic nature makes ecoCloud particularly efficient in large data centers.
This is a notable advantage with respect to other, fully deterministic algorithms.
Through the mathematical analysis and the experiments performed on a real data center,
the authors show that ecoCloud can:
Reduce power consumption
Avoid overload events that cause SLA violations
Limit the number of VM migrations and server switches
Balance CPU-bound and RAM-bound applications.