The document proposes a cost-aware virtual machine placement approach across distributed data centers using Bayesian networks. It designs a Bayesian network to encode expert knowledge about cloud infrastructure management, uses the GQM (Goal-Question-Metric) method to define measures for criteria based on the Bayesian network outputs, and finally applies multi-criteria decision analysis to build a utility function for virtual machine allocation and migration decisions. The approach was evaluated with a cloud simulation framework and real workload and infrastructure data, showing total-cost reductions of up to 69% compared to baseline algorithms.
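The decision pipeline described above can be pictured with a small sketch: criteria scores (here assumed to come from Bayesian-network outputs, normalized to 0..1) are combined by an additive multi-criteria utility function to rank candidate hosts. All names, weights, and scores below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of the multi-criteria utility step; the criteria,
# weights, and scores are illustrative, not taken from the paper.

def utility(criteria: dict, weights: dict) -> float:
    """Additive utility over normalized criteria (0..1, higher is better)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * criteria[c] for c in weights)

# Candidate hosts scored on criteria derived (hypothetically) from
# Bayesian-network outputs.
candidates = {
    "dc1-host3": {"energy": 0.7, "sla": 0.9, "network": 0.6},
    "dc2-host1": {"energy": 0.9, "sla": 0.7, "network": 0.8},
}
weights = {"energy": 0.4, "sla": 0.4, "network": 0.2}

best = max(candidates, key=lambda h: utility(candidates[h], weights))
```

In practice the weights themselves would come out of the multi-criteria decision analysis step, and migration decisions would also charge a cost term.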
A Study of Virtual Machine Placement Optimization in Data Centers (CLOSER'2017)Stéphanie Challita
In recent years, cloud computing has emerged as a valuable way to host and deliver services over the Internet, and data centers increasingly rely on this platform to host a large number of applications (web hosting, e-commerce, social networking, etc.). Server utilization in most data centers can therefore be improved by introducing virtualization and selecting the most suitable host for each Virtual Machine (VM).
VM placement is a multi-objective optimization problem that can be tackled through various approaches, each aiming to simultaneously reduce power consumption, maximize resource utilization, and avoid traffic congestion. The main goal of this literature survey is to provide a better understanding of existing approaches and algorithms for VM placement in the context of cloud computing and to identify future directions.
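As a concrete baseline for the heuristics such surveys cover, VM placement is often reduced to bin packing. Below is a minimal first-fit-decreasing sketch; the single-resource demands and identical host capacities are illustrative simplifications of the multi-resource problem.

```python
def first_fit_decreasing(vm_demands: dict, host_capacity: int):
    """Place VMs (single-resource demand units) onto identical hosts with a
    greedy first-fit pass over demands sorted in descending order."""
    free = []          # remaining capacity per opened host
    placement = {}     # vm -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, cap in enumerate(free):
            if demand <= cap:      # fits on an already-opened host
                free[i] -= demand
                placement[vm] = i
                break
        else:                      # no open host fits: open a new one
            free.append(host_capacity - demand)
            placement[vm] = len(free) - 1
    return placement, len(free)

placement, n_hosts = first_fit_decreasing(
    {"vm1": 4, "vm2": 8, "vm3": 1, "vm4": 2, "vm5": 5}, host_capacity=10)
```

Consolidating onto fewer hosts is what lets the power-aware approaches in the survey switch idle machines off.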
Review of Cloud Computing Simulation Platforms and Related EnvironmentsRECAP Project
This presentation was given by Dr. James Byrne at the Cloud Simulation Workshop @ NC4 2017 on 11th April 2017. Dr. Byrne presents a review of cloud computing simulation platforms and related environments. He provides an overview and multi-level feature analysis of DES tools for cloud computing environments and discusses how these cloud simulation platforms are being used for research purposes.
Optimising Service Deployment and Infrastructure Resource ConfigurationRECAP Project
This is a presentation delivered by Alec Leckey (Intel) at the 2nd Data Centre Symposium held in conjunction with the National Conference on Cloud Computing and Commerce (http://2018.nc4.ie/) on April 10, 2018 in Dublin, Ireland.
Learn more about the RECAP project: https://recap-project.eu/
Install the Intel Landscaper: https://github.com/IntelLabsEurope/landscaper
Presentation by Guillaume Pierre, Professor of Computer Science at the University of Rennes 1 (France), at the FogGuru Boot Camp training in September 2018.
This is my presentation explaining the energy- and carbon-efficient algorithm presented in the conference paper published by the CLOUDS research lab, which also developed the CloudSim cloud simulator.
The RECAP Project: Large Scale Simulation FrameworkRECAP Project
In this presentation, Sergej Svorobej (DCU) gave a brief overview of RECAP and introduced the large scale simulation framework used in the project. The event was held in conjunction with the National Conference on Cloud Computing and Commerce (http://2018.nc4.ie/) and took place April 10, 2018 in Dublin, Ireland.
Learn more about RECAP: https://recap-project.eu/
Energy efficient VM placement - OpenStack Summit Vancouver May 2015Kurt Garloff
Some measurements of cloud energy consumption in our FusionSphere5 OpenStack cloud. And some thoughts on improving it by intelligent scheduling.
(Radu Tudoran, Kurt Garloff, Uli Kleber -- Huawei)
Presentation by Steffen Zeuch, Researcher at German Research Center for Artificial Intelligence (DFKI) and Post-Doc at TU Berlin (Germany), at the FogGuru Boot Camp training in September 2018.
Cost-aware scalability of applications in public clouds Daniel Moldovan
Presentation given in International Conference on Cloud Engineering (IC2E), IEEE, Berlin, Germany, 4-8 April, 2016.
Paper accessible on my website http://www.infosys.tuwien.ac.at/staff/dmoldovan/
Scalable applications deployed in public clouds can be built from a combination of custom software components and public cloud services. To meet performance and/or cost requirements, such applications can scale-out/in their components during run-time. When higher performance is required, new component instances can be deployed on newly allocated cloud services (e.g., virtual machines). When the instances are no longer needed, their services can be deallocated to decrease cost. However, public cloud services are usually billed over predefined time and/or usage intervals, e.g., per hour, per GB of I/O. Thus, it might not be cost efficient to scale-in public cloud applications at any moment in time, without considering their billing cycles.
In this work we aid developers of scalable applications for public clouds to monitor their costs, and develop cost-aware scalability controllers. We introduce a model for capturing the pricing schemes of cloud services. Based on the model we determine and evaluate the application's costs depending on its used cloud services and their billing cycles. We further evaluate cost efficiency of cloud applications, analyzing which application component is cost efficient to deallocate and when. We integrate our approach in a platform for cost-aware scalability of applications running in public clouds. We evaluate our approach on a scalable platform for IoT, deployed in Flexiant, one of the leading European public cloud providers. We show that cost-aware scalability can achieve higher application stability and performance, while reducing its operation costs.
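The billing-cycle argument above can be made concrete with a toy calculation: releasing an instance mid-cycle forfeits the remainder of the already-billed interval, so a cost-aware controller prefers to scale in near a cycle boundary. The rates and thresholds below are illustrative assumptions, not the paper's model.

```python
import math

def wasted_cost_if_released_now(elapsed_s: float, cycle_s: float,
                                rate: float) -> float:
    """Money forfeited by releasing now: the unused remainder of the
    already-billed cycle (interval billing rounds usage up)."""
    billed = math.ceil(elapsed_s / cycle_s) * rate
    used = (elapsed_s / cycle_s) * rate
    return billed - used

def should_scale_in(elapsed_s: float, cycle_s: float = 3600,
                    rate: float = 0.10, waste_threshold: float = 0.01) -> bool:
    """Scale in only when little already-paid time would be thrown away."""
    return wasted_cost_if_released_now(elapsed_s, cycle_s, rate) <= waste_threshold
```

For example, an instance 3540 s into an hourly cycle is cheap to release, while one 1800 s in would forfeit half a billing period.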
From Cloud to Fog: the Tao of IT Infrastructure DecentralizationFogGuru MSCA Project
Keynote by Dr. Guillaume Pierre, Professor of Computer Science at the University of Rennes 1 (France), at the IEEE CloudNet conference, 4th November 2019.
D. Meiländer, S. Gorlatch, C. Cappiello,V. Mazza, R. Kazhamiakin, and A. Buc...ServiceWave 2010
D. Meiländer, S. Gorlatch, C. Cappiello,V. Mazza, R. Kazhamiakin, and A. Bucchiarone: Using a Lifecycle Model for Adaptable Interactive Distributed Applications
SYBL: An extensible language for elasticity specifications in cloud applicati...Georgiana Copil
Presentation given at CCGRID, May 2013
Abstract: Elasticity in cloud computing is a complex problem, regarding not only resource elasticity but also quality and cost elasticity, and most importantly, the relations among the three. Therefore, existing support for controlling elasticity in complex applications, focusing solely on resource scaling, is not adequate. In this paper we present SYBL - a novel language for controlling elasticity in cloud applications - and its runtime system. SYBL allows specifying in detail elasticity monitoring, constraints, and strategies at different levels of cloud applications, including the whole application, application component, and within application component code. Based on simple SYBL elasticity directives, our runtime system will perform complex elasticity controls for the client, by leveraging underlying cloud monitoring and resource management APIs. We also present a prototype implementation and experiments illustrating how SYBL can be used in real-world scenarios.
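The kind of runtime behavior SYBL directives drive can be pictured with a toy control iteration. The constraints and the conflict-resolution order below are illustrative, and this is plain Python standing in for the runtime's logic, not SYBL syntax.

```python
def elasticity_step(cost_per_hour: float, response_ms: float, instances: int,
                    max_cost: float = 800.0, max_latency_ms: float = 200.0) -> int:
    """One control iteration enforcing two elasticity constraints in a
    fixed priority order, as a stand-in for runtime-resolved directives."""
    if cost_per_hour >= max_cost and instances > 1:
        return instances - 1   # scale in to honor the cost constraint
    if response_ms >= max_latency_ms:
        return instances + 1   # scale out to honor the latency constraint
    return instances           # both constraints satisfied: do nothing
```

A real runtime, as the abstract notes, would derive such actions from directives at the application, component, and code level and feed them to cloud monitoring and resource-management APIs.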
Cloud Modeling vs Internal vs Global Market using Burstorm PlatformScott Goessling
Designing with cloud and infrastructure services can be very challenging. It is a big, complex market that moves fast and changes every day. Are you getting ahead or falling behind?
Multi-level Elasticity Control of Cloud Services -- ICSOC 2013Georgiana Copil
Presentation given at ICSOC 2013
Abstract: Fine-grained elasticity control of cloud services has to deal with multiple elasticity perspectives (quality, cost, and resources). We propose a cloud service elasticity control mechanism that considers the service structure to control elasticity at multiple levels: we first define an abstract composition model for cloud services that enables multi-level elasticity control, and then define mechanisms for resolving conflicting elasticity requirements and generating action plans for elasticity control. Using the defined concepts and mechanisms, we develop a runtime system supporting multiple levels of elasticity control and validate the resulting prototype through experiments.
Dynamic Bayesian modeling for risk prediction in credit operations (SCAI2015)AMIDST Toolbox
In this paper we perform an exploratory analysis of a financial data set from a Spanish bank. Our goal is to do risk prediction in credit operations, and as data is collected continuously and reported on a monthly basis, this gives rise to a streaming data classification problem. Our analysis reveals some practical problems that have not previously been thoroughly analyzed in the context of streaming data analysis: the class labels are not immediately available and the relevant predictive features and entities under study (in this case the set of customers) may vary over time. In order to address these problems, we propose to use a dynamic classifier with a wrapper feature subset selection to find relevant features at different time steps. The proposed model is a special case of a more general framework that can also accommodate more expressive models containing latent variables as well as more sophisticated feature selection schemes.
Full text link: http://www.idi.ntnu.no/~helgel/papers/BorchaniMartinezMasegosaLangsethNielsenSalmeronFernandezMadsenSaezSCAI15.pdf
Controlling Project Performance using PDM - PSQT2005 - Ben LindersBen Linders
• A hands-on model for control of product and process quality.
• Support of release risk decisions based on defect data.
• ODC and Test Matrices applied in different test phases.
• Usage of feedback to analyze data and come to actions.
• Using project data for a business case for improvement.
A Survey on Virtualization Data Centers For Green Cloud ComputingIJTET Journal
Abstract — Due to trends like cloud computing and green cloud computing, virtualization technologies are gaining increasing importance. The cloud is a model that delivers computing resources over the network in order to cut down the cost of software and hardware. Nowadays, the power consumed by Internet Data Centers (IDCs) is a major issue with a large impact on society, and researchers are seeking solutions that reduce it. IDCs consume large amounts of energy to deliver cloud services, incur high operational costs, and shorten the lifespan of hardware equipment. The field of green computing is also becoming more and more important in a world with finite energy resources and rising demand. Virtual Machine (VM) mechanisms have been broadly applied in data centers, offering flexibility, reliability, and manageability. This survey discusses virtualized IDCs in the green cloud, covering key features of the green cloud, cloud computing, data centers, virtualization, virtualized data centers, and power-aware, thermal-aware, network-aware, resource-aware, and migration techniques. The several methods used to achieve virtualization in IDCs for green cloud computing are discussed.
Dynamic workload migration over optical backbone network to minimize data cen...Sabidur Rahman
Full paper: http://ieeexplore.ieee.org/abstract/document/7996505/
As more organizations rapidly adopt cloud services, energy consumption in data centers (DCs) is increasing such that today Information and Communication Technology (ICT) has become a major consumer of energy. A large portion of ICT energy consumption is used to power servers running in DCs and the network they use to communicate. In this study, we consider that, often, energy cost at a particular DC is related to the electricity price regulated by Independent System Operators / Regional Transmission Organizations (ISOs/RTOs). As these prices vary in time and depend on the geographical locations of the DCs, recent studies have shown that the spatio-temporal variations of electricity price can be exploited to reduce electricity cost. While most prior works consider a quasi-static scenario with known workload patterns, our study proposes a dynamic workload-aware algorithm that exploits the spatio-temporal variations of electricity costs with the goal to minimize the energy cost in ICT. Our algorithm uses dynamic request rerouting and live virtual machine (VM) migration to move workloads to DCs with lower electricity cost. We consider VM migration cost (including electricity cost at optical backbone network nodes), bandwidth constraints for migration, VM consolidation, constraints from Service Level Agreement (SLA), and administrative overhead of VM migration. Our simulation studies show that the proposed algorithm reduces operational cost and improves energy efficiency of data centers significantly.
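Per decision interval, the price-following idea above reduces to comparing the energy bill at each data center plus a one-off migration charge. A minimal sketch follows; the prices, load, and scalar migration cost are illustrative assumptions, and the paper's algorithm additionally models bandwidth limits, SLA constraints, consolidation, and administrative overhead.

```python
def best_migration_target(current_dc: str, price_per_kwh: dict,
                          migration_cost: float, load_kwh: float) -> str:
    """Pick the DC minimizing cost for the next interval, charging a
    one-off cost (e.g. backbone transfer energy) for moving workloads."""
    def interval_cost(dc: str) -> float:
        cost = price_per_kwh[dc] * load_kwh
        if dc != current_dc:
            cost += migration_cost   # live-migration overhead
        return cost
    return min(price_per_kwh, key=interval_cost)

prices = {"virginia": 0.062, "oregon": 0.048, "frankfurt": 0.071}  # $/kWh, made up
target = best_migration_target("virginia", prices, migration_cost=1.5, load_kwh=500)
```

When the migration charge exceeds the price spread, the workload stays put, which is why the spatio-temporal variation of ISO/RTO prices must be large enough to exploit.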
AUTO RESOURCE MANAGEMENT TO ENHANCE RELIABILITY AND ENERGY CONSUMPTION IN HET...IJCNCJournal
Classic information processing has been replaced by cloud computing in many studies, as cloud computing has become more popular and is growing faster than other computing models. Cloud computing provides on-demand services for users. Reliability and energy consumption are two pressing, mutually conflicting challenges in the cloud computing environment that require careful attention and research. This paper proposes an Auto Resource Management (ARM) scheme to enhance reliability by reducing Service Level Agreement (SLA) violations and to reduce the energy consumed by cloud computing servers. In this context, ARM consists of three components: a static/dynamic threshold, a virtual machine selection policy, and a short-prediction resource-utilization method. The Minimum Utilization Non-Negative (MUN) virtual machine selection policy and the Rate of Change (RoC) dynamic threshold are presented in this paper, along with a method for choosing the static threshold value. To improve ARM performance, the paper proposes Short Prediction Resource Utilization (SPRU), which improves decision making by taking into account resource utilization at both the future and the current time. The results show that SPRU enhanced the decision-making process for managing cloud computing resources and reduced energy consumption and SLA violations. The proposed scheme was tested with real workload data on the CloudSim simulator.
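Two of the ARM components lend themselves to short sketches. The MUN policy is interpreted here as "migrate the VM with the smallest non-negative utilization", and the RoC threshold as an upper utilization bound that tightens when load is rising fast; both formulas are interpretations of the summary, not the paper's exact definitions.

```python
def select_vm_mun(vm_utilization: dict):
    """Minimum Utilization Non-Negative selection: among VMs with a valid
    (non-negative) utilization reading, pick the least utilized to migrate."""
    valid = {vm: u for vm, u in vm_utilization.items() if u >= 0}
    return min(valid, key=valid.get) if valid else None

def roc_dynamic_threshold(history, base: float = 0.8, k: float = 0.5) -> float:
    """Rate-of-Change threshold: lower the overload threshold when host
    utilization grew quickly in the last step, triggering migration earlier."""
    if len(history) < 2:
        return base
    growth = max(0.0, history[-1] - history[-2])
    return max(0.0, base - k * growth)
```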
A Review on Scheduling in Cloud Computingijujournal
Cloud computing is a client-driven model that provides software, infrastructure, and platform as a service on a pay-per-use basis. The main goal of scheduling is to achieve accuracy and correctness in task completion, and scheduling in the cloud environment enables the various cloud services to support framework implementation. This survey comprehensively reviews the different types of scheduling algorithms in cloud computing environments, including workflow scheduling and grid scheduling, and gives an elaborate view of grid, cloud, and workflow scheduling aimed at minimizing energy cost while improving the efficiency and throughput of the system.
Closing Roundtable Discussion: From Commodity Providers to Digital Service Co...Jill Kirkpatrick
The rise of AI, digital twin simulation, voice interaction and other digital advancements is providing utilities with the tools to better serve a proactive, liquid customer.
The impact of these transformations goes beyond customer experience, deeply transforming utility business models, and signaling the path to the future of the utility-consumer relationship.
Intelligent Workload Management in Virtualized Cloud EnvironmentIJTET Journal
Abstract — Cloud computing is an emerging high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and an elastic computational architecture. To improve the overall performance of cloud computing under a deadline constraint, a task scheduling model is established that reduces system execution time and power consumption while improving the profit of service providers. For this scheduling model, a solving technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, mutation operators, and the scheme for sorting Pareto solutions. The model is implemented on the open-source cloud simulation platform CloudSim; compared to existing scheduling algorithms, the results show that the proposed algorithm obtains a better solution, balancing the load across multiple objectives.
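The Pareto-sorting step mentioned in the abstract can be illustrated independently of the GA itself: a schedule belongs to the Pareto front when no other schedule is at least as good on every objective and strictly better on one (objectives assumed to be minimized, e.g. makespan and energy):

```python
def dominates(a, b) -> bool:
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

An MO-GA such as the one described would rank its population with exactly this dominance relation before applying crossover and mutation.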
SE4SG 2013 : MODAM: A MODular Agent-Based Modelling Framework Jenny Liu
SE4SG 2013 Presentation by Fanny Boulaire at 2nd International Workshop on Software Engineering Challenges for the Smart Grid.
Please cite our workshop at
Ian Gorton, Yan Liu, Heiko Koziolek, Anne Koziolek, and Mazeiar Salehie. 2013. 2nd international workshop on software engineering challenges for the smart grid (SE4SG 2013). In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 1553-1554.
Techniques to Minimize State Transfer Cost for Dynamic Execution Offloading I...IJERA Editor
Recent advances in cloud computing are leading to an excessive growth of mobile devices that can become powerful means for information access and mobile applications, introducing a latent technology called mobile cloud computing. Smartphone devices support a wide range of mobile applications that require high computational power, memory, storage, and energy, but these resources are limited in number and so act as constraints on smartphone devices. By integrating cloud computing with mobile applications, it is possible to overcome these constraints by offloading the complex modules to the cloud. These restrictions may be alleviated by computation offloading: sending heavy computations to resourceful servers and receiving the results from these servers. Many issues related to offloading have been investigated in the past decade.
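The offloading decision sketched in this abstract is classically a comparison of local compute cost against the cost of shipping the state and waiting for the server. A minimal energy-and-latency test follows; all device parameters below are illustrative assumptions.

```python
def should_offload(local_time_s: float, cpu_power_w: float, input_bytes: int,
                   bandwidth_bps: float, tx_power_w: float,
                   remote_time_s: float) -> bool:
    """Offload when it saves both energy and time on the device
    (idle-wait power during remote execution ignored for brevity)."""
    local_energy_j = cpu_power_w * local_time_s
    transfer_time_s = input_bytes * 8 / bandwidth_bps   # state-transfer latency
    offload_energy_j = tx_power_w * transfer_time_s
    return (offload_energy_j < local_energy_j
            and transfer_time_s + remote_time_s < local_time_s)
```

The state-transfer term is exactly what the minimization techniques in the paper's title target: the smaller the transferred state, the more often offloading wins.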
Presentation from the EPRI-Sandia Symposium on Secure and Resilient Microgrids: Integrated Design and Financial Model, presented by Stephen Knapp, Power Analytics Corp, Baltimore, MD, August 29-31, 2016.
Quality of Service Control Mechanisms in Cloud Computing EnvironmentsSoodeh Farokhi
The growth in popularity of the Internet, along with the rapid development of processing and storage technologies, has brought a paradigm shift in the way computing resources are provisioned. The technological trend today is to offer computing resources as services, leased and exposed via the Internet in a pay-as-you-go and on-demand fashion, called cloud computing...
Cost-Aware Virtual Machine Placement across Distributed Data Centers using Bayesian Networks
1. Cost-Aware Virtual Machine Placement across Distributed Data Centers using Bayesian Networks
Dmytro Grygorenko*, Soodeh Farokhi*, and Ivona Brandic
Vienna University of Technology, Austria
(*contributed equally to the paper)
12th International Conference on Economics of Grids, Clouds, Systems, and Services
Cluj-Napoca, Romania
September 15 – 17, 2015
2. Introduction
• Fast-growing Cloud Computing industry: over 300% growth in the last 6 years
• 86% of companies use more than one type of Cloud Computing service
• 30 million Cloud servers geographically distributed all over the world
• Huge environmental impact of Cloud Computing (1-2% of the world's electricity usage)
• Sub-optimal energy usage plans, even though cost-efficient solutions exist
• Challenges:
– high Quality-of-Service (QoS) expectations of Cloud customers
– Cloud providers’ struggle with the cost vs. QoS trade-off
Fig.1: Windows Azure CDN Locations [1]
[1] https://www.simple-talk.com/cloud/development/an-introduction-to-windows-azure-%28part-2%29/
3. Agenda
• Motivation
• Contributions & Challenges
• Approach
• Evaluation
• Conclusion & Future Work
4. Motivation
• How to reduce the high operational cost of running a cloud infrastructure while minimizing SLA penalty costs?
Addressed problems so far:
– Virtual Machine (VM) placement
– Temperature-aware energy usage
– Performance of VM migration
BUT what is missing? Modeling a combination of these problems while tackling the interconnection and dependency challenges across geo-distributed DCs.
• How to evaluate the applicability of the proposed solutions?
Existing simulation frameworks (CloudSim, D-Cloud, PreFail, etc.) DO NOT ALLOW simulating the necessary aspects:
– Geo-distributed DCs
– Cooling systems
– Weather data
– Power outages
– SLAs
5. Contributions
• An approach to reduce the cloud operating cost by applying VM placement across geo-distributed DCs:
– Leverages cloud expert knowledge and models it in a Bayesian Network (BN)
– The outputs of the BN are utilized in the proposed VM allocation and consolidation algorithms
• A cloud simulation framework CloudNet [1] with the following features:
– Simulation of cloud infrastructure
– Utilization and generation of various application workloads
– Usage of geo-distributed DCs
– Management of cooling systems
– Usage of synthetic and real weather data
– Scheduling of power outages
– SLA-aware simulation
– Prediction of resource usage
[1] https://github.com/dmitrygrig/CloudNet/
6. Challenges of managing Cloud data centers
• Geo-distributed DCs
– Dynamic electricity market
– Various time zones
– Different weather conditions (e.g., temperature)
• Frequent power outages
• VM migration is dependent on dynamic factors such as VM RAM size, bandwidth, etc.
• Trade-off (multi-criteria decision problem): reduction of the DCs energy cost vs. customer satisfaction in terms of QoS
Fig. 1: Windows Azure CDN Locations [1]
[1] https://www.simple-talk.com/cloud/development/an-introduction-to-windows-azure-%28part-2%29/
Example: Microsoft Azure
• Operates in several regions around the world
• Electrical downtimes: from 6 min/year in Japan to 20 h/year in Brazil
• Energy prices differ by more than a factor of two between some locations
• Day/ night energy price rates
• Outdoor temperatures from -35 °C to +35 °C
7. VM Placement using Bayesian Network
VM Placement Phases:
• Phase 1: Designing the BN to represent expert domain knowledge on cloud infrastructure management
• Phase 2: Using GQM method to define the underlying measures for the chosen criteria based on the BN’s output
• Phase 3: Applying MCDA method to create the utility function as the final decision making indicator
Phase 1
modeling the expert
knowledge in a Bayesian
Network
Phase 2
using GQM method to
quantify the chosen
criteria
Phase 3
applying MCDA to
define utility function
for each decision
VM allocation
(when a new VM
request arrives)
VM migration
across distributed
DCs (periodically)
8. Phase 1: Designing Bayesian Network
Phase 1: designing the BN to represent expert domain knowledge on cloud infrastructure management
• A BN is used as the decision-making model
What are Bayesian Networks?
• graphical models to represent variables of interest (e.g., event occurrences) and probabilistic dependencies
among them
• they simulate the mechanism of exploring causal relations between key factors via Bayes' rule: P(A|B) = P(B|A) · P(A) / P(B)
Why Bayesian Networks?
• applying knowledge about domain to find hidden and causal relationships
• discovering relationships in raw data
• ability to prove the correctness of built models (thanks to their rigorous mathematical foundation)
Input: various observations of the cloud infrastructure
Output: probabilities of the decision criteria
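As a minimal illustration of the update mechanism above, the following sketch applies Bayes' rule P(A|B) = P(B|A) · P(A) / P(B) to one hypothetical cloud factor; all probabilities are invented for illustration and do not come from the paper.

```python
def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Posterior P(A|B) from likelihood P(B|A), prior P(A), and evidence P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical example: A = "power outage at the DC", B = "storm observed".
p_outage = 0.05               # prior P(A), invented
p_storm_given_outage = 0.60   # likelihood P(B|A), invented
p_storm = 0.10                # evidence P(B), invented

posterior = bayes(p_storm_given_outage, p_outage, p_storm)
print(posterior)  # 0.3 -> observing a storm raises the outage belief from 5% to 30%
```

This single-factor update is what a full BN performs jointly over all modeled variables.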
9. Designing BN for a simplified decision problem (1)
Problem: where should a new VM request be allocated?
Observations: Data centre location (Europe, Asia, etc.), time of day (day/night), season (winter, summer, etc.)
Hidden factors: Weather conditions
Criteria: Energy price, possibility of power outage
Decision Action: Allocate VM
Fig 2.: Designing Bayesian Network. Step 1: Consider energy price and dependent factors Table. 1: CPT of Probability (Energy Price | DC Location, Time of Day)
10. Designing BN for a simplified decision problem (2)
Fig 3.: Designing Bayesian Network. Step 2: Add power outage criterion Table 3: CPT of Probability (Power Outage| DC Location, Weather Conditions)
Table 2: CPT of Probability (Weather Conditions | DC Location, Season)
11. Designing BN for a simplified decision problem (3)
At the same time in South or North America…
Fig. 4: Querying of the BN for a DC in South America Fig. 5: Querying of the BN for a DC in North America
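The kind of query shown in Figs. 4 and 5 amounts to inference by enumeration over the hidden weather variable. A minimal sketch follows, with invented CPT entries (placeholders, not the actual Tables 2 and 3):

```python
# P(Weather | DC Location, Season) -- invented placeholder CPT entries.
cpt_weather = {
    ("south_america", "summer"): {"good": 0.7, "bad": 0.3},
    ("north_america", "summer"): {"good": 0.9, "bad": 0.1},
}
# P(Power Outage = high | DC Location, Weather) -- invented placeholders.
cpt_outage_high = {
    ("south_america", "good"): 0.10, ("south_america", "bad"): 0.60,
    ("north_america", "good"): 0.02, ("north_america", "bad"): 0.30,
}

def p_outage_high(location: str, season: str) -> float:
    """P(outage = high | location, season), marginalizing over hidden weather."""
    return sum(p_w * cpt_outage_high[(location, w)]
               for w, p_w in cpt_weather[(location, season)].items())

for loc in ("south_america", "north_america"):
    print(loc, round(p_outage_high(loc, "summer"), 3))
```

With these numbers the query favors the North American DC, mirroring how the two queries in the figures can rank locations differently at the same moment.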
12. Phase 2: using GQM
Phase 2: using GQM method
• definition of the underlying measures g_i(a) for the chosen criteria
• based on the BN’s output
Table 4: Criteria mapped to values in the [0,1] interval using GQM
Power outage: Low = 1, Middle = 0.3, High = 0.1
Energy price: Low = 1, Middle = 0.7, High = 0.5
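A sketch of how Phase 2 could turn BN outputs into scores using the Table 4 mapping; picking the most probable state of each criterion is our assumption about how the BN posterior is discretized, and the example posteriors are invented:

```python
# Table 4 values from the slide: each criterion state maps to a score in [0, 1].
GQM_SCORES = {
    "power_outage": {"low": 1.0, "middle": 0.3, "high": 0.1},
    "energy_price": {"low": 1.0, "middle": 0.7, "high": 0.5},
}

def g(criterion: str, posterior: dict) -> float:
    """Score g_i(a): take the criterion's most probable state, look up its score."""
    state = max(posterior, key=posterior.get)
    return GQM_SCORES[criterion][state]

# Example BN outputs for one candidate placement (probabilities are illustrative).
print(g("power_outage", {"low": 0.8, "middle": 0.15, "high": 0.05}))  # 1.0
print(g("energy_price", {"low": 0.2, "middle": 0.7, "high": 0.1}))    # 0.7
```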
13. Phase 3: applying MCDA
Phase 3: applying MCDA method to create the utility function as the final decision making indicator
• quantitative measurement of the benefit of a certain decision
• expressed as a utility function based on the set of criteria calculated previously:
U(a) = Σ_i w_i · g_i(a)
where g_i(a) is the score of criterion i and w_i is a utility weight representing the relative importance of each criterion
Decision: migrate the VM to the PM with the highest value of the utility function
Table 5: Weighted utilities of different actions according to MCDA (weights: Power Outage = 2, Energy Price = 1)
Region          Power Outage   Energy Price   Total utility
Europe          1              0.2            2.2
Asia            1              0.2            2.2
North America   1              1              3.0
South America   0.3            1              1.6
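The additive utility of Phase 3 can be reproduced directly from Table 5, using the weights implied by the column headers (Power Outage = 2, Energy Price = 1):

```python
# Criterion weights w_i implied by Table 5's column headers.
WEIGHTS = {"power_outage": 2.0, "energy_price": 1.0}

def utility(scores: dict) -> float:
    """U(a) = sum_i w_i * g_i(a): weighted sum of criterion scores for one action."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

# g_i(a) scores per candidate region, taken from Table 5.
candidates = {
    "europe":        {"power_outage": 1.0, "energy_price": 0.2},
    "asia":          {"power_outage": 1.0, "energy_price": 0.2},
    "north_america": {"power_outage": 1.0, "energy_price": 1.0},
    "south_america": {"power_outage": 0.3, "energy_price": 1.0},
}

best = max(candidates, key=lambda a: utility(candidates[a]))
print(best, utility(candidates[best]))  # north_america 3.0
```

The decision rule then picks the action with the highest utility, here the North American DC.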
14. Summary of the proposed approach
Problem: which PM should be used for allocation/migration of a VM?
Observations: Data center location, PM & VM resources utilization, temperature, cooling mode, electricity price, power outage statistics
Hidden factors: Dirty Page Rate (DPR), partial power usage effectiveness (pPUE), probability of VM downtime
Criteria: VM unavailability (𝑔1), PM power consumption (𝑔2), PM CPU utilization (𝑔3), VM migration duration (𝑔4), energy price (𝑔5)
Decision Actions: Allocate/Migrate VM, Switch On/Off PM
Fig. 6: A snapshot of the designed Bayesian Network
15. Evaluation input data
Used real data traces:
• temperature (http://forecast.io/)
• cooling modes (Mechanical, Air, Mixed)
• power outage statistics
• electricity prices
• PM power specifications (SPECpower benchmark)
Fig. 8a: Temperature data traces
Fig. 7: Power outage statistics [1]
http://earlywarn.blogspot.co.at/2013/05/international-power-outage-comparisons.html
Fig. 8b: Cooling modes Fig. 8c: Energy prices
16. Evaluation Setup
• Simulation period: 1 month (January 1, 2013 – February 1, 2013)
• Interval: 1 hour
• VM specs: 1000MIPS, 768MB RAM
• PM specs: 3000MIPS, 4GB RAM (HP ProLiant ML110 G3)
Table 6: Evaluation setup configuration
http://www.spec.org/power_ssj2008/results/res2011q1/power_ssj2008-20110127-00342.html
Fig. 9: PM power specifications [1]
17. Evaluation baseline algorithms
• Baseline algorithms:
– No Migration (NoM): First-Fit VM allocation without migration
– First-Fit-Decreasing:
• Agreed (FFD-A): provisions the resources agreed in the SLA
• Requested (FFD-R): provisions the resources requested by the VM at runtime
• The proposed approach (workload prediction policies):
– Last Workload policy (BN-LW): the next workload value equals the last one
– Trend Workload policy (BN-TW): values follow the last linear trend
– Linear Regression Workload (BN-LRW): linear regression applied to historical data
Table 7: CPU provisioning for different allocation strategies
Strategy   Provisioned   Utilized   Agreed
FFD-A      4 CPU         2 CPU      >= 4 CPU
FFD-R      2 CPU         2 CPU      >= 4 CPU
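The three workload prediction policies are only named on this slide; the functions below are plausible minimal implementations under our own assumptions, not the paper's exact formulas:

```python
def last_workload(history: list) -> float:
    """LW: the next value equals the last observed one."""
    return history[-1]

def trend_workload(history: list) -> float:
    """TW: extrapolate the most recent linear step (needs >= 2 samples)."""
    return history[-1] + (history[-1] - history[-2])

def linreg_workload(history: list) -> float:
    """LRW: least-squares line over the whole history, evaluated one step ahead."""
    n = len(history)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return mean_y + slope * (n - mean_x)

# Illustrative CPU-utilization samples, one per simulation interval.
cpu = [30.0, 34.0, 38.0, 42.0]
print(last_workload(cpu), trend_workload(cpu), linreg_workload(cpu))
```

On this perfectly linear series TW and LRW agree; on noisy traces LRW smooths the noise while TW reacts to the last step only, which matches the cost/SLA trade-off reported in the results.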
18. Evaluation Results
• Improvements of total costs:
• up to 69% in comparison to NoM
• up to 45% in comparison to FFD-R, which performs fewer migrations
• up to 18% in comparison to FFD-A, which performs more migrations
• Trend Workload policy: delivers the best results
• Linear Regression Workload policy: increases cost efficiency but causes more SLA violations
Fig. 9: Evaluation results
19. Conclusion & Future work
Summary
• a cost-aware VM placement approach, leveraging domain knowledge to reduce the energy cost
• up to 69% in comparison to NoM, the 1st baseline algorithm
• up to 45% in comparison to FFD, the 2nd baseline algorithm
• evaluated in CloudNet, a novel simulation framework with a rich set of cloud simulation capabilities
Ongoing work
• enhancement of VM placement by using more workload prediction techniques
• utilization of hybrid Bayesian Networks to handle analog (continuous) data
20. Thank you for your attention!
Dmytro Grygorenko: dmitrygrig@gmail.com
at.linkedin.com/in/dmitrygrig
Soodeh Farokhi: soodeh.farokhi@tuwien.ac.at
www.infosys.tuwien.ac.at/staff/sfarokhi
at.linkedin.com/in/soodehfa