Cloud computing's popularity is growing rapidly, and consequently the number of companies offering their services as Software-as-a-Service (SaaS) or Infrastructure-as-a-Service (IaaS) is increasing. The diversity and benefits of IaaS offerings are encouraging SaaS providers to lease resources from the cloud instead of operating their own data centers. However, the question remains how to, on the one hand, exploit cloud benefits to reduce maintenance overhead and, on the other hand, maximize the satisfaction of customers with a wide range of requirements. The complexity of addressing these issues prevents many SaaS providers from benefiting from cloud infrastructures. In this paper, we propose the HS4MC approach for automatic service selection based on the SLA claims of SaaS providers. The novelty of our approach lies in the use of prospect theory for service ranking, a natural choice for scoring comparable services according to user preferences. The HS4MC approach first constructs a set of SLAs based on the accumulated SaaS provider requirements. Then, it selects the set of services that best fulfills those SLAs. We evaluate our approach in a simulated environment by comparing it with a state-of-the-art utility-based algorithm. The evaluation results show that our approach selects services that satisfy the SLAs more effectively.
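As a rough illustration of the prospect-theoretic ranking idea (not the HS4MC algorithm itself), a Kahneman-Tversky value function can score each candidate service relative to an SLA reference point. All parameter values, service names, and availability figures below are illustrative assumptions:

```python
# Hedged sketch: scoring candidate services with a prospect-theory value
# function relative to a reference point (here, an SLA target).
# alpha, beta, lam and the 99.0% availability SLA are illustrative only.

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: gains are valued concavely,
    losses are weighted more heavily (loss aversion, lam > 1)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def score_service(offered, sla_target, higher_is_better=True):
    """Score one QoS attribute against the SLA reference point."""
    gain = (offered - sla_target) if higher_is_better else (sla_target - offered)
    return prospect_value(gain)

# Two hypothetical services against a 99.0% availability SLA:
scores = {name: score_service(avail, 99.0)
          for name, avail in {"svc_a": 99.5, "svc_b": 98.5}.items()}
best = max(scores, key=scores.get)
```

Note how loss aversion makes a 0.5-point SLA shortfall hurt more than a 0.5-point surplus helps, which is the intuition behind using prospect theory rather than a symmetric utility function for ranking.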
Quality of Service Control Mechanisms in Cloud Computing Environments - Soodeh Farokhi
The growth in popularity of the Internet, along with the rapid development of processing and storage technologies, has brought a paradigm shift in the way computing resources are provisioned. The technological trend today is to offer computing resources as services, leased and exposed via the Internet in a pay-as-you-go and on-demand fashion, called cloud computing...
Cloud infrastructure providers are trying to reduce their operating costs while offering their services with higher quality; something they strive for to stand out among other providers. However, this is becoming challenging, as providing such services requires operating large-scale and geographically distributed data centers. On the other hand, customers' main purpose in using clouds is to achieve a high quality of service (QoS) while reducing their overall costs. Given the variety of offered services in terms of quality and cost, customers are encouraged to simultaneously use services from multiple cloud providers, known as multi-cloud. However, utilizing multi-cloud brings a new set of open challenges, such as selecting and composing the most appropriate services. Furthermore, despite customers' critical need for predictable service performance, cloud providers in general do not yet offer any performance guarantees. This gap is due to the complexity of practically addressing this issue in a cost-effective way. The complexity mainly stems from the dynamic nature of the cloud, unpredictable workloads, and the non-linearity of mapping performance measurements into required cloud resources. Hence, controlling the trade-off between QoS and cost is a challenging goal for both cloud infrastructure providers and customers.
This thesis investigates models, algorithms, and mechanisms to tackle this trade-off from both perspectives. More specifically, within the scope of this thesis, we first take the cloud provider viewpoint by proposing an approach for virtual machine placement across geographically distributed infrastructures. In this approach, a Bayesian network model is used to address decision making under uncertainty. Then, we address the trade-off between QoS and cost from the cloud customer's point of view by facilitating the use of the multi-cloud paradigm. We propose a service selection approach that uses prospect theory to rank comparable service offerings. Furthermore, to guarantee the performance objectives of customers, we propose autonomic resource provisioning techniques. To this end, control theory is used to design resource provisioning controllers, and fuzzy control is utilized to coordinate multiple controllers toward meeting the service performance objectives in a cost-effective manner. Finally, the evaluations of these contributions...
Coordinating CPU and Memory Elasticity Controllers to Meet Service Response T... - Soodeh Farokhi
Vertical elasticity is recognized as a key enabler for efficient resource utilization of cloud infrastructure through fine-grained resource provisioning, e.g., allowing CPU cycles to be leased for as little as a few seconds. However, little research has been done to support vertical elasticity, and existing work focuses mostly on a single resource, either CPU or memory, while an application may need arbitrary combinations of these resources at different stages of its execution. Moreover, the existing single-resource techniques cannot be used as-is without proper orchestration, since they may lead to either under- or over-provisioning of resources and consequently result in undesirable behaviors such as performance disparity. The contribution of this paper is the design of an autonomic resource controller using a fuzzy control approach as a coordination technique. The novel controller dynamically adjusts the right amount of CPU and memory required to meet the performance objective of an application, namely its response time. We perform a thorough experimental evaluation using three different interactive benchmark applications, RUBiS, RUBBoS, and Olio, under workload traces generated based on open and closed system models. The results show that coordinating the memory and CPU elasticity controllers with the proposed fuzzy control provisions the right amount of resources to meet the response-time target without over-committing either resource type. In contrast, without coordination between the controllers, the behaviour of the system is unpredictable: the performance objective may be met but at the expense of over-provisioning one of the resources, or the application may crash due to severe resource shortage caused by conflicting decisions.
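The coordination idea can be sketched with a toy fuzzy controller: fuzzify the response-time error, fire a few rules, and defuzzify into a common scaling factor for both resource controllers. The membership functions, rule base, and output constants below are illustrative assumptions, not the paper's actual design:

```python
# Toy sketch of fuzzy coordination for vertical elasticity controllers.
# Memberships, rules, and outputs are invented for illustration.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def coordinate(rt_error):
    """rt_error = (measured - target) / target response time.
    Returns one scaling factor applied to both controllers' outputs,
    via weighted-average (Sugeno-style) defuzzification."""
    rules = [
        (tri(rt_error, -1.0, -0.5, 0.0), 0.8),   # under target: scale in gently
        (tri(rt_error, -0.5,  0.0, 0.5), 1.0),   # on target: keep allocations
        (tri(rt_error,  0.0,  0.5, 1.5), 1.5),   # over target: scale out
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 1.0
```

Because both resource controllers consume the same defuzzified factor, their decisions cannot pull in opposite directions, which is the conflict the paper attributes to uncoordinated controllers.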
Self-adaptation Challenges for Cloud-based Applications (Feedback Computing 2...) - Soodeh Farokhi
Abstract: Software applications accessible over the web are typically subject to very dynamic workloads. Since cloud elasticity provides the ability to adjust the deployed environment on the fly, modern software systems tend to target the cloud as a fertile deployment environment. However, relying only on the native elasticity features of cloud service providers is not sufficient for modern applications, because current features rely on users' knowledge to configure the performance parameters of the elasticity mechanism, and in general users cannot optimally determine such sensitive parameters. To overcome this user dependency, approaches from autonomic computing have been shown to be appropriate. Control theory offers a systematic way to design feedback control loops that handle unpredictable runtime changes in software applications. Although there are still substantial challenges in effectively utilizing feedback control for the self-adaptation of software systems, the software engineering and control theory communities have made recent progress in consolidating their differences by identifying challenges that can be addressed cooperatively. This paper is in the same vein, but in a narrower domain, given that cloud computing is a sub-domain of software engineering. It aims to highlight the challenges in the self-adaptation process of cloud-based applications from the perspective of control engineers.
Cost-Aware Virtual Machine Placement across Distributed Data Centers using Ba... - Soodeh Farokhi
In recent years, cloud computing providers have been working to provide highly available and scalable cloud services to stay alive in the competitive market of cloud services. The difficulty is that providing such high-quality services requires enlarging data centers (DCs) and, consequently, increasing operating costs. Hence, leveraging cost-aware solutions to manage resources is necessary for cloud providers to decrease total energy consumption while keeping their customers satisfied with high-quality services. In this paper, we consider cost-aware virtual machine (VM) placement across geographically distributed DCs as a multi-criteria decision making problem and propose a novel approach to solve it by utilizing Bayesian networks and two algorithms for VM allocation and consolidation. The novelty of our work lies in building the Bayesian network from extracted expert knowledge and the probabilistic dependencies among parameters to make decisions regarding cost-aware VM placement across distributed DCs, which can face power outages. Moreover, to evaluate the proposed approach, we design a novel simulation framework that provides the features required for simulating distributed DCs. The performance evaluation results reveal that using the proposed approach can reduce operating costs by up to 45% in comparison with the First-Fit-Decreasing heuristic as a baseline algorithm.
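For context, the First-Fit-Decreasing baseline that the evaluation compares against can be sketched as follows; the host capacity and VM demands are made-up example values:

```python
# Sketch of the First-Fit-Decreasing (FFD) bin-packing heuristic used as
# the baseline: VMs are sorted by demand (largest first), and each is
# placed on the first host with enough remaining capacity.

def first_fit_decreasing(vm_demands, host_capacity):
    """Pack VM demands onto identical hosts; returns each host's VM list."""
    hosts = []  # each entry: [remaining_capacity, [vm_demand, ...]]
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if host[0] >= demand:        # first host that fits
                host[0] -= demand
                host[1].append(demand)
                break
        else:                            # no existing host fits: open a new one
            hosts.append([host_capacity - demand, [demand]])
    return [h[1] for h in hosts]

# Six VMs with illustrative CPU demands, hosts with capacity 8:
placement = first_fit_decreasing([3, 5, 2, 7, 4, 1], host_capacity=8)
```

FFD is a natural baseline because it packs greedily on a single criterion; the paper's Bayesian-network approach instead weighs multiple criteria (cost, outages, dependencies) per placement decision.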
Migration Control in Cloud Computing to Reduce the SLA Violation - rahulmonikasharma
The demand for cloud-based services is growing because of the enormous benefits of the cloud, such as pay-as-you-use flexibility, scalability, and low upfront cost. Due to the growing number of cloud consumers, the load on data centers is also increasing day by day. Various load distribution and dynamic load balancing approaches are followed in data centers to optimize resource utilization so that performance can be maintained under increased load. Virtual machine (VM) migration is primarily used to implement dynamic load balancing in data centers. However, poorly designed dynamic VM migration policies may negate its benefits: VM migration overheads result in violations of the service level agreement (SLA) in the cloud environment. In this paper, an extended VM migration control model is proposed to minimize SLA violations while controlling the energy consumption of the data center during VM migration. An execution-boundary threshold parameter is used to extend an existing VM migration control model. The proposed model is tested through extensive simulations using the CloudSim toolkit by executing real-world workloads. Results are obtained in terms of the number of SLA violations while controlling the energy consumption of the data center, and they show that the proposed model achieves better performance than the existing model.
Power consumption in cloud centers is increasing rapidly due to the popularity of cloud computing. High power consumption not only leads to high operational cost, it also leads to high carbon emissions, which is not environmentally friendly. Cloud centers housing thousands of physical machines/servers are becoming commonplace. In many instances, some physical machines have very few active virtual machines; migrating these virtual machines so that lightly loaded physical machines can be shut down, which in turn reduces consumed power, has been extensively studied in the literature. However, recent studies have demonstrated that migration of virtual machines is usually associated with excessive cost and delay. Hence, a new technique was recently proposed that balances load in cloud centers by migrating the extra tasks of overloaded virtual machines. This task migration technique has not been properly studied in the literature for its effectiveness w.r.t. server consolidation. In this work, the virtual machine task migration technique is extended to address the server consolidation issue. Empirical results reveal the excellent effectiveness of the proposed technique in reducing the power consumed in cloud centers.
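The consolidation reasoning can be illustrated with a simple feasibility check: a lightly loaded server may be shut down only if its work fits into the spare capacity of the servers that remain on. This function and its example figures are a hypothetical sketch, not the paper's algorithm:

```python
# Hypothetical sketch of a consolidation feasibility check: starting from
# the lightest-loaded server, a server can be emptied (and shut down) if
# its load fits into the spare capacity of the servers that stay on.
# Capacities and loads are made-up example values.

def consolidatable(loads, capacity):
    """Return indices of servers (lightest first) that can be emptied by
    migrating their work into the other servers' spare capacity."""
    spare = sum(capacity - l for l in loads)   # total spare across all servers
    emptied = []
    for i in sorted(range(len(loads)), key=lambda i: loads[i]):
        # If server i shuts down, its own spare leaves the pool first.
        remaining_spare = spare - (capacity - loads[i])
        if loads[i] <= remaining_spare:
            emptied.append(i)
            spare = remaining_spare - loads[i]  # its load now consumes spare
    return emptied

# Servers with capacity 10 and loads [1, 2, 9, 8]: the two light servers
# can be emptied; a uniformly busy cluster cannot be consolidated at all.
```

A real policy would additionally weigh migration cost and delay, which is exactly the trade-off the abstract says motivates task migration over whole-VM migration.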
D. Meiländer, S. Gorlatch, C. Cappiello, V. Mazza, R. Kazhamiakin, and A. Bucchiarone: Using a Lifecycle Model for Adaptable Interactive Distributed Applications - ServiceWave 2010
EVALUATION OF TWO-LEVEL GLOBAL LOAD BALANCING FRAMEWORK IN CL... - ijcsit
With technological advancements and the constant changes of the Internet, cloud computing has become today's trend. With the lower cost and convenience of cloud computing services, users have increasingly put their Web resources and information in the cloud environment. The availability and reliability of the client systems will become increasingly important: today, even the slightest interruption of a cloud application has a significant impact on users. How to ensure the reliability and stability of cloud sites is therefore an important issue, and load balancing would be one good solution. This paper presents a framework for global server load balancing of Web sites in a cloud with a two-level load balancing model. The proposed framework is intended to adapt an open-source load-balancing system, and it allows the network service provider to deploy load balancers in different data centers dynamically when customers need more load balancers for increased availability.
Cloud Computing Load Balancing Algorithms Comparison Based Survey - INFOGAIN PUBLICATION
Cloud computing is an online, network-based computing paradigm that has increased the use of networks, where the capacity of one node can be used by another node. The cloud provides on-demand services over distributed resources such as data, servers, software, and infrastructure on a pay-as-you-go basis. Load balancing is one of the vexing problems in a distributed environment: the service provider's resources have to balance the load of client requests. Different load balancing algorithms have been proposed to manage the service provider's resources efficiently and effectively. This paper presents a comparison of the various policies used for load balancing.
Load Balancing in Cloud Computing Environment: A Comparative Study of Service... - Eswar Publications
Load balancing is a computer networking method for distributing workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, in order to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by dedicated software or hardware, such as a multilayer switch or a Domain Name System server. In this paper, the existing static algorithms used for simple cloud load balancing are identified, and a hybrid algorithm for future development is suggested.
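A minimal example of one such static algorithm, round-robin dispatch, might look like this; the server names are illustrative:

```python
# Round-robin: the simplest static load-balancing algorithm. Requests are
# assigned to servers in fixed rotation, ignoring current server load.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)  # endless rotation over the pool

    def next_server(self):
        """Each call returns the next server in fixed rotation."""
        return next(self._servers)

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
assignments = [lb.next_server() for _ in range(6)]
```

Being static, it needs no runtime load information, which is exactly why hybrid schemes (as suggested in the paper) combine it with dynamic feedback.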
SYBL: An extensible language for elasticity specifications in cloud applicati... - Georgiana Copil
Presentation given at CCGRID, May 2013
Abstract: Elasticity in cloud computing is a complex problem, regarding not only resource elasticity but also quality and cost elasticity, and most importantly, the relations among the three. Therefore, existing support for controlling elasticity in complex applications, focusing solely on resource scaling, is not adequate. In this paper we present SYBL - a novel language for controlling elasticity in cloud applications - and its runtime system. SYBL allows specifying in detail elasticity monitoring, constraints, and strategies at different levels of cloud applications, including the whole application, application component, and within application component code. Based on simple SYBL elasticity directives, our runtime system will perform complex elasticity controls for the client, by leveraging underlying cloud monitoring and resource management APIs. We also present a prototype implementation and experiments illustrating how SYBL can be used in real-world scenarios.
Cost-aware scalability of applications in public clouds - Daniel Moldovan
Presentation given in International Conference on Cloud Engineering (IC2E), IEEE, Berlin, Germany, 4-8 April, 2016.
Paper accessible on my website http://www.infosys.tuwien.ac.at/staff/dmoldovan/
Scalable applications deployed in public clouds can be built from a combination of custom software components and public cloud services. To meet performance and/or cost requirements, such applications can scale-out/in their components during run-time. When higher performance is required, new component instances can be deployed on newly allocated cloud services (e.g., virtual machines). When the instances are no longer needed, their services can be deallocated to decrease cost. However, public cloud services are usually billed over predefined time and/or usage intervals, e.g., per hour, per GB of I/O. Thus, it might not be cost efficient to scale-in public cloud applications at any moment in time, without considering their billing cycles.
In this work we aid developers of scalable applications for public clouds to monitor their costs, and develop cost-aware scalability controllers. We introduce a model for capturing the pricing schemes of cloud services. Based on the model we determine and evaluate the application's costs depending on its used cloud services and their billing cycles. We further evaluate cost efficiency of cloud applications, analyzing which application component is cost efficient to deallocate and when. We integrate our approach in a platform for cost-aware scalability of applications running in public clouds. We evaluate our approach on a scalable platform for IoT, deployed in Flexiant, one of the leading European public cloud providers. We show that cost-aware scalability can achieve higher application stability and performance, while reducing its operation costs.
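The billing-cycle argument can be made concrete with a small sketch: for a per-hour-billed VM, deallocating mid-cycle forfeits time already paid for, so scale-in is only cost-efficient near a cycle boundary. The cycle length and grace window below are illustrative assumptions, not values from the paper:

```python
# Sketch of a billing-cycle-aware scale-in check. Deallocating a VM that is
# billed per hour wastes whatever remains of the already-paid hour, so we
# only release an instance close to the end of its current cycle.

def cost_efficient_to_deallocate(minutes_running, billing_cycle_min=60,
                                 grace_min=5):
    """True if the instance is within `grace_min` minutes of completing
    its current paid billing cycle (little paid time would be wasted)."""
    remaining = (-minutes_running) % billing_cycle_min
    return remaining <= grace_min

# 55 min into an hourly cycle: only 5 paid minutes left, fine to release.
# 20 min in: releasing now wastes 40 paid minutes, better to keep serving.
```

A cost-aware scalability controller would consult a check like this before acting on a scale-in signal, instead of deallocating at the first moment capacity is unused.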
Could the “C” in HPC stand for Cloud?
This paper examines aspects of computing important in HPC (compute and network bandwidth, compute and network latency, memory size and bandwidth, I/O, and so on) and how they are affected by various virtualization technologies. For more information on IBM Systems, visit http://ibm.co/RKEeMO.
On Analyzing Elasticity Relationships of Cloud Services - Daniel Moldovan
Presentation given in 6'th International Conference on Cloud Computing, CloudCom, IEEE, Singapore, 15-18 December, 2014
With increasing cloud popularity, substantial effort has been devoted to the development of emerging elastic cloud services, consisting of different units distributed among virtual machines/containers in different clouds. Due to the software stack and deployment complexity in single- and multi-cloud scenarios, developing and managing such services is impeded by a lack of tools and techniques for understanding the elasticity relationships among individual service units, which influence the service's overall elasticity. In this paper we characterize these elasticity relationships and develop mechanisms for analyzing them, based on service monitoring information and elasticity requirements. From collected monitoring information we abstract the elasticity behavior of the whole cloud service and of its individual units, over which we design a customizable algorithm for relationship analysis. We illustrate our approach via several experiments with an elastic data service for M2M platforms, highlighting the importance of determining elasticity relationships for the development and operation of elastic services.
Proactive Scheduling in Cloud Computing - journalBEEI
Autonomic fault-aware scheduling is an important feature for cloud computing, related to adapting to workload variation. In this context, this paper proposes a fault-aware, pattern-matching autonomic scheduling approach for cloud computing based on autonomic computing concepts. To validate the proposed solution, we performed two experiments: one with the traditional approach and the other with the pattern-recognition fault-aware approach. The results show the effectiveness of the scheme.
LIVE VIRTUAL MACHINE MIGRATION USING SHADOW PAGING IN CLOUD COMPUTING - csandit
Cloud computing shares computing resources to execute applications. Cloud systems provide high-specification resources in the form of services, offering convenience and greater ease for personal-computer users; however, expansion of cloud-system services necessitates a corresponding enhancement of the technology used for server-resource management. In this paper, by monitoring the resources of a cloud server, we sought to identify the causes of server overload and degradation, followed by running a dynamic page-migration mechanism. Accordingly, we designed the proposed migration architecture to minimize user inconvenience.
Multi-level Elasticity Control of Cloud Services -- ICSOC 2013 - Georgiana Copil
Presentation given at ICSOC 2013
Abstract: Fine-grained elasticity control of cloud services has to deal with multiple elasticity perspectives (quality, cost, and resources). We propose a cloud service elasticity control mechanism that considers the service structure in order to control cloud service elasticity at multiple levels: we first define an abstract composition model for cloud services that enables multi-level elasticity control, and second, we define mechanisms for resolving conflicting elasticity requirements and generating action plans for elasticity control. Using the defined concepts and mechanisms, we develop a runtime system supporting multiple levels of elasticity control and validate the resulting prototype through experiments.
Autonomic SLA-driven Provisioning for Cloud Applications - nbonvin
Significant achievements have been made in the automated allocation of cloud resources. However, the performance of applications may be poor in peak load periods unless their cloud resources are dynamically adjusted. Moreover, although cloud resources dedicated to different applications are virtually isolated, performance fluctuations do occur because of resource sharing and software or hardware failures (e.g., unstable virtual machines, power outages, etc.). We propose a decentralized economic approach for dynamically adapting the cloud resources of various applications, so as to statistically meet their SLA performance and availability goals in the presence of varying loads or failures. In our approach, the dynamic economic fitness of a Web service determines whether it is replicated, migrated to another server, or deleted. The economic fitness of a Web service depends on its individual performance constraints, its load, and the utilization of the resources where it resides. Cascading performance objectives are dynamically calculated for individual tasks in the application workflow according to the user requirements.
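A toy version of such an economic fitness decision could look like the following; the fitness ratio and both thresholds are hypothetical illustrations, not the paper's actual model:

```python
# Hypothetical sketch of an economic fitness rule: a service replica whose
# utility (value of meeting its SLA goals under its load) clearly exceeds
# the cost of the resources it occupies is worth replicating; one whose
# utility falls well below its cost is deleted. Thresholds are invented.

def decide(utility, resource_cost, replicate_at=1.5, delete_at=0.5):
    """Return 'replicate', 'keep', or 'delete' based on the fitness ratio."""
    fitness = utility / resource_cost
    if fitness >= replicate_at:
        return "replicate"   # earns well: add a replica on another server
    if fitness <= delete_at:
        return "delete"      # costs more than it earns: remove this replica
    return "keep"
```

Run decentrally on each server, a rule of this shape lets replica counts track load and failures without any central allocator, which is the core of the decentralized economic approach.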
SLAs and Performance in the Cloud: Because There is More Than "Just" Availabi...Michael Kopp
An SLA is only useful if it guarantees a certain level of quality. Current cloud SLAs cover availability but ignore a key ingredient: response-time and throughput performance. A performance SLA would need to relate to the application's performance itself, something that no cloud provider has control over. We will discuss how Application Performance Monitoring can be used to define, measure, and enforce a usable SLA for both sides. We will talk about the differences between IaaS and PaaS cloud providers concerning such an SLA. We will also show how this will lead to better user experience with less R&D effort. Finally, it enables us to easily compare cloud performance across vendors in terms that really matter: response time per cost.
Power consumption in cloud centers is increasing rapidly due to the popularity of cloud computing. High power consumption not only leads to high operational cost, but also to high carbon emissions, which is not environmentally friendly. Cloud centers comprising thousands of physical machines/servers are becoming commonplace. In many instances, some physical machines host very few active virtual machines; migrating these virtual machines so that lightly loaded physical machines can be shut down, which in turn reduces consumed power, has been extensively studied in the literature. However, recent studies have demonstrated that migration of virtual machines is usually associated with excessive cost and delay. Hence, a new technique was recently proposed that balances load in cloud centers by migrating the extra tasks of overloaded virtual machines. The effectiveness of this task-migration technique with respect to server consolidation has not been properly studied in the literature. In this work, the virtual machine task migration technique is extended to address the server consolidation issue. Empirical results reveal the excellent effectiveness of the proposed technique in reducing the power consumed in cloud centers.
D. Meiländer, S. Gorlatch, C. Cappiello, V. Mazza, R. Kazhamiakin, and A. Bucchiarone: Using a Lifecycle Model for Adaptable Interactive Distributed Applications. ServiceWave 2010
EVALUATION OF TWO-LEVEL GLOBAL LOAD BALANCING FRAMEWORK IN CL...ijcsit
With technological advancements and constant changes of the Internet, cloud computing has become today's trend. With the lower cost and convenience of cloud computing services, users have increasingly put their Web resources and information in the cloud environment. The availability and reliability of the client systems will become increasingly important. Today, even the slightest interruption of a cloud application has a significant impact on users. How to ensure the reliability and stability of cloud sites is an important issue, and load balancing would be one good solution. This paper presents a framework for global server load balancing of Web sites in a cloud with a two-level load balancing model. The proposed framework is intended for adapting an open-source load-balancing system, and it allows the network service provider to deploy load balancers in different data centers dynamically when customers need more load balancers to increase availability.
Cloud Computing Load Balancing Algorithms Comparison Based SurveyINFOGAIN PUBLICATION
Cloud computing is an Internet-based computing paradigm that has increased the use of networks, where the capacity of one node can be used by another node. The cloud provides on-demand services over distributed resources, such as databases, servers, software, and infrastructure, on a pay-as-you-go basis. Load balancing is one of the vexing problems in distributed environments: the service provider's resources have to balance the load of client requests. Different load balancing algorithms have been proposed in order to manage the service provider's resources efficiently and effectively. This paper presents a comparison of various policies used for load balancing.
Load Balancing in Cloud Computing Environment: A Comparative Study of Service...Eswar Publications
Load balancing is a computer networking method to distribute workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by dedicated software or hardware, such as a multilayer switch or a Domain Name System server. In this paper, the existing static algorithms used for simple cloud load balancing have been identified and a hybrid algorithm for future development is also suggested.
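Two of the static policies this kind of survey typically compares can be sketched in a few lines. The server names and weights below are made up for illustration.

```python
import itertools

servers = ["s1", "s2", "s3"]

# Round robin: hand requests to servers in a fixed cycle, ignoring load.
rr = itertools.cycle(servers)
round_robin = [next(rr) for _ in range(5)]   # s1, s2, s3, s1, s2

# Weighted round robin: repeat each server in proportion to its capacity,
# so bigger machines receive proportionally more requests.
weights = {"s1": 3, "s2": 1, "s3": 2}
pool = [s for s, w in weights.items() for _ in range(w)]
wrr = itertools.cycle(pool)
weighted = [next(wrr) for _ in range(6)]     # s1, s1, s1, s2, s3, s3
```

Both are static in the survey's sense: the assignment sequence is fixed in advance and never consults the servers' current load.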
SYBL: An extensible language for elasticity specifications in cloud applicati...Georgiana Copil
Presentation given at CCGRID, May 2013
Abstract: Elasticity in cloud computing is a complex problem, regarding not only resource elasticity but also quality and cost elasticity, and most importantly, the relations among the three. Therefore, existing support for controlling elasticity in complex applications, focusing solely on resource scaling, is not adequate. In this paper we present SYBL - a novel language for controlling elasticity in cloud applications - and its runtime system. SYBL allows specifying in detail elasticity monitoring, constraints, and strategies at different levels of cloud applications, including the whole application, application component, and within application component code. Based on simple SYBL elasticity directives, our runtime system will perform complex elasticity controls for the client, by leveraging underlying cloud monitoring and resource management APIs. We also present a prototype implementation and experiments illustrating how SYBL can be used in real-world scenarios.
Cost-aware scalability of applications in public clouds Daniel Moldovan
Presentation given in International Conference on Cloud Engineering (IC2E), IEEE, Berlin, Germany, 4-8 April, 2016.
Paper accessible on my website http://www.infosys.tuwien.ac.at/staff/dmoldovan/
Scalable applications deployed in public clouds can be built from a combination of custom software components and public cloud services. To meet performance and/or cost requirements, such applications can scale-out/in their components during run-time. When higher performance is required, new component instances can be deployed on newly allocated cloud services (e.g., virtual machines). When the instances are no longer needed, their services can be deallocated to decrease cost. However, public cloud services are usually billed over predefined time and/or usage intervals, e.g., per hour, per GB of I/O. Thus, it might not be cost efficient to scale-in public cloud applications at any moment in time, without considering their billing cycles.
In this work we aid developers of scalable applications for public clouds to monitor their costs, and develop cost-aware scalability controllers. We introduce a model for capturing the pricing schemes of cloud services. Based on the model we determine and evaluate the application's costs depending on its used cloud services and their billing cycles. We further evaluate cost efficiency of cloud applications, analyzing which application component is cost efficient to deallocate and when. We integrate our approach in a platform for cost-aware scalability of applications running in public clouds. We evaluate our approach on a scalable platform for IoT, deployed in Flexiant, one of the leading European public cloud providers. We show that cost-aware scalability can achieve higher application stability and performance, while reducing its operation costs.
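The billing-cycle argument above can be made concrete with a small sketch: a VM billed per hour has already been paid for the full hour, so scaling in right after a cycle starts wastes paid-for capacity. The cycle length, slack window, and function names are illustrative assumptions, not the paper's model.

```python
def paid_time_wasted(uptime_s: float, cycle_s: float = 3600) -> float:
    """Seconds of the current, already-billed cycle left unused if we
    deallocate the service right now."""
    return (cycle_s - uptime_s % cycle_s) % cycle_s

def efficient_to_scale_in(uptime_s: float, cycle_s: float = 3600,
                          slack_s: float = 300) -> bool:
    """Only scale in within `slack_s` seconds of the end of a billing
    cycle, so that little paid-for time is thrown away."""
    return paid_time_wasted(uptime_s, cycle_s) <= slack_s

efficient_to_scale_in(3500)  # True: only 100 s of the paid hour is wasted
efficient_to_scale_in(600)   # False: 3000 s of paid capacity would be lost
```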
Could the “C” in HPC stand for Cloud?
This paper examines aspects of computing important in HPC (compute and network bandwidth, compute and network latency, memory size and bandwidth, I/O, and so on) and how they are affected by various virtualization technologies. For more information on IBM Systems, visit http://ibm.co/RKEeMO.
On Analyzing Elasticity Relationships of Cloud ServicesDaniel Moldovan
Presentation given in 6'th International Conference on Cloud Computing, CloudCom, IEEE, Singapore, 15-18 December, 2014
With the increasing cloud popularity, substantial effort has been paid for the development of emerging elastic cloud services, consisting of different units distributed among virtual machines/containers in different clouds. Due to the software stack and deployment complexity in single and multi-cloud scenarios, developing and managing such services is impeded by a lack of tools and techniques for understanding the elasticity relationships among individual service units, which influence the service's overall elasticity. In this paper we characterize the elasticity relationships, and develop mechanisms for analyzing them, based on service monitoring information and elasticity requirements. From collected monitoring information we abstract the elasticity behavior of the whole cloud service and individual units, over which we design a customizable algorithm for relationships analysis. We illustrate our approach via several experiments with an elastic data service for M2M platforms, highlighting the importance of determining elasticity relationships for the development and operation of elastic services.
Proactive Scheduling in Cloud ComputingjournalBEEI
Autonomic fault-aware scheduling is quite an important feature for cloud computing and is related to adapting to workload variation. In this context, this paper proposes a fault-aware, pattern-matching autonomic scheduling approach for cloud computing based on autonomic computing concepts. In order to validate the proposed solution, we performed two experiments, one with the traditional approach and the other with the pattern-recognition fault-aware approach. The results show the effectiveness of the scheme.
LIVE VIRTUAL MACHINE MIGRATION USING SHADOW PAGING IN CLOUD COMPUTINGcsandit
Cloud computing shares computing resources to execute applications. Cloud systems provide high-specification resources in the form of services, offering greater convenience and ease for personal-computer users; however, expansion of cloud-system services necessitates a corresponding enhancement of the technology used for server-resource management. In this paper, by monitoring the resources of a cloud server, we sought to identify the causes of server overload and degradation, followed by the running of a dynamic page-migration mechanism. Accordingly, we designed the proposed migration architecture to minimize user inconvenience.
Application SLA - the missing part of complete SLA managementComarch
How can operators ensure that the proper quality of so many complex services is delivered? Does the software for network and service monitoring have enough functionality to provide the right information? Fortunately, OSS systems have evolved, and they currently contain functionalities allowing the operator to build comprehensive service management platforms. Today, operators cannot even think about delivering modern high-quality services without providing an SLA. This means that service assurance with SLAs becomes the most critical aspect of modern OSS solutions. Additionally, since most modern services are built from a number of applications that deliver the services over the network, the applications are becoming the core of the service models.
reliability based design optimization for cloud migrationNishmitha B
Reliability-based design optimization for cloud migration is an application designed to manage applications, more precisely legacy applications, whose extraction and management are crucial and troublesome.
Innovation with Open Source: The New South Wales Judicial Commission experienceLinuxmalaysia Malaysia
Innovation with Open Source: The New South Wales Judicial Commission experience. MyGOSSCON 2008. Mr. Murali Sagi, Director, Information Management & Corporate Services, Judicial Commission of NSW, Sydney, Australia
Massimiliano Raks, Naples University on SPECS: Secure provisioning of cloud s...SLA-Ready Network
The cloud is both a risk and an opportunity, depending on the service. Despite the opportunities, security is a top concern for a growing number of cloud service customers, and rightfully so. A key challenge is how to represent security and measure it in a service level agreement. How can a cloud service provider guarantee the security level? And how can a cloud service customer automatically enforce it?
Prof. Massimiliano Raks, University of Naples, talks us through Security Service Level Agreements (SecSLAs), looking at Security SLA negotiation, (automatic) Security SLA enforcement, and continuous Security SLA monitoring with the SPECS platform for SecSLAs.
5 Cloud Migration Experiences Not to Be RepeatedHostway|HOSTING
As a project manager at HOSTING, Kellen Amobi has assisted in many customer data migrations over the years. Kellen shares the top five migration mistakes that companies have made in the past and what experience has taught her about resolving the issues quickly, including:
-Developing realistic project scopes
-Managing timelines
-Avoiding security risks
Forecast 2014 Keynote: State of Cloud Migration…What's Occurring Now, and Wha...Open Data Center Alliance
While public cloud computing continues to mature as a technology, those in charge of public cloud solutions within enterprises adopt the technology at their own pace, and for their own reasons. Indeed, they are all on separate journeys, but with many of the same business objectives. In order to determine the progress of enterprises' journeys to the public cloud, Gigaom created a survey designed to understand what is happening within enterprises that are adopting public cloud computing.
Key items discovered in this survey include:
1. The use of public cloud computing is quickly expanding, and most organizations already exploit public cloud-based resources to run both critical and non-critical business systems.
2. Application development and testing are among the highest value uses of leveraging the public cloud, and most enterprises have moved from proof of concept to actual deployments in the last few years.
3. A larger number of business units leverage public cloud resources that are made up largely of AWS and a few other public cloud providers.
4. A surprising number of public cloud instances support the daily operations of many businesses, with a smaller percentage emerging as heavy users that leverage a massive amount of public cloud resources.
5. First cloud projects are a thing of the past, with most working on their second, third, or more major cloud deployments.
6. Change management and cloud governance are becoming more commonplace and accepted by enterprises.
In this lunchtime keynote presentation, David Linthicum provides a look at what’s occurring right now as enterprises move to public clouds using real data from real adopters. What’s more, Linthicum will provide predictions around what is likely to occur in the world of cloud computing in 2015 and 2016, as well as recommendations around how you can exploit changes and growth of cloud-based platforms to your own best advantage.
Tens of thousands of customers, a few million users, frequent deployments, and immediate feedback about bugs: this is how I would briefly describe the context of deploying JIRA Cloud releases. In Spartez, based on our partnership with Atlassian, we not only take part in the process of developing new functionalities but also take care of the quality of JIRA Cloud releases. It is not difficult to notice that, given the above-mentioned context, measuring the quality of our product is a challenge. How do we manage thousands of customer tickets a year? How do we handle the fact that in the case of cloud solutions we are service providers and not only product providers? How do we at least partially automate the process of measuring quality? These are the kinds of challenges we as Quality Assistance Engineers have to face.
During the presentation I will answer the above questions. I will present our approach to measuring quality, its advantages and disadvantages, data and experiences. I will also go beyond the specifics of our product and process. Measuring quality of any product is difficult. Very often either we give up getting valid data completely or we have metrics in which no one believes. What is even worse, sometimes although we have proper data we do not use them to improve our process and product. In the presentation I will describe our best practices to get valid measurements and how we use this information to avoid defects in the future.
Planning for a (Mostly) Hassle-Free Cloud Migration | VTUG 2016 Winter WarmerJoe Conlin
There is no "one right way" when it comes to a cloud migration or cloud transformation, and in this 2016 VTUG talk I explore some of the methods that have proven successful in my experience.
A Cloud Service Selection Model Based on User-Specified Quality of Service Levelcsandit
Recently, many cloud services have emerged in the cloud service market. After many candidate services are initially chosen by satisfying both the behavioral and functional criteria of a target cloud service, service consumers need a selection model to further evaluate the nonfunctional QoS properties of the candidate services. Some prior works have focused on objective and quantitative benchmark-testing of QoS by tools or trusted third-party brokers, as well as on reputation from customers. Service levels have been offered and designated by cloud service providers in their Service Level Agreements (SLAs). Conversely, in order to meet user requirements, it is important for users to discover their own optimal parameter portfolio for a service level. However, some prior works focus only on specific kinds of cloud services, or require users to be involved in some evaluating process. In general, the prior works can neither evaluate the nonfunctional properties nor select the optimal service that best satisfies both the user-specified service level and goals. Therefore, the aim of this study is to propose a cloud service selection model, CloudEval, to evaluate the nonfunctional properties and select the optimal service that best satisfies both the user-specified service level and goals. CloudEval applies a well-known multi-attribute decision making technique, Grey Relational Analysis, to the selection process. Finally, we conduct some experiments. The experimental results show that CloudEval is effective, especially when the quantity of candidate cloud services is much larger than human raters can handle.
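Grey Relational Analysis, the multi-attribute technique the abstract mentions, can be sketched in a few lines: normalize each QoS attribute, measure each candidate's deviation from the ideal reference series, and aggregate grey relational coefficients into a weighted grade. The services, weights, and attribute choices below are illustrative assumptions, not CloudEval's data.

```python
def normalize(col, benefit=True):
    """Scale a column to [0, 1]; invert cost attributes so 1 is best."""
    lo, hi = min(col), max(col)
    if hi == lo:
        return [1.0] * len(col)
    if benefit:
        return [(x - lo) / (hi - lo) for x in col]
    return [(hi - x) / (hi - lo) for x in col]

def gra_grades(rows, benefit_flags, weights, zeta=0.5):
    cols = list(zip(*rows))
    norm_rows = list(zip(*[normalize(list(c), b)
                           for c, b in zip(cols, benefit_flags)]))
    # Deviation from the ideal reference series (all 1.0 after normalization).
    deltas = [[abs(1.0 - v) for v in row] for row in norm_rows]
    dmin = min(min(r) for r in deltas)
    dmax = max(max(r) for r in deltas)
    # Grey relational coefficients with distinguishing coefficient zeta.
    coeff = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in r]
             for r in deltas]
    return [sum(w * c for w, c in zip(weights, r)) for r in coeff]

# Three candidate services: (availability %, price $/h); availability is a
# benefit attribute, price a cost attribute; availability weighted higher.
services = [(99.9, 0.12), (99.5, 0.08), (98.0, 0.05)]
grades = gra_grades(services, benefit_flags=[True, False], weights=[0.6, 0.4])
best = max(range(len(grades)), key=grades.__getitem__)  # index 0 wins here
```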
Score based deadline constrained workflow scheduling algorithm for cloud systemsijccsa
Cloud computing is the latest and emerging trend in the information technology domain. It offers utility-based IT services to users over the Internet. Workflow scheduling is one of the major problems in cloud systems. A good scheduling algorithm must minimize the execution time and cost of a workflow application while meeting the QoS requirements of the user. In this paper we consider the deadline as the major constraint and propose a score-based deadline-constrained workflow scheduling algorithm that executes a workflow within manageable cost while meeting the user-defined deadline constraint. The algorithm uses the concept of a score that represents the capabilities of hardware resources. This score value is used while allocating resources to the various tasks of a workflow application. The algorithm allocates to the workflow application those resources which are reliable, reduce the execution cost, and complete the workflow application within the user-specified deadline. The experimental results show that the score-based algorithm exhibits less execution time and also reduces the failure rate of workflow applications within manageable cost. All simulations have been done using the CloudSim toolkit.
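The score idea above can be illustrated with a toy allocator: each resource carries a capability score, and a task goes to the cheapest resource whose score predicts completion before the deadline. The resources, score formula, and prices are hypothetical, not the paper's actual values.

```python
# Hypothetical resources: higher score means faster task execution.
resources = [
    {"name": "small",  "score": 1.0, "cost_h": 0.05},
    {"name": "medium", "score": 2.0, "cost_h": 0.12},
    {"name": "large",  "score": 4.0, "cost_h": 0.30},
]

def runtime_h(task_len: float, score: float) -> float:
    """Predicted runtime in hours: task length divided by capability score."""
    return task_len / score

def pick(task_len: float, deadline_h: float):
    """Cheapest resource that still meets the deadline, or None."""
    feasible = [r for r in resources
                if runtime_h(task_len, r["score"]) <= deadline_h]
    if not feasible:
        return None  # the deadline cannot be met by any resource
    return min(feasible,
               key=lambda r: runtime_h(task_len, r["score"]) * r["cost_h"])

pick(8.0, deadline_h=3.0)  # only "large" finishes in time (8/4 = 2 h)
pick(2.0, deadline_h=3.0)  # "small" is cheapest and still meets the deadline
```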
Presentation of the Ph. D. dissertation SLA-Driven Cloud Computing Domain Representation and Management. This presentation explains a new methodology for the representation and management of Cloud services using SLA fragments. Cloud resources are described as independent SLA fragments, which are composed on the fly to create complete Cloud services.
An architecture for the management of Cloud services is also presented.
Cloudcompaas, an open source SLA-driven framework is introduced. Cloudcompaas implements the methodology and architecture presented earlier and enables the management of the complete lifecycle of Cloud services.
Finally a set of experiments to validate the utility and performance of the contributions is presented.
Organisations can find it more difficult to manage public cloud services than they originally expected and often they lack the skills and expertise necessary to do it properly.
Management becomes increasingly complex when using multiple cloud providers. Vendors are approaching cloud management in unique ways, in most cases tying you into specific cloud providers and making it difficult to compare, select, and even control public and hybrid solutions.
This session will look at tools and approaches that can assist your organisation to not only migrate workloads to the public cloud but also between the clouds, and toolsets designed to improve and control performance and flexibility in the cloud.
IEEE 2015-2016 A Profit Maximization Scheme with Guaranteed Quality of Servic...1crore projects
Hierarchical SLA-based Service Selection for Multi-Cloud Environments
1. Hierarchical SLA-based Service Selection
for Multi-Cloud Environments
Soodeh Farokhi, Foued Jrad, Ivona Brandic and Achim Streit
Vienna University of Technology, Austria
2. Motivation Scenario
SaaS providers as potential IaaS users
• Less maintenance overhead
• More customer satisfaction
Multiple Clouds
• Various QoSs and pricing models
Service Level Agreement
Multi-Cloud delivery model
Problem: SLA-based Multi-Cloud Service selection for the SaaS provider
[Scenario diagram] CAD App Provider → CAD App Customer
• CAD Application UI (1 Large VM)
• CAD Models (2x 1000 GB Storage)
• GPU (5 Large VM)
HS4MC: Hierarchical SLA-based Service Selection for Multi-Cloud Environments (CLOSER 2014)
3. Approach Overview
Problem domain
• Handling SLA heterogeneity
• Maximizing SaaS provider benefits
Solution domain
• InterCloud-SLA
• Service selection by utilizing prospect theory
4. What is the Prospect Theory?
Prospect Theory is a behavioral economic theory
• developed by Daniel Kahneman in 1979 (Nobel Prize in 2002)
• a descriptive model for decision making under uncertainty
• based on the potential value of losses and gains rather than the final outcome
An alternative decision-making model to Utility Theory
• more realistic in calculating user satisfaction
• psychologically more accurate
First application to the Cloud service selection problem
5. Service Ranking
• Calculating the user satisfaction based on Prospect Theory principles
Determining how well a service satisfies a user’s requirements
• Satisfaction with a specific service is not the same for different users
• The most suitable service for a user is not the one with the best QoS
Modeling user satisfaction as a function of
• Service quality
• The importance of each QoS parameter
Using Prospect Theory
6. HS4MC Phases and Architecture
Phases
• Phase 1: SLA Construction
− Meta-SLA
− Sub-SLA
• Phase 2: Service Selection
− Service Selection Algorithm
[Architecture diagram] Phase 1: the SaaS provider requirements enter the SLA Construction Engine, which builds the InterCloud-SLA (one meta-SLA plus sub-SLAs) and stores it in the SLA Repository. Phase 2: the Service Selection Engine, fed by the IaaS offers repository, uses a Service Ranker and a Composite Service Ranker to select the composite infrastructure service.
7. Meta-SLA and Sub-SLA
Meta-SLA Parameters (for the Multi-Cloud app, i.e. the composite infrastructure service)
• Functional: Contract duration (hours)
• Non-functional: VM budget ($ per hour), Storage budget ($ per month), Data-traffic budget ($ per month), Availability (%), Throughput (Mbit/s), Latency (ms), Reputation (1 to 10)
Sub-SLA Parameters (for each component of the composite service)
• Functional: Resource type (VM or Storage), Resource size (S, M, L), Number (how many?), Geo-location (where?)
• Non-functional: Availability (%), Response time (sec), Reputation (1 to 10)
Non-functional tuple = {Min, Max, Weight, Type}, e.g. Availability (%) = {99.5, 100, very important, soft}
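The {Min, Max, Weight, Type} tuple above can be represented as a small data structure (an illustrative sketch; the class and field names are assumptions, not the paper's implementation):

```python
from dataclasses import dataclass

@dataclass
class NonFunctionalParam:
    """One non-functional SLA parameter as a {Min, Max, Weight, Type} tuple."""
    name: str
    min_value: float   # lower accepted boundary
    max_value: float   # upper accepted boundary
    weight: float      # importance for the user, in (0, 1]
    kind: str          # "soft" (a preference) or "hard" (must be satisfied)

# e.g. Availability (%) = {99.5, 100, very important, soft}
availability = NonFunctionalParam("Availability", 99.5, 100.0, 0.9, "soft")
```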
9. Service Selection Algorithm (simplified)
Available services: {S1, S2, S3, …, S10}
Filtered lists (Step 1): sub-SLA 1 → {S2, S3, S4, S10}; sub-SLA 2 → {S4, S8}; sub-SLA 3 → {S3, S5, S10}
Scored lists (Step 2): sub-SLA 1 → {s4, s10, s2, s3}; sub-SLA 2 → {s8, s4}; sub-SLA 3 → {s5, s10, s3}
Step 1: Filtering out unqualified services for each sub-SLA
• Hard non-functional constraints
• Functional constraints
Step 2: Local scoring of services
• Sub-SLA values and weights
• Sub-scoring based on the satisfaction function
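Steps 1 and 2 can be sketched as follows (the dict layout and the simple weighted normalized score are illustrative assumptions standing in for the paper's prospect-based score):

```python
def filter_services(services, sub_sla):
    """Step 1: keep only services that meet the functional constraints and
    the hard non-functional constraints of a sub-SLA."""
    qualified = []
    for s in services:
        if s["type"] != sub_sla["type"]:  # functional constraint
            continue
        hard_ok = all(
            p["min"] <= s["qos"][p["name"]] <= p["max"]
            for p in sub_sla["params"] if p["kind"] == "hard"
        )
        if hard_ok:
            qualified.append(s)
    return qualified

def score_services(qualified, sub_sla):
    """Step 2: local scoring against the sub-SLA values and weights."""
    def score(s):
        return sum(
            p["weight"] * (s["qos"][p["name"]] - p["min"]) / (p["max"] - p["min"])
            for p in sub_sla["params"]
        )
    return sorted(qualified, key=score, reverse=True)

# Toy example: one hard availability constraint on VM services
services = [
    {"name": "S1", "type": "VM", "qos": {"availability": 99.0}},
    {"name": "S2", "type": "VM", "qos": {"availability": 99.9}},
    {"name": "S3", "type": "Storage", "qos": {"availability": 99.8}},
    {"name": "S4", "type": "VM", "qos": {"availability": 99.6}},
]
sub_sla = {
    "type": "VM",
    "params": [{"name": "availability", "min": 99.5, "max": 100.0,
                "weight": 0.8, "kind": "hard"}],
}
ranked = score_services(filter_services(services, sub_sla), sub_sla)
```

S1 is dropped for violating the hard availability bound and S3 for the wrong resource type; the survivors are ranked by their weighted score.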
10. Satisfaction Function
Availability = {Min, Max, Weight} = {99.5, 100, w1 / w2 / w3}
Service1 = {QoS, Value} = {Availability, 99.7}
--------------------------------------------------------------------------------
Normalize value
99.7% = 0.4 as normalized value
--------------------------------------------------------------------------------
Calculate satisfaction score
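The worked numbers above follow standard min-max normalization ((99.7 − 99.5) / (100 − 99.5) = 0.4). A sketch of a prospect-style satisfaction score around the 0.5 reference point follows; the exact function shape and the constants alpha and lam are assumptions in the spirit of Kahneman and Tversky, not the paper's formula:

```python
def normalize(value, lo, hi):
    """Min-max normalize a QoS value into [0, 1]; higher always means better."""
    return (value - lo) / (hi - lo)

def satisfaction(n, weight, ref=0.5, alpha=0.88, lam=2.25):
    """Prospect-style satisfaction: gains and losses are measured against
    the reference point; concave for gains, convex for losses, with losses
    weighted more heavily (lam > 1). The importance weight steepens the
    curve, so the same QoS value satisfies different users differently."""
    gain = n - ref
    if gain >= 0:
        return weight * gain ** alpha
    return -weight * lam * (-gain) ** alpha

# Availability = {Min, Max} = {99.5, 100}: a measured 99.7% normalizes to 0.4
n = normalize(99.7, 99.5, 100.0)
```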
11. Service Selection Algorithm (simplified)
All possible combinations (meta-SLA ranking, Step 3):
ranked 1: {s10, s4, s2}
ranked 2: {s5, s4, s4}
…
ranked 24: {s3, s8, s3}
Best combination (meta-SLA and sub-SLA ranking, Step 4): {s5, s4, s4}
Sorted lists (from Step 2): sub-SLA 1 → {s4, s10, s2, s3}; sub-SLA 2 → {s8, s4}; sub-SLA 3 → {s5, s10, s3}
Step 3: Scoring of possible compositions
• Meta-SLA values and weights
• Aggregated QoS values of selected services
• Meta-scoring based on the satisfaction function
Step 4: Finding the best combination of services
• Considering both meta-score and sub-scores
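Steps 3 and 4 can be sketched as enumerating one candidate per sub-SLA list and combining sub- and meta-scores (a simplified sketch; the aggregation of meta-level QoS values is hidden behind the `meta_score` callback, and the equal default weights are an assumption):

```python
from itertools import product

def best_combination(scored_lists, sub_score, meta_score, w_sub=0.5, w_meta=0.5):
    """Steps 3-4: enumerate every combination (one service per sub-SLA),
    meta-score each composition, and pick the one with the best weighted
    sum of sub-scores and meta-score."""
    best, best_total = None, float("-inf")
    for combo in product(*scored_lists):
        total = (w_sub * sum(sub_score[s] for s in combo)
                 + w_meta * meta_score(combo))
        if total > best_total:
            best, best_total = combo, total
    return best, best_total

# Toy example with two sub-SLAs; scores are illustrative stand-ins
scored_lists = [["A", "B"], ["C"]]
sub_score = {"A": 1.0, "B": 0.2, "C": 0.5}
best, total = best_combination(scored_lists, sub_score, meta_score=lambda c: 0.0)
```

With the filtered list sizes on the previous slide (4 × 2 × 3), this enumeration yields exactly the 24 ranked combinations shown.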
Evaluation
12. Evaluation (1)
Comparing functionality with a utility-based algorithm [Jrad et al., 2013]
• Implementing both algorithms
• Considering SLA satisfaction as metric
Setting up a realistic simulation environment
• Simulating 12 commercial IaaS Cloud providers (20 distributed DCs)
− QoS attributes obtained via the CloudHarmony service (from a client located in Europe)
− Pricing models based on real Cloud providers
Considering the impact of 3 aspects on service selection
• Cost, meta-SLA, and sub-SLA
13. Evaluation (2)
Using CAD app scenario for users (3 software editions)
• One Meta-SLA for each software edition
• Three sub-SLAs (UI, Computation, Storage) for each software edition
[Tables: the meta-SLA and sub-SLAs for each software edition]
14. Theoretical Comparison
Features | Utility-based algorithm | Prospect-based algorithm
Cost | Focused, fixed, and predefined impact | Flexible impact, like other QoS parameters
Weighting | From (0, 1], dependently for each parameter | From (0, 1], independently
SLA satisfaction | Meta-SLA only | Both meta-SLA and sub-SLA
Hard non-functional | − | Supported
Fitting function | Separate and predefined for each QoS | Unified, based on prospect theory; flexible by weight (scalable to support more QoS)
15. Experimental results (1)
Impact of cost on service selection
• Minimizing cost is the main goal for the user
• 3 executions of both algorithms for each software edition
− result of standard edition:
16. Experimental results (2)
Impact of meta-SLA parameters on service selection
• Availability is as important for the user as cost
• One execution of both algorithms
− result of enterprise edition setup:
17. Experimental results (3)
Impact of sub-SLA parameters on service selection
• User has specific non-functional requirements for sub-components (UI, Storage)
• One execution of both algorithms
− result of professional edition setup:
18. Summary
Proposals for SaaS providers
• Multi-Cloud Service selection
• InterCloud-SLA concept (sub-SLA and meta-SLA)
Service Selection Algorithm
• Justification of using prospect theory to rank the Cloud services
Evaluation
• Tendency of choosing a single Cloud (latency and traffic cost)
• The importance of Sub-SLA concept in a Multi-Cloud environment
• Cost is not always the most important factor
19. Future Work
Scalability of the algorithm (performance evaluation)
Completing the InterCloud-SLA construction phase
• IEEE InterCloud Working Group (ICWG), InterCloud-SLA
Investigating Multi-Cloud SLA management steps at runtime
• SLA monitoring strategies
• SLA violation detection and (pre) reaction
• SLA validation and enforcement (developing penalty model)
Related paper
• Soodeh Farokhi, “Towards a Multi-Cloud SLA Management Framework”, Doctoral Symposium, 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2014), USA, May 26-29, 2014 (accepted).
20. Thank you for your attention
Soodeh Farokhi
Distributed System Group,
Institute of Information Systems,
Vienna University of Technology, Austria
soodeh.farokhi@tuwien.ac.at
http://www.infosys.tuwien.ac.at/staff/sfarokhi/
23. Theoretical Comparison (2)
Utility-based Algorithm
Utility = QoS Score × Budget − Service Cost
Prospect-based Algorithm
Final Score = SS_sub-SLAs × W_sub-SLA + SS_meta-SLA × W_meta-SLA
24. Aggregate Functions
Non-functional parameters in the meta-SLA each have a corresponding aggregation function, used to calculate the aggregated values of the composite service's non-functional parameters from its constituent services.
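Common aggregation rules for composite services can be sketched as below; these are textbook rules assumed for illustration (availabilities multiply, latencies add along a serial path, throughput is limited by the bottleneck component), not necessarily the paper's exact functions:

```python
from math import prod

def aggregate(components):
    """Aggregate the non-functional values of a composite service's
    constituent services into composite-level values."""
    return {
        # the composite is up only if every component is up
        "availability": prod(c["availability"] for c in components),
        # latencies accumulate along a serial path
        "latency": sum(c["latency"] for c in components),
        # throughput is bounded by the weakest component
        "throughput": min(c["throughput"] for c in components),
    }

agg = aggregate([
    {"availability": 0.99, "latency": 10, "throughput": 100},
    {"availability": 0.90, "latency": 5, "throughput": 50},
])
```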
25. Latency Calculation
W(S_ij) is the connectivity value of each edge in the graph.
Lat(S_ij) is taken from the latency matrix, according to the data-center geo-locations of the services included in the composition.
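The calculation above can be sketched as a weighted sum over the edges of the application-topology graph (the data layout is an assumption; only the W(S_ij) × Lat(S_ij) summation comes from the slide):

```python
def composite_latency(edges, connectivity, latency_matrix):
    """Sum of W(S_ij) * Lat(S_ij) over the edges of the application
    topology graph; Lat(S_ij) comes from a latency matrix indexed by
    the data-center locations of the composed services."""
    return sum(connectivity[(i, j)] * latency_matrix[i][j] for (i, j) in edges)

# Toy example: three services, two connected edges
edges = [(0, 1), (1, 2)]
connectivity = {(0, 1): 1.0, (1, 2): 0.5}
latency_matrix = [
    [0, 10, 40],
    [10, 0, 20],
    [40, 20, 0],
]
total = composite_latency(edges, connectivity, latency_matrix)
```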
28. Utility based algorithm (details)
The utility for each composite service is calculated by subtracting the total service usage costs (including VM, traffic, and storage) from its monetized usage benefit.
Monetized usage benefit: the user's overall score for the SLAs multiplied by his maximal payment for a perfect service.
The overall SLA score (a normalized value between 0 and 1), defined as the sum of the weighted single SLA parameter scores, expresses the overall user satisfaction with a certain service quality.
SLA score: uses fitting functions to map each SLA metric value to a satisfaction level between 0 and 1.
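The utility computation described above can be sketched directly (the per-parameter fitting-function scores are taken as given inputs; function and parameter names are assumptions):

```python
def overall_sla_score(param_scores, weights):
    """Overall SLA score in [0, 1]: sum of the weighted single-parameter
    scores, each already mapped by its fitting function into [0, 1]."""
    return sum(w * s for s, w in zip(param_scores, weights))

def utility(param_scores, weights, max_payment, total_cost):
    """Utility = monetized usage benefit - total usage cost, where the
    benefit is the overall SLA score times the user's maximal payment
    for a perfect service."""
    return overall_sla_score(param_scores, weights) * max_payment - total_cost

# Two parameters, equal weights: overall score 0.65, benefit 65, cost 40
u = utility(param_scores=[0.8, 0.5], weights=[0.5, 0.5],
            max_payment=100.0, total_cost=40.0)
```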
29. Prospect theory (details)
This theory implies that changes in specific quality aspects of a service are sensed more by users who have assigned higher weights to those quality parameters.
The user's satisfaction with a service is based on the gains and losses relative to the reference point (normalized value of 0.5), instead of being absolutely determined by the normalized QoS parameters of that service.
The satisfaction function should be concave for gains and convex for losses.
30. What is missing in the related work?
Cloud Service Selection
• Maximizing the profit of either the customer or the IaaS provider
• Using only single-Cloud services
• Without thoroughly investigating SLA challenges
31. Budget Calculation/ Cost Prediction
Budget: the user's willingness to invest
• VM unit cost: the cost of renting a small VM per hour
• Storage cost: the cost of storing 1 GB of data per month
• Data-traffic cost: the cost of transferring 1 GB of data per month
− Considering the contract duration and the size of possibly transferred data
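The cost prediction over the contract duration can be sketched from these unit costs (parameter names and the 730 hours-per-month conversion are assumptions):

```python
def predicted_cost(vm_rate_hour, n_small_vm_units, storage_rate_gb_month,
                   storage_gb, traffic_rate_gb_month, traffic_gb,
                   contract_hours, hours_per_month=730.0):
    """Predict total leasing cost over the contract duration: VMs are
    billed per hour, storage and data traffic per GB per month."""
    months = contract_hours / hours_per_month
    return (vm_rate_hour * n_small_vm_units * contract_hours
            + storage_rate_gb_month * storage_gb * months
            + traffic_rate_gb_month * traffic_gb * months)

# One month of 2 small-VM units plus 100 GB storage and 10 GB traffic
cost = predicted_cost(vm_rate_hour=0.1, n_small_vm_units=2,
                      storage_rate_gb_month=0.05, storage_gb=100,
                      traffic_rate_gb_month=0.1, traffic_gb=10,
                      contract_hours=730.0)
```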
Good morning everybody and welcome to my talk. I'm going to present a new method for service selection in Multi-Cloud environments, called HS4MC.
Let's start with a motivation scenario >>
General points:
Speed of my talking
Start my talk by asking a simple Q: How many of you have thought about how to select a Cloud service so far?
My core message should be repeated and emphasized several times (especially in the summary and motivation)
Use coloring to make some points stand out ... very rarely, but for important ones
Pause before saying important points and give a signal when I want to start major slides.
Pronounce: Multi, Component, virtual Machine, Economics, nobel prize, Daniel Kahneman,
Tell them that Intro and Background they are familiar with and it is only for reminding some points
Consider a Computer Aided Design (CAD) software company that wants to offer its CAD software to its customers as a Cloud software service.
It has various ways to deploy the components of this software (a database for keeping CAD models, a virtual machine for deploying the CAD application user interface, and another virtual machine for the Graphical Processing Unit (GPU)), with various QoS requirements for each component, whether in a private data center or using public Cloud infrastructure services.
To decrease maintenance overhead and, more importantly, increase the customer satisfaction level in a very competitive environment, the SaaS provider decides to deploy on Cloud infrastructure services.
There are two major points in this scenario: first, this deployment is not for a short time, as we consider the Software-as-a-Service (SaaS) provider as the Cloud infrastructure user (so we work at the level of a software company, not an end user); moreover, realizing the requested requirements affects the quality of the offered services and consequently the reputation of the SaaS provider.
As I mentioned, there are various components which need to be deployed on the Cloud in order to execute the CAD software; each may have different quality-of-service requirements. It is like a composite infrastructure service request:
high availability and good response time for the application UI
Good reputation and geo-location (security aspects) for DB
High performance for VM related to the GPU
So all these requirements are not satisfiable using only a single Cloud; utilizing multiple Cloud services (based on a Multi-Cloud delivery model) is an effective solution, which we used in our approach.
The remaining problem is how to choose services for the SaaS provider in order to achieve its goal (decrease overhead and increase customer satisfaction).
Well, let’s look at the overview of our approach at the next slide >>
IaaS and SaaS providers and their offered services are growing rapidly!
- SaaS providers are looking into solutions that minimize their overall infrastructure leasing cost without adversely affecting their customers (who have a wide range of requirements these days).
To achieve this goal, it is essential to have a clear definition of user requirements and then provide a solution by which these requirements are met.
Computer-aided design (CAD) is the use of computer systems to assist in the creation, modification, analysis, or optimization of a design. CAD software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and to create a database for manufacturing.
The rationale behind choosing this example is that deploying CAD applications in the Cloud has been investigated widely; for example, it is used in the CloudFlow project, which aims to make Cloud infrastructures a practical solution for manufacturing by automatic provisioning of SaaS applications over Cloud infrastructure services.
The advantages of deploying on cloud:
1. Scalable infrastructure (VM and storage)
2. client with sharing and collaboration support
3. Cost: pay per usage
We took provider perspective.
There are three layers in this problem, customer layer, SaaS provider layer and IaaS provider layer.
As I mentioned, the SaaS provider is considered our user, so it has various requirements for the software deployment.
On the other hand, we have a Multi-Cloud IaaS environment with different QoSs and pricing models.
As we all know, in a Multi-Cloud delivery model we have a middleware which handles the interaction among the various Cloud services.
The proposed approach sits at the middleware layer and assists SaaS providers in effectively selecting the best-fit infrastructure services in such a way as to:
Maximize the SaaS provider's benefits
Handle SLA heterogeneity
How to achieve that? Our solution has these features:
In such an environment the SLA plays a critical role, so we construct SLAs which are provider-independent, called InterCloud-SLAs
We consider the SaaS provider request as a Composite infrastructure request
And the major point is using the principles of an economics theory called prospect theory for the service ranking in our method
Well, in order to see what this theory is about and how we use it in this work, let's look at the next slides >>
The SaaS provider is our USER (by user we mean the SaaS provider)
The position of our approach
SLA Hierarchy problem
Multi-Criteria SLA-based Service selection in a Multi-Cloud environment
Looking at the problem as a Composite Infrastructure Request, and defining SLAs for both sub-service and composite service
- In Multi-Cloud, dependent components of a single SaaS application can be distributively deployed on diverse Cloud infrastructures with various QoS attributes; from the SaaS provider's perspective, the deployment can be considered a Composite Infrastructure Service with a set of functional and non-functional requirements for each included application component.
How to score and select services for each single component and, on the other hand, optimize this allocation to satisfy the requirements of the composite service?
2) These concerned QoS parameters can be conflicting or have differing importance degrees for the SaaS provider.
The diagram shows that the pain of losses is stronger than the pleasure of gains (The loss side of the curve is steeper than the gain side)
Prospect theory is a behavioral economic theory that describes the way people make decisions
It was developed by Daniel Kahneman in 1979 as a psychologically more accurate description of decision making, compared to expected utility theory
He won the Nobel Prize for this theory in 2002!
This theory says people make decisions based on the potential values of losses and gains instead of the final outcome
So somehow we can consider it a descriptive model of real-life choices rather than optimal decisions
Utility theory is a widely accepted theory in service selection, and now, as an alternative model, we are going to apply prospect theory, as its first application to service selection in Cloud computing,
At the next slide we will figure out where we can use the principle of this theory in service selection >>>
A bit history about it!
Nobel prize
- Utility is the satisfaction one gets from consuming a good or service
Many previous research works simply took utility values as satisfaction values. However, such an approach assumes that the user's satisfaction with a service candidate is proportionate to the service QoS attributes, which is not always true, as shown by prospect theory in economics.
Using the utility value directly as the satisfaction value implies that satisfaction with a service candidate is determined only by service utility, and that the user's satisfaction will be doubled if the service's utility value is doubled. For example, according to such a hypothesis, a service's price changing from $2 per day to $1 per day would double the user's satisfaction. However, prospect theory [15,16] points out that satisfaction should be based on the gains and losses relative to the reference point instead of being absolutely determined by the service's normalized QoS utility value.
According to prospect theory, the satisfaction value function should be concave for gains and convex for losses, and the satisfaction value function that passes through the reference point is s-shaped and asymmetrical. It means that losses hurt more than gains feel good.
Satisfaction is subjective and varies for different users.
The same QoS value will be a gain for a user who assigns lower weights while being a loss for a user with a higher weight.
We use this theory in calculating the user satisfaction score for a specific Cloud service
Indeed, based on this theory, the satisfaction with a specific service is not the same for different users
How do we model user satisfaction?
As you can see in this satisfaction function diagram, satisfaction is a function of the service quality and the importance of each QoS attribute for the user, as the weight changes the steepness of the curve (we will discuss this modeling in more detail later)
- This theory implies that changes in specific quality aspects of a service are sensed more by users who have assigned higher weights to those quality parameters.
- Now that you are familiar with an overview of the solution, let's have a look at the phases and the architecture through which we realize this solution, on the next slide >>
This theory is appropriate for describing user decisions among various choices under uncertainty, and considers human behavior in the computation of user satisfaction.
The whole solution is realized through two main phases: in the first one, the InterCloud-SLAs are constructed for the requested composite infrastructure service and each included component.
In the second phase, the Cloud infrastructure services are chosen based on the constructed SLAs and the SLAs of the Cloud IaaS offers.
The introduced phases are mapped to the following architecture, which includes two main components: the SLA Construction Engine (corresponding to the first phase) and the Service Selection Engine, which has the responsibilities of the second phase.
The focus of this work is on the second phase.
Well, you are wondering what these two concepts, meta-SLA and sub-SLA, are, right? So we are going to explain them on the next slide >>
Say composite service here
two main phases:
SLA Construction phase
the Composite Infrastructure Service of the SaaS provider is broken down into a set of functional and non-functional requirements, called sub-SLAs and a meta-SLA (forming SLAs named InterCloud-SLAs that are provider-independent).
- As the input of this phase, the SaaS provider submits its Cloud infrastructure requirements to the SLA Construction Engine as a single XML file.
(two abstract parts: one including the requirements for each infrastructure component (Cloud virtual machine (VM) or storage), and the other enfolding the requirements of the whole set of requested infrastructures.)
(2) Service Selection phase
- The InterCloud-SLAs and the IaaS providers’ offers are two inputs of this phase.
- Meta-SLA includes the requirements of a more abstract level related to the whole composite service
Sub-SLAs express requirements of each infrastructure service included in this composite service.
In order to have InterCloud SLAs, we proposed two new concepts called meta-SLA and sub-SLA both have some functional and non-functional parameters:
While Meta-SLA consists of the parameters related to the whole composite service
sub-SLA includes the parameters corresponding to each included service of the composite service request
For example for the whole composite service, as meta-SLA parameters, we will support availability, throughput, latency and reputation (a number from 1 to 10)
And for each component, we define sub-SLA and cover functional parameters such as resource type (we support both Storage and VM in our work), number of each, size and location as well as availability, response time and reputation as non-functional parameters.
Sub-SLA concept is a new concept in comparison with existing service selection approaches.
Our general modeling is such that for each non-functional parameter we have four attributes: lower accepted boundary, upper accepted boundary, the importance of this parameter for the user, and its type.
E.g., consider availability as a non-functional parameter: we treat hard non-functional parameters like functional parameters, which have to be satisfied in the final solution rather than being a user preference.
Now, in order to have a clearer view of the introduced models, let's again look at the motivation scenario, but this time with the constructed meta-SLA and sub-SLAs, on the next slide >>
We have three sub-SLAs which are related to each component:
for the GPU and computation unit we need 5 large high performance VM
For the application UI we need one large high available and low response time VM
And for storing CAD models we need 2 large (1000 GB) Cloud storages, located in Europe, whose provider has a high reputation.
We also have some parameters corresponding to the whole CAD software deployment, which are meta-SLA parameters, such as resource cost (VM, storage), traffic cost, and latency.
Until now we introduced the supported SLAs, the motivation scenario, and the idea of our service selection; now let's start with the “Service Selection Algorithm”, which is based on the principles of prospect theory, on the next slides >>
- Deploying CAD applications in the Cloud has been investigated widely;
- The CAD software provider aims to deploy its CAD software in the Cloud and offer it as a public Cloud SaaS, called CAD-as-a-Service (CAD-aaS), in three software editions: enterprise, professional, and standard. Each edition has a specific set of requirements.
The CAD-aaS provider wants to deploy the various components of its CAD service.
The components communicate with each other based on the CAD software application topology.
5 large VM with GPU features for the computation component
The cornerstone of the Service Selection Phase is a selection algorithm that works based on prospect theory to compute the user satisfaction score for a certain service.
- Due to the variety of differing non-functional parameters in metrics and scales for a given service set, they need to be normalized before being used in service ranking.
So again, consider availability as a non-functional parameter requested in a sub-SLA by Min, Max, and importance weight; we want to see how the satisfaction score of a service is calculated based on these SLA values in our “Service Selection Algorithm”:
First of all (Step 2), the values are normalized, where NF is the availability value of the Cloud service, and Min and Max are the accepted boundaries included in the SLA.
Then, based on the satisfaction scoring function, which is a function of the normalized value and the importance weight, we calculate the satisfaction score of a specific service aspect (like availability) for a certain SLA parameter.
Suppose three users assign three different weights to “Availability”; based on the following diagrams, their satisfaction with the same service is not the same. Taking 0.5 as the reference normalized value: for normalized values below 0.5, the higher the importance weight, the lower the user's satisfaction with that service, and vice versa for normalized values greater than 0.5, e.g. 0.6.
Until now we have introduced the “Service Selection Algorithm” along with the sub-SLA and meta-SLA concepts; on the next slides we will explain how we evaluated our work >>
We normalize service quality parameters in such a way that the higher value always means better.
This formula computes the user satisfaction score of the normalized value N_ik of each non-functional parameter in S_i based on the weight W_j in sub-SLA_j.
- In the diagram, the accepted boundaries are 99.5% as minimum and 100% as maximum; the normalized value of 0.4 (value = 99.7%) then has different satisfaction scores for standard, professional, and enterprise users based on their specified weights.
- Justify how I use prospect theory to calculate the satisfaction function for each normalized value of NF and weight.
We will explain the prospect-based satisfaction function at the next slide >>
As mentioned before, the proposed theory is an alternative model to the widely accepted and used utility theory, so for the evaluation we compare the results with a state-of-the-art utility-based algorithm.
For this reason, we set up a simulation environment; to model a heterogeneous Multi-Cloud environment we simulated
12 commercial IaaS provider profiles (in total 20 distributed data centers), using Cloud tests from the “CloudHarmony” service.
To have various SLA parameters, we consider that the CAD software provider is going to deploy three different software editions (standard, professional, and enterprise) of the CAD software on the Cloud, so it has a different set of requirements (sub-SLAs and meta-SLA) for each software edition.
At the next slides we will see the experimental results via various test scenarios >>
Practice More (these 3 slides)!
In the first scenario, we showed that the proposed (prospect-based) algorithm can behave almost the same as the utility-based algorithm in case cost is the most important factor in service selection.
We consider the standard software edition SLAs, as cost is a major factor for standard users.
Although the impact of cost in the prospect-based algorithm is not predetermined as in the utility-based algorithm, by assigning proportional weights to the service selection criteria, the proposed algorithm can behave the same as a utility-based method, which is widely accepted for Cloud service selection, from the efficiency point of view.
In the diagrams, the first column shows the requested meta-SLA value for different non-functional parameters (availability, throughput, latency, and reputation).
The second (green) column shows the value of this parameter for the selected services included in the composite service using the utility-based algorithm, and the red one shows the corresponding values using the prospect-based algorithm.
For example, Reputation is not supported by the utility-based algorithm; that's why we don't have any value for it here.
Both algorithms in this scenario chose services from the same provider; that's why the values of traffic cost in the last diagram are zero for them.
In this scenario, we consider a case in which another non-functional parameter is as important as cost for the user; for example, the SaaS provider needs a highly available composite service for the deployment of an enterprise software edition. In this case the prospect-based algorithm outperformed the utility-based algorithm, as there is an SLA violation (availability) for the services chosen by the utility-based algorithm.
It selects more suitable services which, first, do not violate any requested meta-SLA parameters, and then chooses the services with better quality values for the parameters that are more important for the user.
The last test scenario, highlights one of a new features of the proposed algorithm, which is supporting sub-SLA instead of only considering an SLA for the whole composte service as almost all the existing service selection algorithm support.
It means the proposed algorithm not only tries to find an optimum composite service with accepted aggregated QoS values to satisfy the meta-SLA, but also consider the sub-SLA. While, existing service selection algorithms that support composite service selection, only focus on finding the set of services which satisfies the aggregated QoS parameters and neglect to view the included services in this composite service separately. We can cover this need easily by the sub-SLA concept.
The utility-based algorithm cannot support non-functional parameters as sub-SLAs!
The diagrams show that the selected services satisfy all the values requested in the sub-SLAs, as well as the aggregated values of the meta-SLA.
Well, now let me summarize my talk on the next slide >>
Let me just summarize
We proposed a method called HS4MC as an assistant for SaaS providers with two goals:
Service selection for multi-cloud environments
Constructing the Inter-Cloud SLA concept in order to handle the SLA heterogeneity of IaaS providers in this environment
The cornerstone of our work is a service selection algorithm based on the principles of an economic theory called "prospect theory", used to calculate a user's satisfaction with a given service based on the SLA (the top-ranked service has the largest satisfaction score, not the best QoS!)
Well, let's briefly look at the future steps of our work on the last slide >>
The top-ranked service is the one with the largest satisfaction score, which means its QoS can best satisfy the user's QoS requirements. Hence, services whose QoS is not the best among all available services can still be ranked at the top, as long as they perfectly satisfy the user's QoS requirements. This helps improve service utilization and saves resources.
Users' perception of how well a service's QoS satisfies their QoS requirements is modeled with prospect theory, which incorporates human behavior and psychological principles into the computation of the services' satisfaction scores.
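As a rough illustration of this scoring idea, here is a sketch of a prospect-theory value function applied to the deviation of a measured QoS value from its SLA reference point. The parameters `alpha`, `beta`, and `lam` follow the commonly cited Tversky-Kahneman estimates; the exact parameterization used in HS4MC may differ.

```python
# Sketch of a prospect-theory value function for QoS scoring.
# Losses (SLA shortfalls) are weighted more heavily than equal-sized
# gains (over-fulfillment), reflecting loss aversion.
def prospect_value(measured, reference, alpha=0.88, beta=0.88, lam=2.25):
    gain = measured - reference          # deviation from the SLA reference point
    if gain >= 0:
        return gain ** alpha             # gains are valued concavely
    return -lam * ((-gain) ** beta)      # losses weigh roughly 2x more

# A service that slightly violates the SLA is penalized more than an
# equally sized over-fulfillment is rewarded:
print(prospect_value(0.99, 0.95))   # small gain, small positive score
print(prospect_value(0.91, 0.95))   # same-sized loss, larger negative score
```

This asymmetry is what lets a "good enough" service outrank a best-QoS service: satisfaction, not raw quality, drives the ranking.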
Make the message clear! (Less text and a very clear focus; emphasize the benefits of the model.) Use coloring and mention HS4MC!
Invite the audience to ask questions (I am looking forward to your comments and questions)
Thank people for good questions (I am glad you asked that ...)
The last part of our modeling is a graph that describes the degree of connectivity among sub-SLAs (a number between 1 and 3; larger means higher connectivity). In this graph, the nodes represent the sub-SLAs, and each edge shows to what degree the two corresponding components will transfer data at runtime, which influences the traffic cost and latency of the composite deployment. As depicted in Figure 3, the connectivity value on each edge shows the amount of data transferred between the involved nodes. For example, the computation node transfers the largest amount of data with the storage node, compared to the other edges in the CAD-aaS graph.
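A minimal sketch of how such a connectivity graph could be encoded and used, assuming component names and a per-unit cost that are purely illustrative: edges between components placed on different providers contribute to the traffic cost in proportion to their connectivity degree, while co-located components contribute nothing.

```python
# Illustrative encoding of the sub-SLA connectivity graph: nodes are
# application components (sub-SLAs), edge weights are degrees 1..3.
connectivity = {
    ("computation", "storage"): 3,    # heaviest data transfer (cf. Fig. 3)
    ("computation", "frontend"): 2,
    ("frontend", "storage"): 1,
}

def traffic_cost(placement, cost_per_unit=1.0):
    """placement maps each component to its chosen provider."""
    cost = 0.0
    for (a, b), degree in connectivity.items():
        if placement[a] != placement[b]:    # only cross-provider edges cost
            cost += degree * cost_per_unit
    return cost

same  = {"computation": "P1", "storage": "P1", "frontend": "P1"}
split = {"computation": "P1", "storage": "P2", "frontend": "P1"}
print(traffic_cost(same))   # 0.0 -- all components on one provider
print(traffic_cost(split))  # 4.0 -- edges of degree 3 and 1 cross providers
```

This also explains the zero traffic cost seen in the earlier scenario: when both algorithms place all components on a single provider, no edge crosses a provider boundary.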
Although the impact of cost in the prospect-based algorithm is not predetermined as it is in the utility-based algorithm, by assigning proportional weights to the service selection criteria the proposed algorithm can behave the same as a utility-based method, which is a widely accepted approach to cloud service selection from the efficiency point of view.
Describe these three slides in full detail; we have time, so don't worry about it! (Describe everything: how we measure the parameters, why the traffic cost is 0, why there is no line for reputation under the utility-based algorithm, ...)
The position of our approach
SLA Hierarchy problem
Multi-Criteria SLA-based Service selection in a Multi-Cloud environment
Looking at the problem as a composite infrastructure request, and defining SLAs for both the sub-services and the composite service
- In a multi-cloud, the dependent components of a single SaaS application can be deployed in a distributed manner across diverse cloud infrastructures with various QoS attributes. From the SaaS provider's perspective, the deployment can therefore be considered a composite infrastructure service with a set of functional and non-functional requirements for each included application component.
The challenge is how to score and select services for each single component while, at the same time, optimizing this allocation to satisfy the requirements of the composite service.
2) The concerned QoS parameters can be conflicting or have differing degrees of importance for the SaaS provider.