SLALOM organized two live sessions to present the final versions of our legal terms and technical specifications for #Cloud #SLAs. The sessions provided examples of how to apply SLALOM in practice to improve current industry practice for #Cloud #SLAs and to support the development of cloud computing metrics.
The first webinar covered the SLALOM technical track, "Using metrics to improve Cloud SLAs".
SLALOM Webinar Final Legal Outcomes Explained "Using the SLALOM Contract Ser..." (Oliver Barreto Rodríguez)
The second webinar covered the SLALOM legal track, "Ready to Use Cloud Master Agreement for SLAs". The slides used in the legal webinar are now available.
CompatibleOne: bringing cloud as a commodity (CompatibleOne)
Cloud Brokers enable interoperability and portability of applications across multiple Cloud Providers. At the same time, incoming Cloud Providers are starting to support more and more unbundled Cloud Instance offerings, so consumers may freely configure the CPU, network bandwidth, memory, and disk capacities of their Cloud Instances. Together, these facts enable the standardization of interoperable Cloud Instance configurations. In this paper, CompatibleOne is presented as an approach to delivering Cloud Computing as a commodity. To this end, the requirements for turning a product into a commodity have been identified and mapped onto the CompatibleOne architecture components. Our approach shows the practical feasibility of delivering Cloud Computing as a commodity.
Presentation of the Ph.D. dissertation "SLA-Driven Cloud Computing Domain Representation and Management". This presentation explains a new methodology for the representation and management of Cloud services using SLA fragments. Cloud resources are described as independent SLA fragments, which are composed on the fly to create complete Cloud services.
An architecture for the management of Cloud services is also presented.
Cloudcompaas, an open-source SLA-driven framework, is introduced. Cloudcompaas implements the methodology and architecture presented earlier and enables the management of the complete lifecycle of Cloud services.
Finally, a set of experiments to validate the utility and performance of the contributions is presented.
The increase in design sizes and the complexity of timing checks at 40nm technology nodes and below is responsible for longer run times, high memory requirements, and the need for a growing set of gate-level simulation (GLS) applications, including design for test (DFT) and low-power considerations. As a result, in order to complete the verification requirements on time, it becomes extremely important for GLS to be started as early in the design cycle as possible, and for the simulator to be run in high-performance mode. This application note describes new methodologies and simulator use models that increase GLS productivity, focusing on two techniques for GLS to make the verification process more effective.
CMGT/410 v19
Business Requirements Template
How to Use This Document
This document is a template for creating a Business Requirements Document (BRD); it includes instructions and examples for guidance. As you complete your BRD using the template, only include sections pertinent to your project.
Table of Contents
How to Use This Document
Table of Contents
1. Executive Summary
  1.1 Project Overview
  1.2 Purpose and Scope of this Specification
2. Product/Service Description
  2.1 Product Context
  2.2 User Characteristics
  2.3 Assumptions
  2.4 Constraints
  2.5 Dependencies
3. Requirements
  3.1 Functional Requirements
  3.2 User Interface Requirements
  3.3 Usability
  3.4 Performance
    3.4.1 Capacity
    3.4.2 Availability
    3.4.3 Latency
  3.5 Manageability/Maintainability
    3.5.1 Monitoring
    3.5.2 Maintenance
    3.5.3 Operations
  3.6 System Interface/Integration
    3.6.1 Network and Hardware Interfaces
    3.6.2 Systems Interfaces
  3.7 Security
    3.7.1 Protection
    3.7.2 Authorization and Authentication
  3.8 Data Management
  3.9 Standards Compliance
  3.10 Portability
4. User Scenarios/Use Cases
5. Deleted or Deferred Requirements
6. Requirements Confirmation/Stakeholder Sign-Off
Appendices
Appendix A: Definitions, Acronyms, and Abbreviations
Appendix B: References
Appendix C: Requirements Traceability Matrix
Appendix D: Organizing the Requirements

1. Executive Summary
1.1 Project Overview
Describe this project or product and its intended audiences, or provide a link or reference to the project charter.
1.2 Purpose and Scope of this Specification
Describe the purpose of this specification and its intended audience. Include a description of what is within the scope and what is outside the scope of these specifications.
Example:
In Scope
This document addresses requirements related to Phase 2 of Project A:
· Modification of Classification Processing to meet legislative mandate ABC
· Modification of Labor Relations Processing to meet legislative mandate ABC
Out of Scope
The following items in Phase 3 of Project A are out of scope:
· Modification of Classification Processing to meet legislative mandate XYZ
· Modification of Labor Relations Processing to meet legislative mandate XYZ
(Phase 3 will be considered in the development of the requirements for Phase 2, but the Phase 3 requirements will be documented separately.)
2. Product/Service Description
In this section, describe the general factors that affect the product and its requirements. This section should contain background information, not state specific requirements (provide the reasons why certain specific requirements are later specified).
2.1 Product Context
How does this product relate to other products? Is it independent and self-contained? Does it interface with a variety of related systems? Describe these relationships or use a diagram to show the major components of the larger system, interconnections, and external interfaces.
2.2 User Characteristics
Create gen.
Case Study: How CA Went From 40 Days to Three Days Building Crystal-Clear Tes... (CA Technologies)
Here at CA Technologies, our development teams share many of the same challenges producing quality software as our customers.
For more information on DevOps: Continuous Delivery, please visit: http://cainc.to/CAW17-CD
The Heterogeneous Hardware & Software Alliance (Heterogeneity Alliance) is a non-profit, non-legal association of different organizations. The initiative was created by the TANGO project and has since been joined by six other members, all pursuing a common objective: influencing the H-HW&SW market.
The alliance is focused on all phases of applications for H-HW&SW to enable a new wave of development and execution tools for next-generation applications:
- from design time,
- to enhanced execution,
- to parallel programming,
- and to an optimized runtime in a number of dimensions (energy, performance, data locality, and security, among others).
DITAS Cloud Platform allows developers to design data-intensive applications, deploy them on a mixed cloud/edge environment, and execute the resulting distributed application in an optimal way by exploiting data and computation movement strategies, regardless of the number and type of devices or the heterogeneity of the runtime environments. It brings the best of the Cloud and Edge worlds to your developer toolbox.
Heterogeneous Hardware & Software Alliance...
The time has come to start pushing heterogeneity forward. We are glad to invite you to the first session of the "Heterogeneous Hardware & Software Alliance". This alliance is an initiative originally undertaken by members of the TANGO project (Transparent heterogeneous hardware Architecture deployment for eNergy Gain in Operation) with the intention of creating an association of research institutions and industry organizations, joining the efforts of organizations interested in developing future technologies and tools to advance, and take full advantage of, computing and applications on heterogeneous hardware.
The main goal of the Alliance is to establish a common approach that helps research institutions like yours promote and advance their technologies under a shared approach and stronger branding, making it easier to create impact with your technology assets and research. It will also help close the gap between real-world needs and research lines.
TANGO Project is a new initiative undertaken by a group of European organizations and institutions to fulfil one purpose: simplify the way developers approach the development of next-generation applications based on heterogeneous hardware architectures, configurations, and software systems, including heterogeneous clusters, chips, and programmable logic devices.
SLALOM Best Practice DOs & DON'Ts Guide on Cloud SLAs for Project Researchers (Oliver Barreto Rodríguez)
Sharing is Caring! SLALOM shares best practice DOs & DON'Ts Guide on Cloud SLAs for Project Researchers
The SLALOM project wants to share with the European research and scientific community some lessons learned from bridging research, EC initiatives, and ISO in SLA standardisation. It is now important to follow this advice when preparing new proposals for the next H2020 call.
These DOs and DON’Ts were formulated by the SLALOM consortium based on our experience working with research projects, ISO and EC initiatives in the course of the H2020 SLALOM project.
This presentation gives context on the main problems in today's IT market solved by MODAClouds.
MODAClouds focuses on three main innovative aspects, creating technology that:
- puts the focus on enabling Business-Driven QoS, influencing the way Cloud applications are created and operated
- enables DevOps methodologies
- facilitates living in a Multi-Cloud world
MODAClouds - Solving Top Cloud Problems
MODAClouds Project is a research project initiated to investigate various fields in Cloud Computing. The project was born with the objective of providing methods, a decision support system (DSS), and an open-source IDE and run-time environment to enhance the way we use multiple cloud services. In addition, the project leverages one key characteristic: guaranteeing quality of service in Multi-Cloud scenarios.
MODAClouds Multi-Cloud DevOps Toolbox is a set of tools and best-practice methods specifically created "for Clouds", for the most demanding Multi-Cloud scenarios. MODAClouds Toolbox is here to help developers and operators change and improve the way software is created and operated, in a more agile manner.
How to create expectations and sell the value of a project from minute zero:
focus on real needs and problems, not on the state of the art in technology. Read these three easy steps...
MODAClouds - Underpinning the Leap to DevOps Movement on Clouds scenarios (Oliver Barreto Rodríguez)
MODAClouds - Underpinning the Leap to DevOps Movement on Clouds scenarios is a white paper written to provide insights into the MODAClouds project and to help cloud players easily identify valuable usage.
Cloud Interoperability and Portability at Future Pre-FIA 2013 Multi-Clouds Wo... (Oliver Barreto Rodríguez)
This presentation was given by OPTIMIS Project Coordinator, Ana Juan, at the Multi-Clouds Workshop celebrated on the previous day of the Future Internet Assembly 2013 in Dublin.
This paper provides a quick overview of the OPTIMIS Toolkit from the perspective of the value provided to businesses and end users, such as organizations with or without a Cloud strategy, Cloud application developers, and Cloud service and infrastructure providers.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Smart TV Buyer Insights Survey 2024 by 91mobiles (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
3. Problem snapshot
SLA Technological Landscape
• A lot of ambiguities exist in the SLAs of Cloud providers
• The measurement/auditing process of an SLA cannot be done non-repudiably
  – i.e., the involved parties may be able to challenge the auditing of the SLOs
• Standard models are rare and are not widely used
• Differences between Cloud providers cannot be easily assessed
  – Absolute percentages cannot be compared among providers
4. Problem snapshot
Ambiguities in SLAs
• The definition of availability (as defined by providers) may encapsulate different formulas for its calculation
• The definition and calculation of availability may include different ways of identifying a failure, e.g.:
  – Response time less than a limit
  – Returned response within a string enumeration (i.e. a predefined range of string values)
• Preconditions apply
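This ambiguity can be made concrete with a small sketch (all data and formulas invented for illustration): two plausible availability definitions applied to the same monitoring trace disagree.

```python
# Hypothetical illustration: the same per-minute monitoring trace yields
# different "availability" figures depending on the formula the provider
# chooses. The trace and both formulas are invented for this example.

# One sample per minute over ten minutes: True = the check succeeded.
samples = [True, True, False, True, False, True, True, True, True, True]

# Formula A: availability = fraction of successful samples.
availability_a = sum(samples) / len(samples)

# Formula B: a minute counts as "down" only when it belongs to a run of
# at least two consecutive failed samples (a stricter failure definition).
down = sum(
    1 for i in range(len(samples))
    if not samples[i] and (
        (i > 0 and not samples[i - 1]) or
        (i + 1 < len(samples) and not samples[i + 1])
    )
)
availability_b = 1 - down / len(samples)
```

Here the two isolated failures give formula A an availability of 80%, while formula B reports 100%, even though both read the same data.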
5. Problem snapshot
Real-world example of ambiguity
• Ambiguity in the measurement process of the AWS EC2 SLA
• "Unavailable" and "Unavailability" mean:
  – When all of your running instances have no external connectivity
• Determination of external connectivity. How?
  – Internet layer: pinging (ICMP)?
    • Security threat
  – Application layer: endpoint checking?
    • Includes application downtime
    • Not exclusively the responsibility of AWS EC2
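As a sketch of the application-layer interpretation discussed above (an ICMP ping requires raw sockets and elevated privileges, so only the endpoint-checking variant is shown; host and port are placeholders, not real SLA-mandated endpoints):

```python
# Sketch of an application-layer "external connectivity" check, one of the
# two candidate interpretations above. Host and port are placeholders.
import socket

def reachable_tcp(host: str, port: int = 443, timeout: float = 2.0) -> bool:
    """Application-layer style check: can a TCP connection be opened?
    Note that this also reports failures caused by the application itself,
    which, as the slide notes, is not exclusively the provider's fault."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The choice between this check and an ICMP ping changes what counts as "unavailable", which is precisely why an undefined sampling condition makes the SLO repudiable.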
6. Problem snapshot
Examples of preconditions
• For any SLA to apply, a number of preconditions typically exist per provider
• Examples:
  – Deployment: A specified number of Availability Zones must be used
  – Deployment: Replication options must be used
  – Usage/Measurement: Unavailable resources must first be restarted
  – Usage/Measurement: The number of requests must be throttled
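A minimal sketch (field names and the zone threshold are invented) of modeling such preconditions as explicit predicates that gate whether the SLA applies at all:

```python
# Hypothetical model of SLA preconditions like those listed above.
from dataclasses import dataclass

@dataclass
class DeploymentState:
    availability_zones: int
    replication_enabled: bool
    unavailable_resources_restarted: bool

def sla_applies(d: DeploymentState, required_zones: int = 2) -> bool:
    # Each predicate mirrors one precondition from the slide.
    return (
        d.availability_zones >= required_zones   # multiple AZs in use
        and d.replication_enabled                # replication options used
        and d.unavailable_resources_restarted    # restart attempted first
    )
```

Making the preconditions machine-checkable in this way is one route to the non-repudiable auditing the project argues for.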
7. Problem snapshot
SLALOM Technical objectives
• To have a standard model for defining SLAs that eliminates ambiguities
• To facilitate the measurement, monitoring and enforcement of SLAs to achieve non-repudiability
• To abstract the SLA definition process (SLA → SLO → metric → sub-metric) so as to enable the application of metrics that allow for direct comparability
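The layered definition process in the third objective can be sketched as a simple data structure (class names invented; this is not the SLALOM or ISO schema itself): an SLA contains SLOs, each SLO references a metric, and a metric may be composed of sub-metrics.

```python
# Hypothetical sketch of the SLA -> SLO -> metric -> sub-metric layering.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    sub_metrics: list = field(default_factory=list)

@dataclass
class SLO:
    description: str
    metric: Metric
    target: float  # e.g. 99.95 (percent)

@dataclass
class SLA:
    provider: str
    slos: list = field(default_factory=list)

# Two providers expressing availability over the same metric hierarchy
# become directly comparable at the SLO level.
sample = Metric("sample")                        # concrete sampling process
availability = Metric("availability", [sample])  # built from the sample layer
sla = SLA("ProviderX", [SLO("monthly availability", availability, 99.95)])
```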
9. SLALOM@ISO
Interaction with ISO
• Mapped the SLALOM 3-layer initial approach to the ISO baseline model
  – ISO approach powerful at describing more complex metrics (e.g. MS Azure SLA)
• Demonstrated and suggested the ISO model extendibility for fully defining the way an SLO can be audited – ACCEPTED
  – Suggested the inclusion of an Extension class in the ISO model
  – Instantiate the ISO Extension class as the base Sample class of SLALOM
  – Introduce the SLALOM Sample layer for concretely defining the sampling process
  – In the latest revision of the draft ISO model all classes are extendable
• Applied on different types of objectives of commercial SLAs
  – GAE Datastore (PaaS)
  – AWS EC2 (IaaS)
  – Microsoft Azure (Storage)
• Showed applicability of the proposed approach for directly creating machine-understandable descriptions of the SLOs
10. SLALOM@ISO
ISO 19086-2 Metric model
• SLALOM two-fold contribution:
– ISO model classes parameters: machine understandable
– ISO model extension: definition of sampling process
[Figure: the metric model from the latest revision of the 19086-2 draft standard (to be made available in the forthcoming weeks), shown with the SLALOM proposed extension; all classes are extensible.]
11. SLALOM@ISO
SLALOM vs. ISO compliance
• ISO-compliant SLA
– Usage of the ISO fields (classes, parameters)
– SLA not necessarily fully defined
• SLALOM-compliant SLA
– ISO compliant
– Clear and well-defined
– Non-repudiable
– SLAs still not comparable among providers
13. Commercial SLAs @SLALOM
Amazon WS EC2
Level / definition | Expression | Notes
• Sample definition
– sc: UNDEFINED (assumed ‘ping’ -> ICMP) | The sampling condition is not defined in the Amazon EC2 SLA. The concrete wording is “when all of your running instances have no external connectivity”; however, the way to specify / measure “external connectivity” is not defined. For example, a customer could use a ping operation or a custom monitoring mechanism.
– Type of operation: ping | It is not defined how the condition of connectivity can actually be measured (e.g. via the ping operation mentioned previously).
• Boundary period and error definitions
– bp > 60 sec | The exact wording is “the percentage of minutes”, thus the period is 60 seconds.
– ec = 100% | Error condition reflecting that, for the entire boundary period, the resource must be continuously “unavailable”.
• Abstract metric definition
– availability < 99.95 % | Availability metric definition given the boundary period and error condition.
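Read this way, the EC2-style availability calculation can be sketched as follows. This is only an illustration of the interpretation above: since the SLA leaves the sampling condition undefined, a per-minute ping is assumed, and a minute counts as unavailable only when every sample in it fails (ec = 100%):

```python
# Sketch of the EC2-style reading above: a minute is unavailable only if the
# instance has no external connectivity for the entire minute (ec = 100%).
# The ping-based sampling is an assumption; the SLA leaves it undefined.

def ec2_style_availability(minute_samples):
    """minute_samples: one list of boolean ping results per minute."""
    unavailable = sum(1 for minute in minute_samples
                      if minute and not any(minute))  # every sample failed
    return 100.0 * (1 - unavailable / len(minute_samples))

# A 30-day month (43,200 minutes) with 22 fully unavailable minutes:
samples = [[True]] * 43_178 + [[False]] * 22
avail = ec2_style_availability(samples)
print(round(avail, 4))                 # 99.9491
print("SLA violated:", avail < 99.95)  # True: below the 99.95% threshold
```

Note that a minute with even one successful sample counts as available under this reading, which is exactly why the undefined sampling condition matters for non-repudiation.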
14. Commercial SLAs @SLALOM
Google AE Datastore
Level / definition | Expression | Notes
• Sample definition
– sc: INTERNAL_ERROR | Several sampling conditions are defined per type of operation. For example, the exact wording specifies “INTERNAL_ERROR, TIMEOUT, …” for API calls.
– Type of operation: API calls | Several types of operations are defined; an example is provided here.
• Boundary period and error definitions
– bp > 300 sec | The exact wording is “five consecutive minutes”.
– ec > 10% | Error condition reflecting that the error ratio is (exact wording) “ten percent Error Rate”.
• Abstract metric definition
– availability < 99.95 % | Availability metric definition given the boundary period and error condition.
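The Datastore-style calculation differs from EC2’s: unavailability is driven by an error rate over sampling periods rather than by total loss of connectivity. A hedged sketch of that reading, with the period bookkeeping simplified to precomputed durations and error rates:

```python
# Sketch of the Datastore-style reading above: a sampling period contributes
# to unavailability when it exceeds the 300-second boundary period AND its
# error rate exceeds the 10% threshold.

def unavailable_seconds(periods, bp=300, ec=0.10):
    """periods: list of (duration_sec, error_rate) per sampling period."""
    return sum(d for d, rate in periods if d > bp and rate > ec)

periods = [(600, 0.25),   # long enough and over the threshold -> counts
           (600, 0.05),   # error rate below 10% -> does not count
           (200, 0.50)]   # shorter than the boundary period -> does not count
print(unavailable_seconds(periods))   # 600
```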
15. Commercial SLAs @SLALOM
Microsoft Azure
Level / definition | Expression | Notes
• Sample definition
– sc = 60 sec | Several sampling conditions are defined per type of operation. For example, the exact wording specifies “Sixty (60) seconds” for PutBlockList and GetBlockList.
– Type of operation: PutBlockList and GetBlockList | Several types of operations are defined; an example is provided here.
• Boundary period and error definitions
– bp > 3600 sec | The exact wording is “given one-hour interval”.
– ec > 0% | Error condition reflecting that all periods should be taken into account for the availability metric evaluation; the exact wording is “is the sum of Error Rates for each hour”.
• Abstract metric definition
– availability < 99.9 % | Availability metric definition given the boundary period and error condition.
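For Azure Storage the granularity is hourly, and every hour with a non-zero error rate contributes (ec > 0%). One plausible rendering, assuming the monthly figure is taken as 100% minus the average of the hourly error rates:

```python
# Plausible sketch of the Azure-Storage-style reading above: error rates are
# computed per one-hour interval (bp = 3600 sec) and every non-zero hour
# contributes (ec > 0%); the month is scored as 100% minus the average rate.

def azure_style_availability(hourly_error_rates):
    """hourly_error_rates: error ratio in [0.0, 1.0] per one-hour interval."""
    average = sum(hourly_error_rates) / len(hourly_error_rates)
    return 100.0 * (1 - average)

hours = [0.0] * 718 + [0.5, 0.3]       # a 720-hour month with two bad hours
avail = azure_style_availability(hours)
print(round(avail, 3))                 # 99.889
print("SLA violated:", avail < 99.9)   # True: below the 99.9% threshold
```

Contrast this with the EC2 reading: two bad hours out of 720 already breach the Azure threshold, whereas availability under EC2’s all-or-nothing minutes degrades only when connectivity is lost entirely.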
17. SLA comparability
Overview
• Even though SLA descriptions may be aligned through the SLALOM / ISO
model, this does not mean that SLAs (or their parameters) will be
directly comparable
• Need for more abstract metrics, that result in direct comparisons
– SLA success ratio (Published* by Cloud WG of SPEC**)
– SLA strictness (Published* by Cloud WG of SPEC**)
– Standardised datasets
• SLALOM model enables the application of comparable metrics
– All SLA parameters are clearly and well defined
– The SLAs are machine readable
– Greatly simplifies the process and its automation
* Ready for Rain? A View from SPEC Research on the Future of Cloud Metrics
** SPEC: Standard Performance Evaluation Corporation
18. SLA comparability
Comparative metrics
• SLA success ratio
– Based on experience of usage of a service or provider
– In the course of time keep track of successful or violated SLAs and total SLAs
– Calculate the ratio: (Successful SLAs / Total SLAs)
• SLA strictness
– Extract static SLA parameters of importance for a given domain or application
– Assign weights to parameters and normalise
– Map these parameters to an arbitrary function
– Results in a comparative ranking of different SLAs
• Standardised datasets
– Define a set of failure scenarios
– Benchmark each provider SLA definition against the predefined scenario
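The SLA success ratio is the simplest of the three metrics to automate. A minimal sketch, with an illustrative tracker class of our own:

```python
# Minimal sketch of the SLA success ratio: keep track of successful and
# violated SLAs over time and report successful / total. The class name and
# interface are illustrative, not part of the SPEC publication.

class SlaTracker:
    def __init__(self):
        self.successful = 0
        self.total = 0

    def record(self, violated):
        self.total += 1
        if not violated:
            self.successful += 1

    def success_ratio(self):
        return self.successful / self.total if self.total else None

tracker = SlaTracker()
for violated in (False, False, True, False):   # three kept, one violated
    tracker.record(violated)
print(tracker.success_ratio())                 # 0.75
```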
20. Lessons Learnt
Do
1) Target metrics that are directly comparable among providers
2) Consider directly machine understandable descriptions via standardised
templates
3) Look into the ISO 19086 series of standards and adopt if applicable
4) Think outside the narrow Cloud box. With the advent of *aaS and the
emergence of IoT, SLAs may refer to services external to the data center or to
specific metrics needed by Cloud Services based on the individual Use Case
5) Consider composite services that may create chains of SLAs and their
interdependencies. To guarantee response times for service-support services,
consider downstream (reseller) and upstream (e.g. provider’s subcontractors)
actors’ requirements and the need to ‘float’ SLA clauses down the chain
6) Consider resource management as a key part of SLA upkeep and analysis process
7) Consider mechanisms that would allow providers, resellers and users to easily
monitor the SLA in a common and understandable way, even if not experts.
21. Lessons Learnt
Don’t
1) Consider that offered terms are equivalent, even if they initially seem to refer
to the same SLO. Always check the fine print for differences in how metrics are
actually calculated
2) Assume that SLAs are monitored by providers; in practice, monitoring and
claiming violations is typically left to the customer.
3) Leave end users out of the loop. Comprehensiveness and clarity of an SLA (or its
relevant metric) for non-experts should be a key target. Translate your metrics
into plain English if necessary.
4) Limit yourself to popular metrics (e.g. availability) in SLAs. Users are also
interested in more generic Quality of Experience (QoE) indexes such as stability
5) Expect the market to bend for you: fit in to current practice to the maximum
extent and if not possible, hone your value proposition
23. SLALOM contribution
Tender Evaluation
• Usable by various actors
– Adopters to specify their needs
– Providers to describe their value proposition
– Third parties (resellers/brokers) to combine and offer services and
suggest options
• Added value
– Application of comparative metrics
– Automation of the process
• Benefits
– Improve transparency
– Enhance efficiency
– Establish fairness
24. SLALOM contribution
Contract monitoring
• Benefits
– Achieve SLA non-repudiation
– Establish trust and transparency for service execution compliant to
the terms and proper violation management
– Enable automation of contract and performance management and
monitoring
– Aid the involvement of actors like trusted third parties offering
relevant services
25. SLALOM contribution
Your feedback needed
• SLALOM proposed specification / reference model already takes into account:
– Standardisation approaches and working groups outcomes
– Current SLAs and metrics offered by commercial Cloud providers
– Views expressed by Cloud providers and adopters
– Research outcomes
• Further feedback regarding applicability and practical usage of our model is more than welcome
• Please take the survey on IoT/Cloud metrics here:
https://docs.google.com/forms/d/1JmwDXyO_1hT9iR-lm1c3LCQu_zF64nf-uFnxBeGMv3g/viewform
27. SLALOM Project
SLALOM is a CSA financed by the European Commission under Grant agreement 644270
For more information on the initiative contact us:
@CloudSLAlom
www.SLALOM-Project.eu
SLALOM Project Coordinator
(daniel.field@atos.net)
29. Backup slide: SLA strictness example
Provider/Service | t | q (s1*q) | q’ (s2*q) | p (s3*p) | x | S | S’
Google Compute | 0 | 5 (1.00) | 5 (0.10) | 99.95 (0.50) | 0 | 0.50 | 1.60
Amazon EC2 | 0 | 1 (0.20) | 1 (0.02) | 99.95 (0.50) | 0 | 1.30 | 1.48
MS Azure Compute | 1 | 1 (0.20) | 1 (0.02) | 99.95 (0.50) | 0 | 2.30 | 2.48
• Extract static SLA parameters of importance for a given domain/application
– All these parameters (e.g. boundary period, error rates) are described in the SLALOM model
• Map these parameters to an arbitrary function, e.g. S = t + (1 - s1*q) + s3*p + x (S’ computed analogously with s2), where:
– q: size of the boundary period
– p: percentage of availability
– t: running time vs. overall monthly time (boolean), t ϵ {0,1}
– x: existence of performance metrics (boolean), x ϵ {0,1}
– si: normalisation factor for the continuous variables so that (s1*q) ϵ [0,1], (s2*q) ϵ [0,0.1] and (s3*p) ϵ [0,0.5]
• Resulting value may be compared between providers
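Fed with the already-normalised products (s1*q, s3*p) as given in the table, the strictness function reproduces the S column. A small worked example:

```python
# Worked example of the strictness function above,
# S = t + (1 - s1*q) + s3*p + x, taking the already-normalised products
# (s1*q, s3*p) directly from the table.

def strictness(t, s1_q, s3_p, x):
    return t + (1 - s1_q) + s3_p + x

providers = {
    "Google Compute":   (0, 1.00, 0.50, 0),
    "Amazon EC2":       (0, 0.20, 0.50, 0),
    "MS Azure Compute": (1, 0.20, 0.50, 0),
}
for name, args in providers.items():
    print(name, round(strictness(*args), 2))
# Google Compute 0.5, Amazon EC2 1.3, MS Azure Compute 2.3,
# matching the S column of the table.
```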
31. AWS EC2 SLA @SLALOM (1/9)
Amazon EC2
Level / definition | Expression | Notes
• Sample definition
– sc: UNDEFINED (assumed ‘ping’ -> ICMP) | The sampling condition is not defined in the Amazon EC2 SLA. The concrete wording is “when all of your running instances have no external connectivity”; however, the way to specify / measure “external connectivity” is not defined. For example, a customer could use a ping operation or a custom monitoring mechanism.
– Type of operation: ping | It is not defined how the condition of connectivity can actually be measured (e.g. via the ping operation mentioned previously).
• Boundary period and error definitions
– bp > 60 sec | The exact wording is “the percentage of minutes”, thus the period is 60 seconds.
– ec = 100% | Error condition reflecting that, for the entire boundary period, the resource must be continuously “unavailable”.
• Abstract metric definition
– availability < 99.95 % | Availability metric definition given the boundary period and error condition.
32. AWS EC2 SLA @SLALOM (2/9)
[Figure: the Amazon EC2 availability SLO expressed as SLALOM/ISO model blocks (SAMPLE_001, PARAM_001–003, QDT_001, UAP_001, BP_001, CFA_002, CONDITION), covering: sample definition and retrieval; unreachable sample specification; boundary period specification; unavailability interval definition and calculation; unavailability definition and calculation; billing period specification; availability definition and calculation; availability threshold specification; condition of SLA violation specification.]
33. AWS EC2 SLA @SLALOM (3/9)
• Examples of preconditions:
– Deployment: Number of Availability Zones used
– Deployment: Replication options used
– Usage/Measurement: Restarting of resources when unavailable
– Usage/Measurement: Applied Throttling of requests
• Practical suggestions:
– Strictly define the Rules class so that it captures the necessary preconditions for the SLA to apply
– Use the Note field as a placeholder for the actual SLA text that refers to a given block
34. AWS EC2 SLA @SLALOM (4/9)
[Figure: SAMPLE_001: sample definition and retrieval for the Amazon EC2 SLA; the sampling condition is undefined in the SLA text (“when all of your running instances have no external connectivity”), so a ping/ICMP operation is assumed.]
35. AWS EC2 SLA @SLALOM (5/9)
[Figure: PARAM_001 / PARAM_002: specification of the boundary period (bp > 60 sec, from “the percentage of minutes”) and the error condition (ec = 100%: the resource must be continuously “unavailable” for the entire boundary period), attached to the SAMPLE_001 block.]
36. AWS EC2 SLA @SLALOM (6/9)
• Calculation of Cloud Service Unavailability Interval (QDT_001), based on:
– The current sample (SAMPLE_001)
– The defined boundary period (PARAM_001 / PARAM_002)
– The definition of an unreachable sample
37. AWS EC2 SLA @SLALOM (7/9)
• Calculation of Cloud Service Unavailability (UAP_001), based on:
– The Cloud Service Unavailability Interval (QDT_001)
38. AWS EC2 SLA @SLALOM (8/9)
• Calculation of Cloud Service Availability (CFA_002), based on:
– The billing period (BP_001)
– The Cloud Service Unavailability (UAP_001)
41. GAE Datastore SLA @SLALOM(1/11)
Google AppEngine Datastore
Level / definition | Expression | Notes
• Sample definition
– sc: INTERNAL_ERROR | Several sampling conditions are defined per type of operation. For example, the exact wording specifies “INTERNAL_ERROR, TIMEOUT, …” for API calls.
– Type of operation: API calls | Several types of operations are defined; an example is provided here.
• Boundary period and error definitions
– bp > 300 sec | The exact wording is “five consecutive minutes”.
– ec > 10% | Error condition reflecting that the error ratio is (exact wording) “ten percent Error Rate”.
• Abstract metric definition
– availability < 99.95 % | Availability metric definition given the boundary period and error condition.
42. GAE Datastore SLA @SLALOM(2/11)
[Figure: the Google AppEngine Datastore availability SLO expressed as SLALOM/ISO model blocks (SAMPLE_001, PARAM_001–004, ER_001, DUR_001, QDT_001, UAP_001, BP_001, CFA_002, ASV_001), covering: sample definition and retrieval; unreachable sample values specification; sampling period duration definition and calculation; error rate definition and calculation; boundary period specification; error rate threshold specification; unavailability interval definition and calculation; unavailability definition and calculation; billing period specification; availability definition and calculation; availability threshold specification; condition of SLA violation specification.]
43. GAE Datastore SLA @SLALOM(3/11)
• Examples of preconditions:
– Deployment: Number of Availability Zones used
– Deployment: Replication options used
– Usage/Measurement: Restarting of resources when unavailable
– Usage/Measurement: Applied Throttling of requests
• Practical suggestions:
– Strictly define the Rules class so that it captures the necessary preconditions for the SLA to apply
– Use the Note field as a placeholder for the actual SLA text that refers to a given block
44. GAE Datastore SLA @SLALOM(4/11)
[Figure: SAMPLE_001: sample definition and retrieval for the Google AppEngine Datastore SLA; several sampling conditions are defined per type of operation, e.g. “INTERNAL_ERROR, TIMEOUT, …” for API calls.]
45. GAE Datastore SLA @SLALOM(5/11)
[Figure: PARAM_003: unreachable sample values specification, i.e. the pool of sample values (e.g. INTERNAL_ERROR, TIMEOUT) that mark a sample as a violation, attached to the SAMPLE_001 block.]
46. GAE Datastore SLA @SLALOM(6/11)
[Figure: PARAM_001 / PARAM_002: specification of the boundary period (bp > 300 sec, from “five consecutive minutes”) and of the error rate threshold (ec > 10%, from “ten percent Error Rate”).]
47. GAE Datastore SLA @SLALOM(7/11)
• Calculation of duration of sampling period (DUR_001):
– The period during which a number of samples was received
– Period duration calculated from the samples’ timestamps
• Calculation of actual Error Rate for the sampling period (ER_001):
– Number of violation samples / number of total samples
– Violation samples: samples containing values from a specific values pool (PARAM_003)
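The two calculations above can be sketched directly. The sample format and the value pool are illustrative; the SLA defines the actual error values (e.g. INTERNAL_ERROR, TIMEOUT):

```python
# Sketch of the two calculations above: the sampling period duration derived
# from sample timestamps, and the error rate as violation samples over total
# samples, where a violation sample carries a value from a given pool.

VIOLATION_VALUES = {"INTERNAL_ERROR", "TIMEOUT"}   # illustrative value pool

def period_duration(samples):
    """samples: list of (timestamp_sec, value) tuples."""
    timestamps = [t for t, _ in samples]
    return max(timestamps) - min(timestamps)

def error_rate(samples):
    violations = sum(1 for _, v in samples if v in VIOLATION_VALUES)
    return violations / len(samples)

samples = [(0, "OK"), (60, "TIMEOUT"), (120, "OK"), (180, "INTERNAL_ERROR"),
           (240, "OK"), (300, "OK"), (360, "TIMEOUT"), (420, "OK")]
print(period_duration(samples))       # 420 seconds
print(error_rate(samples))            # 0.375 (3 of 8 samples are violations)
```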
48. GAE Datastore SLA @SLALOM(8/11)
• Calculation of Unavailability Interval (QDT_001):
– IF [Sampling Period duration > Boundary Period]
– AND IF [Error Rate > Threshold (10%)]
– THEN [Unavailability Interval = Sampling Period duration]
49. GAE Datastore SLA @SLALOM(9/11)
• Calculation of Unavailability period (UAP_001):
– It equals the SUM of the Unavailability Intervals (QDT_001)
50. GAE Datastore SLA @SLALOM(10/11)
• Calculation of Cloud Service Availability (CFA_002), based on:
– The billing period (BP_001)
– The Cloud Service Unavailability (UAP_001)