Cloud computing provides economic benefits through common infrastructure, location independence, online connectivity, utility pricing, and on-demand resources. Pooled, standardized resources lower overhead costs and increase utilization through statistical multiplexing. Aggregating independent workloads reduces variability, lowering the cost per delivered resource. In reality, workloads may be correlated, limiting these statistical economies. However, mid-size providers can achieve scale benefits by aggregating independent demands. Large cloud providers utilize scale through low-cost components and automation.
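The variance-pooling argument above can be sketched with a toy simulation. The uniform demand distribution, sample counts, and all numbers here are illustrative assumptions, not taken from the document:

```python
import random
import statistics

def peak_to_mean(num_workloads, samples=10_000, seed=42):
    """Estimate the peak-to-mean ratio of aggregate demand when
    `num_workloads` independent workloads are pooled.  Each workload's
    per-interval demand is drawn uniformly from [0, 100) -- a toy
    stand-in for real, bursty demand."""
    rng = random.Random(seed)
    totals = [sum(rng.uniform(0, 100) for _ in range(num_workloads))
              for _ in range(samples)]
    # A provider must provision for the peak; revenue tracks the mean,
    # so a lower ratio means cheaper delivered resources.
    return max(totals) / statistics.mean(totals)

print(peak_to_mean(1))    # a single bursty workload: high ratio
print(peak_to_mean(50))   # fifty pooled workloads: ratio much closer to 1
```

This is the statistical-multiplexing effect in miniature: pooling independent demands smooths the aggregate, so less headroom capacity is needed per delivered unit. Correlated workloads would weaken the effect, as the text notes.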
Cloud computing deployment models include public, private, hybrid, and community clouds. A public cloud has infrastructure open for public use, owned by a business, academic, or government organization. Examples are Google App Engine and Amazon EC2. Workloads in a public cloud may be relocated anywhere and shared on multi-tenant machines, introducing reliability and security risks. Subscribers have limited visibility and control over their data security.
Cloud computing provides on-demand access to shared computing resources like networks, servers, storage, applications and services. It has essential characteristics like on-demand self-service, broad network access, resource pooling and rapid elasticity. The cloud services models include Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). The deployment models are private cloud, community cloud, public cloud and hybrid cloud.
This document discusses key concepts related to cloud adoption and cloud rudiments. For cloud adoption, it states that cloud is suitable for low priority or short term projects that have low availability requirements and short life spans. For cloud rudiments, it outlines essential cloud capabilities like resource aggregation, application services, self-service portals, and dynamic resource management. It also discusses concepts like reservation of services, allocation engines, reporting and accounting, and metering of resources.
This presentation provides an overview of cloud computing, including:
1. Cloud computing allows on-demand access to computing resources like servers, storage, databases, networking, software, analytics and more over the internet.
2. Key features of cloud computing include scalability, availability, agility, cost-effectiveness, and device/location independence.
3. Popular cloud storage services include Google Drive, Dropbox, and Apple iCloud which offer free basic storage with options to pay for additional storage.
The document discusses cloud resource management and cloud computing architecture. It covers the following key points:
Cloud architecture can be broadly divided into the front end, which consists of interfaces and applications for accessing cloud platforms, and the back end, which comprises resources for providing cloud services like storage, virtual machines, and security mechanisms. Common cloud service models include infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Virtualization techniques allow for the sharing of physical resources among multiple organizations by assigning logical names to physical resources and providing pointers to access them.
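The logical-name indirection described above can be illustrated with a small sketch. The class, method, and resource names are invented for the example:

```python
class VirtualResourceMap:
    """Toy illustration of virtualization's indirection: consumers refer
    to resources by logical name; the platform resolves each name to a
    physical resource and can remap it without the consumer noticing."""

    def __init__(self):
        self._table = {}  # logical name -> physical resource id

    def attach(self, logical, physical):
        self._table[logical] = physical

    def resolve(self, logical):
        return self._table[logical]

    def migrate(self, logical, new_physical):
        # Remap transparently, e.g. when a VM is moved to another host.
        self._table[logical] = new_physical

vmap = VirtualResourceMap()
vmap.attach("vm-orders-db", "host-17/disk-3")
vmap.migrate("vm-orders-db", "host-22/disk-1")   # consumer is unaffected
print(vmap.resolve("vm-orders-db"))
```

The design point is that every access goes through the lookup table, which is exactly what lets multiple organizations share physical resources behind stable logical names.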
This document provides an overview of cloud computing. It defines cloud computing as network-based computing that takes place over the internet using integrated hardware, software, and internet infrastructure. Cloud computing is characterized by services being remotely hosted and available from anywhere, and having a utility-based payment model. The document outlines the three main cloud service models: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). It also discusses some of the opportunities of cloud computing, such as flexibility and scalability, as well as advantages like lower costs, improved performance, and unlimited storage. Finally, it briefly introduces the different types of cloud models, including private, hybrid, and public clouds.
Key benefits of cloud computing:
Accessibility: Cloud computing enables access to applications and data from any location worldwide and from any device with an internet connection.
Cost savings: Cloud computing gives businesses scalable computing resources on demand, sparing them the cost of acquiring and maintaining their own infrastructure.
Security: Cloud providers, especially those offering private cloud services, strive to implement strong security standards and procedures to protect clients' data stored in the cloud.
Disaster recovery: Cloud computing offers small, medium, and large enterprises an efficient way to back up and restore their data and applications quickly and reliably.
Cloud computing provides dynamically scalable resources as a service over the Internet. It addresses problems with traditional infrastructure like hard-to-scale systems that are costly and complex to manage. Cloud platforms like Google Cloud Platform provide computing services like Compute Engine VMs and App Engine PaaS, as well as storage, networking, databases and other services to build scalable applications without managing physical hardware. These services automatically scale as needed, reducing infrastructure costs and management complexity.
This document discusses cloud computing, defining it as storing and accessing data and programs over the Internet instead of a computer's hard drive. It describes the types of cloud computing including public, private, hybrid, and community clouds. The advantages of cloud computing are reduced costs, increased storage, flexibility, mobility, and automation. Potential applications include word processing, customized programs, and data storage. The document also outlines some disadvantages like being unable to access the cloud without an Internet connection.
Azure was announced in October 2008 and released on 1 February 2010 as Windows Azure, before being renamed Microsoft Azure on 25 March 2014. Along with Amazon Web Services, Azure is considered a leader in the IaaS field.
Microsoft Azure is an open and flexible cloud platform that enables you to quickly build, deploy, and manage applications across a global network of Microsoft-managed datacenters. You can build applications using any language, tool, or framework. And you can integrate your public cloud applications with your existing IT environment.
This definition tells us that Microsoft Azure is a cloud platform, which means you can use it for running your business applications, services, and workloads in the cloud. But it also includes some key words that tell us even more:
Open: Microsoft Azure provides a set of cloud services that allow you to build and deploy cloud-based applications using almost any programming language, framework, or tool.
Flexible: Microsoft Azure provides a wide range of cloud services that let you do everything from hosting your company’s website to running big SQL databases in the cloud. It also includes features that help deliver high performance and low latency for cloud-based applications.
Microsoft-managed: Microsoft Azure services are currently hosted in several datacenters spread across the United States, Europe, and Asia. These datacenters are managed by Microsoft and provide expert global support on a 24x7x365 basis.
Compatible: Cloud applications running on Microsoft Azure can easily be integrated with on-premises IT environments that utilize the Microsoft Windows Server platform.
It provides both PaaS and IaaS services and supports many different programming languages, tools, and frameworks, including both Microsoft-specific and third-party software and systems.
Cloud computing provides computation, software, data access, and storage via the internet without requiring end users to know or manage the underlying infrastructure. It describes a new consumption model in which applications are delivered through the cloud and can be accessed from anywhere with internet access. Cloud computing shares characteristics with grid computing in that applications can run anywhere in the cloud without concern for where they are physically located.
Modern Network Operations with no Myths on SaaS, IaaS and PaaS discusses cloud computing characteristics such as massive, abstracted infrastructure and dynamic allocation of applications. It defines cloud services as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The document also outlines cloud architecture types including public, private, and hybrid clouds. It analyzes the cloud computing market and opportunities for enterprises and software developers in utilizing public and private cloud services.
This document provides a history and overview of Microsoft Azure. It describes how Azure began with a focus on scalable cloud services and has expanded to include infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS) offerings. The document also outlines Azure's computing and storage services, pricing models, and timeline of features and releases from 2008 to 2010.
Kubernetes is an open-source system for managing containerized applications across multiple hosts. It includes key components like Pods, Services, ReplicationControllers, and a master node for managing the cluster. The master maintains state using etcd and schedules containers on worker nodes, while nodes run the kubelet daemon to manage Pods and their containers. Kubernetes handles tasks like replication, rollouts, and health checking through its API objects.
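The API objects the summary mentions are declarative records. As a rough illustration, a minimal Pod can be written as a Python dict mirroring its YAML manifest; the top-level fields (apiVersion, kind, metadata, spec) are the standard ones, while the name and image are illustrative:

```python
# Minimal Pod object in the shape the Kubernetes API accepts.
# The scheduler on the master assigns it to a node; that node's
# kubelet then runs and supervises the listed containers.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {
        "containers": [
            {
                "name": "web",
                "image": "nginx:1.25",          # illustrative image
                "ports": [{"containerPort": 80}],
            }
        ]
    },
}
```

A ReplicationController wraps a template like this plus a replica count and a label selector, which is how Kubernetes implements the replication and rollout behavior described above.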
This document summarizes an AWS symposium held in Washington DC on June 25-26, 2015. It discusses how AWS started by providing internal infrastructure for Amazon and has grown to serve over 1 million active customers globally across 11 regions and 29 availability zones. The document outlines AWS's broad range of services including compute, storage, databases, analytics and more and how its experience, service breadth, pace of innovation and global footprint set it apart in the cloud market.
The document discusses the top 10 cloud service providers:
1. Amazon EC2 provides scalable computing resources accessed over the internet, with customers paying only for what they use.
2. Verizon offers vCloud Express which provides flexible and on-demand computing resources through an intuitive web console.
3. IBM provides private, hybrid, and public cloud solutions including infrastructure, platforms and software as a service.
It then briefly describes each of the top 10 providers and their key cloud computing offerings.
The document presents a presentation on cloud computing. It opens with an outline of the topics to be covered: definitions and history of cloud computing, its components and characteristics, service models (SaaS, PaaS, and IaaS), types of clouds, cloud architecture, properties, security, operating systems, applications, and advantages and disadvantages. It then works through each of these topics in turn.
Analyze key aspects to be considered before embarking on your cloud journey. The presentation outlines the strategies, approach, and choices that need to be made, to ensure a smooth transition to the cloud.
This document discusses quality of service (QoS) aspects of cloud computing, including QoS management, auto scaling, load balancing, and resource scheduling. It provides details on each of these topics: for QoS management, it lists the phases involved; for auto scaling, it describes scaling resources up or down according to user needs; for load balancing, it discusses algorithms such as batch-mode and online-mode heuristic scheduling; and for resource scheduling, it outlines algorithms such as genetic, bee colony, ant colony, and workflow scheduling. The document aims to explain how these techniques help provide quality service in cloud computing environments.
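As one concrete instance of batch-mode heuristic scheduling, here is a sketch of the classic min-min heuristic; the task lengths and machine count are illustrative, and the document itself gives no implementation:

```python
def min_min_schedule(task_lengths, num_machines):
    """Batch-mode min-min heuristic: repeatedly pick the task whose
    best achievable completion time is smallest, and assign it to the
    machine that achieves it.  Returns the task->machine assignment
    and each machine's finish time."""
    ready = [0.0] * num_machines           # time each machine frees up
    assignment = {}
    remaining = dict(enumerate(task_lengths))
    while remaining:
        # Minimum completion time over all (task, machine) pairs.
        ct, task, machine = min(
            (ready[m] + length, t, m)
            for t, length in remaining.items()
            for m in range(num_machines))
        ready[machine] = ct
        assignment[task] = machine
        del remaining[task]
    return assignment, ready

# Five tasks of varying length scheduled onto two machines.
assignment, finish = min_min_schedule([4, 2, 7, 1, 3], 2)
print(assignment, finish)
```

Min-min favors short tasks, which keeps machines busy early but can delay long tasks; its counterpart max-max (and online-mode heuristics that place each task as it arrives) trade off the opposite way.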
T-Systems is an ICT service provider that offers cloud-based solutions for business applications from its 75 data centers globally. It leverages cloud computing by delivering services from its data centers while ensuring solutions comply with security and legal requirements. T-Systems provides dynamic ICT services through standardized, automated, and modular cloud platforms to help companies launch new services and products flexibly. It offers core cloud computing, storage, and communication modules as well as dynamic applications for enterprises around areas like communications, ERP, development, and devices. One example is how T-Systems provided a flexible private cloud infrastructure service for a furniture manufacturer to scale its IT resources up or down based on seasonal demand changes.
Cloud Computing: Principles and Paradigms, 1. Introduction (Majid Hajibaba)
The document is a presentation on cloud computing that covers its principles, paradigms, and various models. It defines cloud computing, discusses its roots in technologies like grid computing and virtualization, and describes the different layers including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It also covers deployment models, desired features, infrastructure management challenges, and examples of cloud providers like Amazon Web Services.
Cloud computing is the on-demand delivery of IT resources and applications via the Internet with pay-as-you-go pricing. It evolved from earlier technologies like grid computing and utility computing by providing greater ease of use and on-demand scaling. A cloud broker acts as an intermediary between cloud service providers and customers, providing a unified interface and moving workloads between public and private clouds for improved performance and redundancy.
Cloud Computing Environment Using Cluster as a Service (Anusuya T K)
The document discusses enhancing cloud computing environments using a cluster as a service (CaaS). It first provides background on cloud computing elements like virtualization and service-oriented architecture. It then summarizes existing cloud services from Amazon (EC2), Google (App Engine), Microsoft (Azure), and Salesforce. The remainder of the document proposes a CaaS model that would allow dynamic discovery, selection, and use of clusters through a standardized interface using stateful web services and dynamic attributes. Key components described include cluster specification, discovery, selection, job submission, monitoring, and result collection.
AWS or Azure or Google Cloud | Best Cloud Platform | Cloud Platform Comparison (Mariya James)
Which cloud platform is the best among AWS, Azure, and Google Cloud? Read the complete, detailed comparison of services, accessibility, pricing, and more, and choose the best cloud platform for your business.
This document discusses Service Level Agreements (SLAs) which define the level of service expected between a service provider and consumer. It covers what an SLA is, the contents of an SLA including service definitions, responsibilities, metrics, auditing and remedies. It describes different types of SLAs and considerations for designing a good SLA like meeting agreements and internal operational level agreements. Key SLA requirements and metrics for monitoring and auditing performance are also outlined such as availability, response time, and resolution time.
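The availability metric mentioned above is easy to make concrete. The sketch below converts between measured uptime and the downtime budget implied by an availability target; it is a standard SLA calculation, not code from the document:

```python
def availability_pct(uptime_minutes, total_minutes):
    """Availability as tracked in an SLA: the fraction of the
    measurement window the service was usable, as a percentage."""
    return 100.0 * uptime_minutes / total_minutes

def allowed_downtime_minutes(availability_target, total_minutes):
    """Downtime budget implied by an availability target, e.g. how
    many minutes a '99.9%' service may be down in the window."""
    return total_minutes * (1 - availability_target / 100.0)

month = 30 * 24 * 60                        # minutes in a 30-day month
print(allowed_downtime_minutes(99.9, month))   # roughly 43 minutes/month
print(allowed_downtime_minutes(99.99, month))  # roughly 4.3 minutes/month
```

The tenfold shrinkage of the downtime budget for each extra "nine" is why availability targets, measurement windows, and exclusions need to be pinned down precisely in the SLA.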
Webinar presentation July 28, 2016
Do you really know the implications for your business of all the terms and conditions listed in the agreements that a public cloud service provider asks you to sign? Public Cloud Service Agreements: What to Expect and What to Negotiate, Version 2.0 was written to help you, the customer, understand the meaning of these terms, obtain clarifications, and sometimes get stronger commitments. This white paper complements the Cloud Standards Customer Council’s Practical Guide to Cloud Service Agreements but goes deeper, based on analyzing dozens of actual agreements. Version 2.0 reflects the evolution of the market, the growing concerns about privacy, the development of hybrid clouds, and more. Join several of the paper’s co-authors who will share best practices to evaluate competing offers.
Read the CSCC's deliverable here: http://www.cloud-council.org/deliverables/public-cloud-service-agreements-what-to-expect-and-what-to-negotiate.htm
The document discusses cloud security based on a survey of cloud providers. Customers' biggest concerns with cloud computing are security, privacy, and compliance. To address these concerns, service level agreements should provide clarity around security, data encryption, privacy, retention, regulatory compliance, transparency, and performance monitoring. While cloud introduces some new considerations, most security issues are not unique to cloud. Steps taken by cloud providers to improve security include better threat detection, encryption, and access restrictions.
A cloud provisioning contract is the fundamental agr.docx
A cloud provisioning contract is the fundamental agreement between the cloud consumer and cloud provider that encompasses the contractual terms and conditions of their business relationship.
CLOUD PROVISIONING CONTRACT STRUCTURE
A cloud provisioning contract is a legally binding document that defines rights, responsibilities, terms, and conditions for a scope of provisioning by a cloud provider to a cloud consumer.
• Technical Conditions – specifies the IT resources being provided and their corresponding SLAs
• Economic Conditions – defines the pricing policy and model with cost metrics, established pricing, and billing procedures
• Terms of Service – provides the general terms and conditions of the service provision, which are usually composed of the following five elements:
- Service Usage Policy – defines acceptable service usage methods, usage conditions, and usage terms, as well as suitable courses of action in response to violations
- Security and Privacy Policy – defines terms and conditions for security and privacy requirements
- Warranties and Liabilities – describes warranties, liabilities, and other risk reduction provisions including compensation for SLA non-compliance
- Rights and Responsibilities – outlines the obligations and responsibilities of the cloud consumer and cloud provider
- Contract Termination and Renewal – defines the terms and conditions of terminating and renewing the contract
Cloud provisioning contracts are usually based on templates and provided online, where they can be accepted by cloud consumers with the click of a button. These contracts are, by default, generally geared to limiting the cloud provider’s risk and liability.
Terms of Service
This part defines the general terms and conditions that can be broken down into the following sub-sections:
1. Service Usage Policy
A service usage policy, or acceptable use policy (AUP), comprises definitions of acceptable methods of cloud service usage, including clauses with stipulations such as:
• The cloud consumer shall be solely responsible for the content of the transmissions made through cloud services.
• Cloud services shall not be used for illegal purposes, and any transmitted materials shall not be unlawful, defamatory, libelous, abusive, harmful, or otherwise deemed objectionable by third parties or legal regulations.
• Cloud service usage shall not infringe on any party’s intellectual property rights, copyrights, or any other right.
• Transmitted and stored data shall not contain viruses, malware, or any other harmful content.
• Cloud services shall not be used for the unsolicited mass distribution of e-mail.
Some elements of the service usage policy that cloud consumers may need to review and negotiate include:
• Mutuality of Conditions – Conditions should be identically applicable to the cloud consumer and cloud provider, since the actions and business operations of one party directly impact the operations of the other.
• Policy Update Conditions – Ev.
SLA Basics describes service level agreements (SLAs) which define non-functional requirements for cloud services. SLAs consist of service level objectives (SLOs) evaluated using key performance indicators (KPIs) with thresholds. Automated SLA protection uses policy rules to evaluate KPIs periodically and trigger actions if conditions are met. SLAs are important in cloud computing to ensure customers receive the expected quality of service, as cloud providers may overcommit resources leading to variable performance without proper SLAs.
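The automated SLA protection described above can be sketched in a few lines: each SLO pairs a KPI with a threshold, and a policy rule evaluates the latest readings and fires an action if the condition is met. A minimal sketch, assuming hypothetical KPI names, thresholds, and readings:

```python
# Minimal sketch of automated SLA protection: an SLO is a KPI plus a
# threshold; a policy rule periodically evaluates the KPIs and triggers
# an action (alert, scale-out, ...) on violation. All names and numbers
# here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SLO:
    kpi_name: str
    threshold: float
    violated: Callable[[float, float], bool]  # True when the SLO is breached

def evaluate(slos, kpi_readings, on_violation):
    """Check each SLO against the latest KPI reading; fire the policy
    action for every violation and return the breached KPI names."""
    triggered = []
    for slo in slos:
        value = kpi_readings[slo.kpi_name]
        if slo.violated(value, slo.threshold):
            on_violation(slo, value)
            triggered.append(slo.kpi_name)
    return triggered

# Example policy: response time must stay under 200 ms,
# availability must stay at or above 99.9 %.
slos = [
    SLO("response_time_ms", 200.0, lambda v, t: v > t),
    SLO("availability_pct", 99.9, lambda v, t: v < t),
]
readings = {"response_time_ms": 250.0, "availability_pct": 99.95}
breaches = evaluate(slos, readings,
                    lambda s, v: print(f"violated {s.kpi_name}: {v}"))
print(breaches)  # → ['response_time_ms']
```

In a real deployment the `evaluate` loop would run on a timer against a monitoring feed, and the action would be a remediation hook rather than a print.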
This document discusses cloud computing and service level agreements. It begins by defining different types of cloud computing models like SaaS, PaaS, and IaaS. It then discusses how cloud computing differs from traditional on-premise storage by addressing issues like data location, custody, and multi-tenancy. The document outlines important considerations for service level agreements including security, data encryption, privacy, regulatory compliance, and transparency. It emphasizes that SLAs should define metrics and responsibilities to ensure the cloud provider delivers the promised level of service. Finally, it cautions that moving to the cloud requires understanding issues like security, portability, accessibility, and data location laws.
The document discusses compliance and certification in the public cloud. It introduces the Cloud Security Alliance's Open Certification Framework, which provides three levels of trust and assurance for cloud consumers. Level 1 is the CSA STAR registry, a public registry of cloud provider self-assessments. Level 2 is CSA STAR Certification, which evaluates a cloud provider's information security management system. Level 3 is CSA STAR Attestation, based on the AICPA SOC 2 attestation standard supplemented by the Cloud Controls Matrix, which provides assurance on a cloud provider's controls and processing. The framework aims to build trust between cloud providers and consumers through transparency, independent verification, and flexible, incremental certification.
IT Equipment and Services Agreements: Contractual Pitfalls and How to Avoid Them
Information technology equipment and services are rapidly becoming critical to every aspect of a public entity’s daily operation, from providing free Wi-Fi in downtown areas to upgrading a police department’s dispatch system, as well as creating large scale systems for public records data retention and enhancing a city’s fiber-optic network to provide ultra-high speed internet use for businesses.
Public scrutiny of an agency’s purchase and implementation of IT equipment and services is very high because of the costs and public interest, and members of the community often directly use the equipment and services (such as on-line communications with municipal entities) or it directly affects their health and safety (such as in-vehicle communications systems for fire protection services).
This presentation is designed to help public entities avoid the potential pitfalls in IT agreements and incorporate best practices when negotiating and managing IT contracts. Meyers Nave Principal Richard Pio Roda provides real-life examples of a variety of IT equipment and services agreements that he has negotiated on behalf of cities and special districts. He explains the primary areas of contractual risk and share advice on best practices for addressing each one. Topics he covers include:
- Key contractual differences and risks between purchasing, leasing and licensing
- Special considerations for Software as a Service (SaaS) and Infrastructure as a Service (IaaS)
- Long-term service agreements – performance guarantees, prolonged start-up risks, warranties vs. scheduled maintenance vs. extra work, termination damages
- New terms and conditions in software procurement and computer system integration services contracts that improve the security and protection of the public entity
This presentation series describes concepts from ITIL best practice. Although ITIL was developed for IT service management, its concepts are broader than ITSM and can be applied in other areas that deliver services.
This document discusses definitions and concepts related to cloud computing. It begins by looking at definitions from NIST and WhatIs.com, which describe cloud computing as enabling on-demand access to configurable computing resources via a network. The document then covers central ideas like utility computing, service-oriented architecture (SOA), and service level agreements (SLAs). It discusses properties and characteristics of clouds like scalability, availability, reliability, manageability, interoperability, performance, and accessibility. Finally, it delves into concepts that enable these properties, such as virtualization, parallel computing, load balancing, fault tolerance, and system monitoring.
Chapter 1 Introduction to Cloud Computing
The document discusses cloud computing, including definitions from various sources, properties and characteristics of cloud computing, and service and deployment models. It defines cloud computing as on-demand access to shared configurable computing resources over the internet. The key properties discussed are high scalability, availability, reliability, manageability, interoperability, accessibility, and optimization through techniques like virtualization, parallel computing, and load balancing. It outlines service models of SaaS, PaaS, and IaaS and deployment models of private, public, hybrid and community clouds.
Cloud computing allows users to access computing resources over the network. It has several key characteristics including on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. There are three main service models (Software as a Service, Platform as a Service, and Infrastructure as a Service) and four deployment models (private cloud, community cloud, public cloud, and hybrid cloud). Achieving high performance, availability, and manageability in cloud computing requires techniques like virtualization, parallel processing, fault tolerance, load balancing and automation.
Transforming cloud security into an advantage
- Moshe Ferber is an experienced information security professional who has founded and invested in several cloud security companies.
- The document discusses important concepts in cloud security including creating trust between cloud providers and customers, security best practices in development and operations, and compliance with standards and regulations.
- Key responsibilities in cloud security include securing data, applications, users and identities across the entire lifecycle from a shared responsibility model between providers and customers.
This presentation shows how the public sector can benefit from the AWS cloud, including best practices for specifying and selecting a cloud provider for public services. Presented at the Public Sector Summit 2015.
This document presents an overview of security issues in cloud computing. It begins by introducing cloud computing characteristics and models. It then identifies three main problems that create security issues: loss of control, lack of trust, and multi-tenancy issues. Several approaches are proposed to help address these problems, such as increasing monitoring and access control for customers, utilizing multiple clouds, and improving isolation between tenants. The document concludes by emphasizing the need to identify cloud computing security problems in terms of these three issues and consider approaches to minimize each one.
The document provides an overview of cloud computing and introduces a guide and handbook for auditing cloud computing risks. It defines cloud computing, describes service models (IaaS, PaaS, SaaS), and identifies key risk areas for cloud computing including service provider risks, technical risks, external/overseas risks, management/oversight risks, and security/connectivity/privacy risks. It states the guide describes these risks and mitigation strategies, while the handbook provides audit-related questions to help IT auditors evaluate if an organization is properly managing risks and its cloud computing vendor. Next steps include members sharing audit questions from cloud computing audits to update the guide and handbook.
Sensor cloud computing integrates large-scale sensor networks with cloud computing infrastructures to collect and process data from various sensor networks. It enables large-scale data sharing and collaboration among users and applications on the cloud. By using sensors as an interface between the physical and cyber worlds, it delivers cloud services via sensing applications and provides a truly pervasive computing environment.
The document discusses Google Cloud Platform (GCP), which provides a set of cloud computing services including computing, storage, databases, networking, big data, machine learning, and IoT. Some key benefits of GCP include running applications on Google's global infrastructure, focusing on product development rather than system administration, mixing and matching different cloud services, and scaling applications easily to handle millions of users in a cost-effective way. GCP offers both fully managed platform services and flexible virtual machines. It also provides storage, database, and networking services to store and access data.
This document discusses resource management in cloud computing. It begins by defining different types of resources, including physical resources like computers and disks, and logical resources like execution and communication applications. It then discusses the objectives and challenges of resource management, such as scalability, quality of service, and reducing overheads and latency. The document outlines various aspects of resource management including provisioning, allocation, mapping, adaptation, discovery, brokering, estimation, and modeling. It also discusses approaches to resource provisioning, allocation, mapping, adaptation and lists some key performance metrics.
This document discusses resource management in cloud computing and strategies for improving energy efficiency. It describes different resource types, including physical and logical resources. It then discusses how resource management controls access to cloud capabilities. The document outlines how data center power consumption is growing rapidly and motivating the need for green computing approaches. These include power-aware and thermal-aware scheduling of virtual machines, optimized data center design, and minimizing the size of virtual machine images to reduce energy usage. The overall summary advocates an integrated green cloud framework combining various efficiency techniques.
The document describes MapReduce, a programming model developed at Google for processing large datasets in a distributed computing environment. It discusses how MapReduce works, with mappers processing input data in parallel to generate intermediate key-value pairs, and reducers then merging all intermediate values associated with the same key. Three examples of MapReduce problems and their solutions are provided to illustrate how MapReduce can be used to calculate averages, group data by gender to find totals and averages, and categorize words by length.
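The map/shuffle/reduce flow described above can be imitated in-process. A toy sketch, with invented records, that averages values per key the way the group-by-gender example does (mappers emit key-value pairs, a shuffle groups them, reducers merge each group):

```python
# Toy in-process sketch of the MapReduce model: mappers emit (key, value)
# pairs, the shuffle groups values by key, and reducers merge each group.
# The records and the gender/salary schema are invented for illustration.
from collections import defaultdict

def map_phase(records, mapper):
    pairs = []
    for record in records:
        pairs.extend(mapper(record))  # each mapper call may emit many pairs
    return pairs

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)     # gather all values for the same key
    return groups

def reduce_phase(groups, reducer):
    return {key: reducer(key, values) for key, values in groups.items()}

records = [("alice", "F", 100), ("bob", "M", 80), ("carol", "F", 120)]
mapper = lambda r: [(r[1], r[2])]            # emit (gender, salary)
reducer = lambda k, vs: sum(vs) / len(vs)    # average per key
result = reduce_phase(shuffle(map_phase(records, mapper)), reducer)
print(result)  # → {'F': 110.0, 'M': 80.0}
```

In a real MapReduce framework the map and reduce calls run on many machines in parallel and the shuffle moves data across the network; the logical dataflow is the same.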
1. The document discusses the economic properties of cloud computing including common infrastructure, location independence, online connectivity, utility pricing, and on-demand resources.
2. It provides details on utility pricing models and how cloud computing can be cheaper than owning resources depending on the ratio of peak to average demand.
3. On-demand cloud resources allow organizations to dynamically scale up or down based on changing demand levels without penalty, which provides significant economic benefits over static resource provisioning.
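The peak-to-average argument in point 2 can be made concrete: owned capacity must be provisioned for peak demand, while pay-per-use cost tracks average demand times a utility premium, so the cloud wins whenever the premium is below the peak-to-average ratio. A sketch with illustrative numbers (the demand series and the premium factor are assumptions):

```python
# Hedged sketch of the utility-pricing comparison: owning means paying
# for peak capacity all period long; renting means paying for average
# usage at a premium over the owned unit cost.
def owning_cost(peak_demand, unit_cost):
    # provision (and pay for) peak capacity for the whole period
    return peak_demand * unit_cost

def cloud_cost(avg_demand, unit_cost, utility_premium):
    # pay only for what is used, at a premium over the owned unit cost
    return avg_demand * unit_cost * utility_premium

demand = [10, 12, 15, 60, 14, 11]   # e.g. hourly server demand with a spike
peak, avg = max(demand), sum(demand) / len(demand)
premium = 2.0                       # cloud unit price / owned unit price

print(f"peak/average ratio: {peak / avg:.2f}")
print("cloud cheaper" if cloud_cost(avg, 1.0, premium) < owning_cost(peak, 1.0)
      else "owning cheaper")
# The cloud is cheaper whenever the utility premium < peak/average ratio.
```

With the spiky demand above the peak-to-average ratio is roughly 2.95, so even a 2x utility premium leaves the cloud cheaper; for flat demand the ratio approaches 1 and owning wins.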
The document discusses service level agreements (SLAs) in cloud computing. It defines an SLA as a formal contract between a service provider and consumer that defines the level of availability and performance guaranteed by the provider. SLAs contain service level objectives that are measurable conditions used to select cloud providers. The document provides two example problems, the first calculating if an availability guarantee was violated given total outage time, and the second calculating the effective cost for a service given availability percentages and outage durations were below guarantees.
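The first example problem type can be worked through with a short calculation: convert the observed outage into an availability percentage and compare it against the guarantee. The figures below are illustrative assumptions, not the original problem's numbers:

```python
# Worked sketch of an availability-guarantee check: given total outage
# in a billing period, compute observed availability and decide whether
# the SLA was violated. All figures are illustrative assumptions.
def availability_pct(period_hours, outage_hours):
    return 100.0 * (period_hours - outage_hours) / period_hours

guarantee = 99.9        # provider guarantees 99.9 % uptime
period = 30 * 24        # 30-day month, in hours
outage = 1.5            # observed downtime in hours

observed = availability_pct(period, outage)
max_outage = period * (1 - guarantee / 100)   # allowed outage: 0.72 h
violated = observed < guarantee

print(f"observed availability: {observed:.3f}% "
      f"(allowed outage {max_outage:.2f} h)")
print("SLA violated" if violated else "SLA met")
```

Here 1.5 h of downtime in a 720-hour month yields about 99.792 % availability, below the 99.9 % guarantee (which permits at most 0.72 h of outage), so the SLA is violated and any contractual remedy, such as a service credit, applies.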
This document discusses security issues in collaborative Software as a Service (SaaS) cloud environments. It presents four objectives: 1) developing a framework to select a trustworthy SaaS cloud provider, 2) recommending access requests from anonymous users, 3) mapping authorized permissions to local roles, and 4) dynamically detecting and removing access policy conflicts. The document outlines challenges in securing loosely coupled collaborations in clouds and motivates addressing security in SaaS cloud delivery through risk estimation, access conflict mediation, and establishing trust in cloud service providers.
The document summarizes research on security risks in cloud computing due to multi-tenancy. It discusses how researchers were able to:
1) Map the physical layout of Amazon EC2 instances to determine placement parameters to achieve co-residence with target VMs.
2) Verify co-residence through network checks and a covert channel with over 60% success.
3) Cause co-residence by launching many probes or targeting recently launched instances, achieving up to 40% success.
4) Exploit co-residence to measure cache usage and network traffic, allowing for load monitoring and covert channels to leak information.
The document discusses security issues related to cloud computing. It begins by defining cloud computing and its economic advantages for consumers and providers. However, security concerns are a barrier to wider adoption of cloud computing. The document then examines seven specific security risks identified by Gartner: privileged user access, regulatory compliance and audit, data location, data segregation, recovery, investigative support, and long-term viability. Additional security issues discussed include virtualization, access control, application security, and data life cycle management. Throughout, the document emphasizes the importance of customers understanding security responsibilities and having visibility into a cloud provider's security practices.
This document discusses cloud computing security and covers the following key points:
Cloud security involves ensuring confidentiality, integrity, and availability of data. There are four main types of security attacks: interruption, interception, modification, and fabrication. Security threats can be classified as disclosure, deception, disruption, or usurpation. Security policies define what is and is not allowed, while mechanisms enforce these policies. Security aims to prevent attacks, detect violations, and enable recovery from any successful attacks. Trust and assumptions underlie all aspects of security policies, mechanisms, operations, and issues.
This document discusses the development of a cloud computing broker that can intelligently select cloud providers and services for customers based on their requirements. It aims to address issues like varying quality of service across providers, flexibility in customer needs, and avoiding vendor lock-in. The proposed broker uses fuzzy logic techniques to select suitable providers based on promised quality of service and trustworthiness. It also monitors services and can trigger migration to another provider if service level agreements are not met. Case studies on infrastructure and software marketplaces demonstrate that the fuzzy-based broker performs better than conventional cost-based approaches.
Mobile cloud computing combines cloud computing, mobile computing and wireless networks to provide data storage and processing services to mobile users without requiring powerful device hardware. This allows mobile apps to be built and updated quickly using cloud services and to seamlessly continue across different devices. Key benefits include improved data access, reliability and flexibility compared to relying solely on local device resources. Effective mobile cloud computing requires dynamic partitioning of apps between mobile devices and cloud servers to optimize for factors like energy usage and execution time.
1. CLOUD COMPUTING
SERVICE LEVEL AGREEMENT (SLA)
PROF. SOUMYA K. GHOSH
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
IIT KHARAGPUR
2. What is Service Level Agreement?
• A formal contract between a Service Provider (SP) and a Service Consumer
(SC)
• SLA: foundation of the consumer’s trust in the provider
• Purpose: to define a formal basis for the performance and availability the SP
guarantees to deliver
• SLA contains Service Level Objectives (SLOs)
– Objectively measurable conditions for the service
– SLA & SLO: basis of selection of cloud provider
3. SLA Contents
• A set of services which the provider will deliver
• A complete, specific definition of each service
• The responsibilities of the provider and the consumer
• A set of metrics to measure whether the provider is offering the services
as guaranteed
• An auditing mechanism to monitor the services
• The remedies available to the consumer and the provider if the terms are
not satisfied
• How the SLA will change over time
4. Web Service SLA
• WS-Agreement
– XML-based language and protocol for negotiating, establishing, and managing
service agreements at runtime
– Specify the nature of agreement template
– Facilitates discovery of compatible providers
– Interaction : request-response
– SLA violation : dynamically managed and verified
• WSLA (Web Service Level Agreement Framework)
– Formal XML-schema based language to express SLA and a runtime interpreter
– Measure and monitor QoS parameters and report violations
– Lack of formal definitions for semantics of metrics
5. Difference between Cloud SLA and Web Service SLA
• QoS Parameters :
– Traditional Web Service : response time, SLA violation rate for reliability, availability, cost of
service, etc.
– Cloud computing : QoS related to security, privacy, trust, management, etc.
• Automation :
– Traditional Web Service : SLA negotiation, provisioning, service delivery, monitoring are not
automated.
– Cloud computing : SLA automation is required for highly dynamic and scalable service
consumption
• Resource Allocation :
– Traditional Web Service : UDDI (Universal Description Discovery and Integration) for
advertising and discovering between web services
– Cloud computing : resources are allocated and distributed globally without any central
directory
6. Types of SLA
• The present marketplace features two types of SLAs:
– Off-the-shelf SLA or non-negotiable SLA or Direct SLA
• Non-conducive for mission-critical data or applications
• Provider creates the SLA template and defines all criteria viz. contract
period, billing, response time, availability, etc.
• Followed by present-day state-of-the-art clouds.
– Negotiable SLA
• Negotiation via external agent
• Negotiation via multiple external agents
7. Service Level Objectives (SLOs)
• Objectively measurable conditions for the service
• Encompasses multiple QoS parameters viz. availability,
serviceability, billing, penalties, throughput, response time, or
quality
• Example :
– “Availability of a service X is 99.9%”
– “Response time of a database query Q is between 3 to 5 seconds”
– “Throughput of a server S at peak load time is 0.875”
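The SLO examples above can be mechanized as a small compliance check. A minimal sketch, assuming illustrative metric names and thresholds (these are not any real provider's SLA terms):

```python
# Sketch: checking measured metrics against SLOs (names/values are illustrative).
slos = {
    "availability_pct": 99.9,   # "Availability of a service X is 99.9%"
    "max_response_s": 5.0,      # "Response time of query Q is between 3 to 5 seconds"
}

def check_slos(measured, slos):
    """Return the list of violated SLO names."""
    violations = []
    if measured["availability_pct"] < slos["availability_pct"]:
        violations.append("availability_pct")
    if measured["response_s"] > slos["max_response_s"]:
        violations.append("max_response_s")
    return violations

print(check_slos({"availability_pct": 99.95, "response_s": 4.2}, slos))  # []
```

A monitoring pipeline would evaluate such checks over each observation period and trigger the SLA's remedy clauses on violations.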
8. Service Level Management
• Monitoring and measuring performance of services based on
SLOs
• Provider perspective :
– Make decisions based on business objectives and technical realities
• Consumer perspective :
– Decisions about how to use cloud services
9. Considerations for SLA
• Business Level Objectives: Consumers should know why they are using cloud
services before they decide how to use cloud computing.
• Responsibilities of the Provider and Consumer: The balance of
responsibilities between providers and consumers will vary according to the
type of service.
• Business Continuity and Disaster Recovery: Consumers should ensure their
cloud providers have adequate protection in case of a disaster.
• System Redundancy: Many cloud providers deliver their services via massively
redundant systems. Those systems are designed so that even if hard drives or
network connections or servers fail, consumers will not experience any
outages.
10. Considerations for SLA (contd…)
• Maintenance: Maintenance of cloud infrastructure affects any kind of cloud offerings
(applicable to both software and hardware)
• Location of Data: If a cloud service provider promises to enforce data location regulations,
the consumer must be able to audit the provider to prove that regulations are being
followed.
• Seizure of Data: If law enforcement targets the data and applications associated with a
particular consumer, the multi-tenant nature of cloud computing makes it likely that other
consumers will be affected. Therefore, the consumer should consider using a third-party to
keep backups of their data
• Failure of the Provider: Consumers should consider the financial health of their provider and
make contingency plans. The provider’s policies of handling data and applications of a
consumer whose account is delinquent or under dispute are to be considered.
• Jurisdiction: Consumers should understand the laws that apply to any cloud providers they
consider.
11. SLA Requirements
• Security: Cloud consumer must understand the controls and federation patterns
necessary to meet the security requirements. Providers must understand what
they should deliver to enable the appropriate controls and federation patterns.
• Data Encryption: Details of encryption and access control policies.
• Privacy: Isolation of customer data in a multi-tenant environment.
• Data Retention and Deletion: Some cloud providers have legal requirements of
retaining data even if it has been deleted by the consumer. Hence, they must be
able to prove their compliance with these policies.
• Hardware Erasure and Destruction: The provider is required to zero out the
memory if a consumer powers off a VM, and even to zero out the platters of a
disk if it is to be disposed of or recycled.
12. SLA Requirements (Contd…)
• Regulatory Compliance: If regulations are enforced on data and applications, the providers should
be able to prove compliance.
• Transparency: For critical data and applications, providers must be proactive in notifying consumers
when the terms of the SLA are breached.
• Certification: The provider should be responsible for proving the certification of any kind of data or
applications and keeping it up to date.
• Monitoring: To eliminate the conflict of interest between the provider and the consumer, a neutral
third-party organization is the best solution to monitor performance.
• Auditability: As the consumers are liable to any breaches that occur, it is vital that they should be
able to audit provider’s systems and procedures. An SLA should make it clear how and when those
audits take place. Because audits are disruptive and expensive, the provider will most likely place
limits and charges on them.
13. Key Performance Indicators (KPIs)
• Low-level resource metrics
• Multiple KPIs are composed, aggregated, or converted to form
high-level SLOs.
• Example :
– downtime, uptime, inbytes, outbytes, packet size, etc.
• Possible mapping :
– Availability (A) = 1 – (downtime/uptime)
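The KPI-to-SLO mapping above can be sketched directly. This uses the slide's own simplified mapping (Availability = 1 − downtime/uptime); the downtime and uptime figures are made-up KPI values for illustration:

```python
# Sketch of composing low-level KPIs into a high-level SLO, using the
# slide's mapping: Availability (A) = 1 - (downtime / uptime).
def availability(downtime_h, uptime_h):
    return 1.0 - (downtime_h / uptime_h)

# e.g. 4.38 hours of downtime against 8760 hours of uptime in a year:
a = availability(4.38, 8760.0)
print(round(a * 100, 3))  # 99.95
```

The resulting percentage can then be compared against an SLO such as "availability is 99.95%".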
14. Industry-defined KPIs
• Monitoring:
– Natural questions:
• “who should monitor the performance of the provider?”
• “does the consumer meet its responsibilities?”
– Solution: neutral third-party organization to perform monitoring
– Eliminates conflicts of interest that arise when:
• The provider reports outages at its sole discretion
• The consumer is responsible for an outage
• Auditability:
– Consumer requirement:
• Is the provider adhering to legal regulations or industry standards?
• SLA should make it clear how and when to conduct audits
15. Metrics for Monitoring and Auditing
• Throughput – How quickly the service responds
• Availability – Represented as a percentage of uptime for a service in a given
observation period.
• Reliability – How often the service is available
• Load balancing – When elasticity kicks in (new VMs are booted or terminated, for
example)
• Durability – How likely the data is to be lost
• Elasticity – The ability for a given resource to grow infinitely, with limits (the
maximum amount of storage or bandwidth, for example) clearly stated
• Linearity – How a system performs as the load increases
16. Metrics for Monitoring and Auditing (Contd…)
• Agility – How quickly the provider responds as the consumer's resource load scales up and
down
• Automation – What percentage of requests to the provider are handled without any human
interaction
• Customer service response times – How quickly the provider responds to a service request.
This refers to the human interactions required when something goes wrong with the
on-demand, self-service aspects of the cloud.
• Service-level violation rate – Expressed as the mean rate of SLA violation due to
infringements of the agreed warranty levels.
• Transaction time – Time that has elapsed from when a service is invoked till the completion
of the transaction, including the delays.
• Resolution time – Time period between detection of a service problem and its resolution.
17. SLA Requirements w.r.t. Cloud Delivery Models
Source: “Cloud Computing Use Cases White Paper”, Version 4.0
18. Example Cloud SLAs
• Amazon EC2 (IaaS): Availability (99.95%) with the following definitions: Service Year
(365 days of the year), Annual Percentage Uptime, Region Unavailability (no external
connectivity during a five-minute period), Eligible Credit Period, Service Credit
• Amazon S3 (Storage-as-a-Service): Availability (99.9%) with the following definitions:
Error Rate, Monthly Uptime Percentage, Service Credit
• Amazon SimpleDB (Database-as-a-Service): No specific SLA is defined and the
agreement does not guarantee availability
• Salesforce CRM (PaaS): No SLA guarantees for the service provided
• Google App Engine (PaaS): Availability (99.9%) with the following definitions: Error
Rate, Error Request, Monthly Uptime Percentage, Scheduled Maintenance, Service
Credits, and SLA exclusions
19. Example Cloud SLAs (contd…)
• Microsoft Azure Compute (IaaS/PaaS): Availability (99.95%) with the following
definitions: Monthly Connectivity Uptime Service Level, Monthly Role Instance Uptime
Service Level, Service Credits, and SLA exclusions
• Microsoft Azure Storage (Storage-as-a-Service): Availability (99.9%) with the following
definitions: Error Rate, Monthly Uptime Percentage, Total Storage Transactions, Failed
Storage Transactions, Service Credit, and SLA exclusions
• Zoho suite (Zoho Mail, Zoho CRM, Zoho Books; SaaS): Allows the user to customize the
service level agreement guarantees based on: Resolution Time, Business Hours &
Support Plans, and Escalation
20. Example Cloud SLAs (contd…)
• Rackspace Cloud Server (IaaS):
– Availability regarding the following: Internal Network (100%), Data Center
Infrastructure (100%), Load Balancers (99.9%)
– Performance related to service degradation: server migration is notified 24 hours
in advance and is completed in 3 hours (maximum)
– Recovery Time: in case of failure, guarantee of restoration/recovery in 1 hour
after the problem is identified
• Terremark vCloud Express (IaaS): Monthly Uptime Percentage (100%) with the
following definitions: Service Credit, Credit Request and Payment Procedure, and
SLA exclusions
21. Example Cloud SLAs (contd…)
• Nirvanix Public, Private, Hybrid Cloud Storage (Storage-as-a-Service): Monthly
Availability Percentage (99.9%) with the following definitions: Service Availability,
Service Credits, Data Replication Policy, Credit Request Procedure, and SLA Exclusions
22. Limitations
• Service measurement
– Restricted to uptime percentage
– Measured by taking the mean of service availability observed over a specific
period of time
– Ignores other parameters like stability, capacity, etc.
• Bias towards vendors
– Measurement of parameters is mostly established according to the vendor's
advantage
• Lack of active monitoring on customer’s side
– Customers are given access to some ticketing systems and are responsible for
monitoring the outages.
– Providers do not provide any access to active data streams or audit trails, nor do
they report any outages.
23. Limitations (contd…)
• Gap between QoS hype and SLA offerings in reality
• QoS in the areas of governance, reliability, availability, security, and
scalability are not well addressed.
• No formal way of verifying whether the SLA guarantees are being complied
with.
• A proper SLA is good for both the provider and the customer
– Provider’s perspective : Improve upon Cloud infrastructure, fair
competition in Cloud market place
– Customer’s perspective : Trust relationship with the provider, choosing
appropriate provider for moving respective businesses to Cloud
24. Expected SLA Parameters
• Infrastructure-as-a-Service (IaaS):
– CPU capacity, cache memory size, boot time of standard images,
storage, scale up (maximum number of VMs for each user), scale
down (minimum number of VMs for each user), On demand
availability, scale uptime, scale downtime, auto scaling, maximum
number of VMs configured on physical servers, availability, cost
related to geographic locations, and response time
• Platform-as-a-Service (PaaS):
– Integration, scalability, billing, environment of deployment (licenses,
patches, versions, upgrade capability, federation, etc.), servers,
browsers, number of developers
27. Cloud Computing : Economics
Prof. Soumya K Ghosh
Department of Computer Science and Engineering
IIT KHARAGPUR
28. Cloud Properties: Economic Viewpoint
9/3/2017 2
• Common Infrastructure
– pooled, standardized resources, with benefits generated by statistical
multiplexing.
• Location-independence
– ubiquitous availability meeting performance requirements, with benefits
deriving from latency reduction and user experience enhancement.
• Online connectivity
– an enabler of other attributes ensuring service access. Costs and
performance impacts of network architectures can be quantified using
traditional methods.
29. Cloud Properties: Economic Viewpoint Contd…
• Utility pricing
– usage-sensitive or pay-per-use pricing, with benefits applying in
environments with variable demand levels.
• On-Demand Resources
– scalable, elastic resources provisioned and de-provisioned without
delay or costs associated with change.
30. Value of Common Infrastructure
• Economies of scale
– Reduced overhead costs
– Buyer power through volume purchasing
• Statistics of Scale
– For infrastructure built to peak requirements:
• Multiplexing demand → higher utilization
• Lower cost per delivered resource than unconsolidated workloads
– For infrastructure built to less than peak:
• Multiplexing demand → reduced unserved demand
• Lower loss of revenue or a Service-Level Agreement violation payout.
31. A Useful Measure of “Smoothness”
• The coefficient of variation CV
– not the same as the variance σ² or the correlation coefficient
• Ratio of the standard deviation σ to the absolute value of the mean |μ|
• “Smoother” curves:
– large mean for a given standard deviation
– or smaller standard deviation for a given mean
• Importance of smoothness:
– a facility with fixed assets servicing highly variable demand will achieve lower
utilization than a similar one servicing relatively smooth demand.
• Multiplexing demand from multiple sources may reduce the coefficient
of variation CV
32. Coefficient of variation CV
• X1, X2, …, Xn independent random variables for demand
– Identical standard deviation σ and mean µ
• Aggregated demand
– Mean: sum of means = n·µ
– Variance: sum of variances = n·σ²
– Standard deviation: √n·σ
– Coefficient of variation: (√n·σ)/(n·µ) = σ/(√n·µ) = (1/√n)·Cv
• Adding n independent demands reduces the Cv by a factor of 1/√n
– Penalty of insufficient/excess resources grows smaller
– Aggregating 100 workloads brings the penalty down to 10%
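The 1/√n reduction can be checked with a quick simulation; the demand distribution and parameters below are arbitrary choices for illustration:

```python
# Simulation of the 1/sqrt(n) effect: aggregating n independent demands with
# identical mean and standard deviation reduces the coefficient of variation
# to Cv/sqrt(n). Gaussian demand and the parameter values are illustrative.
import random
import statistics

def cv(samples):
    return statistics.pstdev(samples) / abs(statistics.fmean(samples))

random.seed(42)
mu, sigma, trials = 100.0, 20.0, 5000

cvs = {}
for n in (1, 100):
    # each trial draws n independent demands and sums them
    agg = [sum(random.gauss(mu, sigma) for _ in range(n)) for _ in range(trials)]
    cvs[n] = cv(agg)
    print(n, round(cvs[n], 3))
# aggregating 100 independent workloads cuts Cv by roughly a factor of 10
```

With Cv = σ/µ = 0.2 for a single workload, the aggregate of 100 comes out near 0.02, matching the 1/√n prediction.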
33. But What about Workloads?
• Negatively correlated demands
– X and (1 − X): their sum is the constant 1, with zero variance
– Achievable by appropriate selection of customer segments
• Perfectly correlated demands
– Aggregated demand: n·X, variance of sum: n²·σ²(X)
– Mean: n·µ, standard deviation: n·σ(X)
– Coefficient of variation remains constant
• Simultaneous peaks
34. Common Infrastructure in Real World
• Correlated demands:
– Private, mid-size and large-size providers can experience similar
statistics of scale
• Independent demands:
– Midsize providers can achieve similar statistical economies to an infinitely
large provider
• Available data on economy of scale for large providers is mixed
– Use of the same COTS computers and components
– Locating near cheap power supplies
– Early entrants built their own automation tools; 3rd parties now take care of it
35. Value of Location Independence
• We used to go to the computers, but applications, services and contents now come
to us!
– Through networks: Wired, wireless, satellite, etc.
• But what about latency?
– Human response latency: 10s to 100s milliseconds
– Latency is correlated with:
• Distance (Strongly)
• Routing algorithms of routers and switches (second order effects)
– Speed of light in fiber: only 124 miles per millisecond
– If the Google word suggestion took 2 seconds, or VoIP had a latency of
200 ms or more, the service would be unusable
36. Value of Location Independence Contd…
• Supporting a global user base requires a dispersed service
architecture
– Coordination, consistency, availability, partition-tolerance
– Investment implications
37. Value of Utility Pricing
• As mentioned before, economy of scale might not be very effective
• But cloud services don’t need to be cheaper to be economical!
• Consider a car
– Buy or lease for INR 10,000/- per day
– Rent a car for INR 45,000/- a day
– If you need a car for 2 days in a trip, buying would be much more costly
than renting
• It depends on the demand
38. Utility Pricing in Detail
• D(t): demand for resources, 0 < t < T
• P = max(D(t)): Peak Demand
• A = avg(D(t)): Average Demand
• B: Baseline (owned) unit cost [BT: Total Baseline Cost]
• C: Cloud unit cost [CT: Total Cloud Cost]
• U (= C/B): Utility Premium [for the rental car example, U = 4.5]
• CT = U × B × ∫₀ᵀ D(t) dt = A × U × B × T
• BT = P × B × T, because the baseline must handle peak demand
• When is cloud cheaper than owning?
– CT < BT ⟺ A × U × B × T < P × B × T ⟺ U < P/A
– i.e. when the utility premium is less than the ratio of peak demand to average demand
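The break-even rule U < P/A derived above can be sketched as a one-line check; the numbers mirror the earlier car example (2 days of use in a 30-day month, utility premium 4.5):

```python
# Sketch of the break-even rule: cloud (pay-per-use) is cheaper than owning
# when the utility premium U is below the peak-to-average demand ratio P/A.
def cloud_cheaper(avg_demand, peak_demand, utility_premium):
    # CT = A*U*B*T and BT = P*B*T, so CT < BT  <=>  U < P/A
    return utility_premium < peak_demand / avg_demand

# Car used 2 days out of 30: A = 2/30 of a car, P = 1 car, U = 4.5
print(cloud_cheaper(avg_demand=2/30, peak_demand=1.0, utility_premium=4.5))  # True
```

Here P/A = 15, so even a 4.5× per-unit premium leaves renting cheaper; a flat demand (P/A = 1) would favor owning.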
39. Utility Pricing in Real World
• In practice demands are often highly spiky
– News stories, marketing promotions, product launches, Internet flash floods
(Slashdot effect), tax season, Christmas shopping, processing a drone
footage for a 1 week border skirmish, etc.
• Often a hybrid model is the best
– You own a car for daily commute, and rent a car when traveling or when you
need a van to move
– Key factor is again the ratio of peak to average demand
– But we should also consider other costs:
• Network cost (both fixed costs and usage costs)
• Interoperability overhead
• Reliability and accessibility
40. Value of on-Demand Services
• Simple problem: when owning your resources, you pay a penalty whenever
your resources do not match the instantaneous demand
I. Either pay for unused resources, or suffer the penalty of missing service delivery
• D(t) – instantaneous demand at time t
• R(t) – resources at time t
• Penalty cost ∝ ∫ |D(t) − R(t)| dt
• If demand is flat, penalty = 0
• If demand is linear, periodic provisioning is acceptable
41. Penalty Costs for Exponential Demand
• Penalty cost ∝ ∫ |D(t) − R(t)| dt
• If demand is exponential (D(t) = e^t), any fixed provisioning interval (t_p)
according to the current demands will fall exponentially behind
• R(t) = e^(t − t_p)
• D(t) − R(t) = e^t − e^(t − t_p) = e^t (1 − e^(−t_p)) = k₁·e^t
• Penalty cost ∝ c·k₁·e^t
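The algebra above can be verified numerically; the lag t_p below is an arbitrary choice:

```python
# Numerical check of the identity above: with exponential demand D(t) = e^t
# and resources lagging by a fixed t_p, the shortfall is
# D(t) - R(t) = e^t - e^(t - t_p) = e^t * (1 - e^(-t_p)) = k1 * e^t,
# i.e. a constant fraction k1 of the demand, so it grows exponentially too.
import math

t_p = 0.5  # illustrative provisioning lag
k1 = 1 - math.exp(-t_p)
for t in (1.0, 2.0, 3.0):
    shortfall = math.exp(t) - math.exp(t - t_p)
    print(round(shortfall / math.exp(t), 6) == round(k1, 6))  # True each time
```

The shortfall stays a fixed fraction of demand, so the integrated penalty over any horizon scales with the demand curve itself.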
42. Coefficient of Variation - Cv
• A statistical measure of the dispersion of data points in a data series around the
mean.
• The coefficient of variation represents the ratio of the standard deviation to the
mean, and it is a useful statistic for comparing the degree of variation from one
data series to another, even if the means are drastically different from each
other
• In the investing world, the coefficient of variation allows you to determine how
much volatility (risk) you are assuming in comparison to the amount of return
you can expect from your investment. In simple language, the lower the ratio
of standard deviation to mean return, the better your risk-return tradeoff.
43. Assignment 1
Consider the peak computing demand for an organization is 120 units. The
demand as a function of time can be expressed as:

D(t) = 50·sin(t), for 0 ≤ t < π/2
D(t) = 20·sin(t), for π/2 ≤ t < π

The resource provisioned by the cloud to satisfy the current demand at time t is
given as:

R(t) = D(t) + δ·(dD(t)/dt)

where δ is the delay in provisioning the extra computing resource on demand.
44. Assignment 1 (contd…)
The cost to provision unit cloud resource for unit time is 0.9 units.
Calculate the penalty and draw inference.
[Assume the delay in provisioning is π/12 time units and the minimum
demand is 0]
(Penalty: Either pay for unused resource or missing service delivery)
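One way to explore this assignment numerically is a simple Riemann sum over the mismatch |D(t) − R(t)|. This is a sketch under stated assumptions (left-endpoint quadrature, an arbitrary step count), not a worked solution:

```python
# Numerical sketch for exploring Assignment 1: integrate |D(t) - R(t)|
# over [0, pi) by a left Riemann sum and weight by the unit cost.
import math

delta = math.pi / 12          # provisioning delay (given)
unit_cost = 0.9               # cost per unit resource per unit time (given)

def D(t):
    return 50 * math.sin(t) if t < math.pi / 2 else 20 * math.sin(t)

def dD(t):                    # derivative on each branch
    return 50 * math.cos(t) if t < math.pi / 2 else 20 * math.cos(t)

def R(t):
    return D(t) + delta * dD(t)

steps = 100000
h = math.pi / steps
penalty = unit_cost * sum(abs(D(i * h) - R(i * h)) * h for i in range(steps))
print(round(penalty, 2))
```

Since |D(t) − R(t)| = δ·|dD(t)/dt| here, the sum converges to 0.9 · δ · ∫|D′(t)| dt, which you can also evaluate in closed form to check your answer.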
46. Cloud Computing : Managing Data
Prof. Soumya K Ghosh
Department of Computer Science and Engineering
IIT KHARAGPUR
47. Introduction
• Relational database
– Default data storage and retrieval mechanism since 80s
– Efficient in: transaction processing
– Example: System R, Ingres, etc.
– Replaced hierarchical and network databases
• For scalable web search service:
– Google File System (GFS)
• Massively parallel and fault tolerant distributed file system
– BigTable
• Organizes data
• Similar to column-oriented databases (e.g. Vertica)
– MapReduce
• Parallel programming paradigm
48. Introduction Contd…
• Suitable for:
– Large volume massively parallel text processing
– Enterprise analytics
• Similar to BigTable data model are:
– Google App Engine’s Datastore
– Amazon’s SimpleDB
49. Relational Databases
• Users/application programs interact with an RDBMS through SQL
• RDBMS parser:
– Transforms queries into memory and disk-level operations
– Optimizes execution time
• Disk-space management layer:
– Stores data records on pages of contiguous memory blocks
– Pages are fetched from disk into memory as requested using pre-fetching and
page replacement policies
50. Relational Databases Contd…
• Database file system layer:
– Independent of OS file system
– Reason:
• To have full control on retaining or releasing a page in memory
• Files used by the DB may span multiple disks to handle large storage
– Uses parallel I/O systems, viz. RAID disk arrays or multi-processor clusters
51. Data Storage Techniques
• Row-oriented storage
– Optimal for write-oriented operations viz. transaction processing applications
– Relational records: stored on contiguous disk pages
– Accessed through indexes (primary index) on specified columns
– Example: B+-tree-like storage
• Column-oriented storage
– Efficient for data-warehouse workloads
• Aggregation of measure columns need to be performed based on values from dimension
columns
• Projection of a table is stored as sorted by dimension values
• Require multiple “join indexes”
– If different projections are to be indexed in sorted order
52. Data Storage Techniques Contd…
Source: “Enterprise Cloud Computing” by Gautam Shroff
53. Parallel Database Architectures
• Shared memory
– Suitable for servers with multiple CPUs
– Memory address space is shared and managed by a symmetric multi-processing (SMP) operating system
– SMP:
• Schedules processes in parallel exploiting all the processors
• Shared nothing
– Cluster of independent servers each with its own disk space
– Connected by a network
• Shared disk
– Hybrid architecture
– Independent server clusters share storage through high-speed network storage viz. NAS (network attached
storage) or SAN (storage area network)
– Clusters are connected to storage via: standard Ethernet, or faster Fiber Channel or Infiniband connections
55. Advantages of Parallel DB over Relational DB
• Efficient execution of SQL queries by exploiting multiple processors
• For shared nothing architecture:
– Tables are partitioned and distributed across multiple processing nodes
– SQL optimizer handles distributed joins
• Distributed two-phase commit locking for transaction isolation between processors
• Fault tolerant
– System failures handled by transferring control to “stand-by” system [for transaction
processing]
– Restoring computations [for data warehousing applications]
56. Advantages of Parallel DB over Relational DB (Contd…)
• Examples of databases capable of handling parallel
processing:
– Traditional transaction processing databases: Oracle, DB2, SQL Server
– Data warehousing databases: Netezza, Vertica, Teradata
57. Cloud File Systems
• Google File System (GFS)
– Designed to manage relatively large files using a very large distributed cluster of
commodity servers connected by a high-speed network
– Handles:
• Failures even during reading or writing of individual files
• Fault tolerant: a necessity
– p(system failure) = 1 − (1 − p(component failure))^N → 1 (for large N)
• Support parallel reads, writes and appends by multiple simultaneous client programs
• Hadoop Distributed File System (HDFS)
– Open source implementation of GFS architecture
– Available on Amazon EC2 cloud platform
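The failure-probability claim above can be checked quickly; the per-component failure probability below is an illustrative assumption:

```python
# Quick check: with per-component failure probability p, the probability that
# at least one of N components fails is 1 - (1 - p)**N, which tends to 1 as
# N grows -- hence fault tolerance is a necessity at GFS scale.
p = 0.001   # illustrative per-component failure probability
for n in (10, 1000, 100000):
    print(n, round(1 - (1 - p) ** n, 4))
# for N = 100000 commodity servers the probability is effectively 1
```

Even a very reliable component yields a near-certain system-level failure once enough components are aggregated, which is why GFS treats failure as the normal case.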
59. GFS Architecture Contd…
• Single Master controls file namespace
• Large files are broken up into chunks (GFS) or blocks (HDFS)
• Typical size of each chunk: 64 MB
– Stored on commodity (Linux) servers called Chunk servers (GFS) or Data nodes (HDFS)
– Replicated three times on different:
• Physical rack
• Network segment
60. Read Operation in GFS
• Client program sends the full path and offset of a file to the Master (GFS)
or Name Node (HDFS)
• Master replies with meta-data for one of the replicas of the chunk where this
data is found.
• Client caches the meta-data for faster access
• It reads data from the designated chunk server
61. Write/Append Operation in GFS
– Client program sends the full path of a file to the Master (GFS) or Name Node (HDFS)
– Master replies with meta-data for all of the replicas of the chunk where this data is found.
– Client sends the data to be appended to all chunk servers
– Chunk servers acknowledge the receipt of this data
– Master designates one of these chunk servers as primary
– Primary chunk server appends its copy of data into the chunk by choosing an offset
• Appending can also be done beyond EOF to account for multiple simultaneous
writers
– Sends the offset to each replica
– If all replicas do not succeed in writing at the designated offset, the primary retries
62. Fault Tolerance in GFS
• Master maintains regular communication with chunk servers
– Heartbeat messages
• In case of failures:
– Chunk server’s meta-data is updated to reflect failure
– For failure of primary chunk server, the master assigns a new primary
– Clients may occasionally still try to contact this failed chunk server
• They update their meta-data from the master and retry
63. BigTable
• Distributed structured storage system built on GFS
• Sparse, persistent, multi-dimensional sorted map (key-value pairs)
• Data is accessed by:
– Row key
– Column key
– Timestamp
Source: “Enterprise Cloud Computing” by Gautam Shroff
64. BigTable Contd…
• Each column can store arbitrary name-value pairs in the form: column-family : label
• Set of possible column-families for a table is fixed when it is created
• Labels within a column family can be created dynamically and at any time
• Each BigTable cell (row, column) can store multiple versions of the data in
decreasing order of timestamp
– As data in each column is stored together, they can be accessed efficiently
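The data model described above can be sketched as a nested map; the row key, column-family labels, and helper names are illustrative, not BigTable's actual API:

```python
# Sketch of the BigTable data model: a sparse map keyed by
# (row key, "column-family:label", timestamp), with multiple versions per
# cell kept in decreasing timestamp order, as on the slide.
from collections import defaultdict

table = defaultdict(dict)

def put(row, family_label, timestamp, value):
    table[row].setdefault(family_label, []).append((timestamp, value))
    table[row][family_label].sort(reverse=True)  # newest version first

def get_latest(row, family_label):
    return table[row][family_label][0][1]

put("com.example/index.html", "contents:html", 100, "<html>v1</html>")
put("com.example/index.html", "contents:html", 200, "<html>v2</html>")
print(get_latest("com.example/index.html", "contents:html"))  # <html>v2</html>
```

A real BigTable additionally fixes the set of column families at table-creation time and stores each family's data together on disk for efficient access.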
66. BigTable Storage Contd…
• Each table is split into different row ranges, called tablets
• Each tablet is managed by a tablet server:
– Stores each column family for a given row range in a separate distributed file, called SSTable
• A single meta-data table is managed by a Meta-data server
– Locates the tablets of any user table in response to a read/write request
• The meta-data itself can be very large:
– Meta-data table can be similarly split into multiple tablets
– A root tablet points to other meta-data tablets
• Supports large parallel reads and inserts even simultaneously on the same table
• Insertions are done in sorted fashion, which requires more work than a simple append
67. Dynamo
• Developed by Amazon
• Supports large volume of concurrent updates, each of which could be small
in size
– Different from BigTable, which supports bulk reads and writes
• Data model for Dynamo:
– Simple <key, value> pair
– Well-suited for Web-based e-commerce applications
– Not dependent on any underlying distributed file system (e.g. GFS/HDFS) for:
• Failure handling
– Data replication
– Forwarding write requests to other replicas if the intended one is down
• Conflict resolution
69. Dynamo Architecture Contd…
• Objects: <Key, Value> pairs with arbitrary arrays of bytes
• MD5: generates a 128-bit hash value
• Range of this hash function is mapped to a set of virtual nodes arranged in a ring
– Each key gets mapped to one virtual node
• The object is replicated at a primary virtual node as well as (N – 1) additional
virtual nodes
– N: number of physical nodes
• Each physical node (server) manages a number of virtual nodes at distributed
positions on the ring
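The ring placement described above can be sketched with MD5 and binary search; the server/vnode naming scheme is an assumption for illustration, not Dynamo's implementation:

```python
# Sketch of Dynamo-style placement: MD5 hashes keys onto a ring of virtual
# nodes; a key is handled by the first virtual node at or after its hash
# (wrapping around), with replicas on the following nodes.
import bisect
import hashlib

def h(key):
    # MD5 gives a 128-bit hash value, as on the slide
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

# virtual node positions on the ring (3 servers x 4 vnodes, illustrative)
vnodes = sorted(h(f"server-{i}-vnode-{j}") for i in range(3) for j in range(4))

def coordinator(key):
    """First virtual node clockwise from the key's hash (wrapping around)."""
    i = bisect.bisect_right(vnodes, h(key)) % len(vnodes)
    return vnodes[i]

print(coordinator("cart/user-42") in vnodes)  # True
```

Because each physical server owns several scattered virtual nodes, adding or removing a server redistributes only a fraction of the keys and spreads that load across the remaining servers.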
70. Dynamo Architecture Contd…
• Load balancing for:
– Transient failures
– Network partition
• Write request on an object:
– Executed at one of its virtual nodes
– Forwards the request to all nodes which have the replicas of the object
– Quorum protocol: maintains eventual consistency of the replicas when a large
number of concurrent reads & writes take place
71. Dynamo Architecture Contd…
• Distributed object versioning
– Write creates a new version of an object with its local timestamp incremented
– Timestamp:
• Captures history of updates
• Versions that are superseded by later versions (having larger vector timestamp) are
discarded
• If multiple write operations on same object occurs at the same time, all versions will be
maintained and returned to read requests
• If conflict occurs:
– Resolution is done by application-dependent logic
72. Dynamo Architecture Contd…
• Quorum consistent:
– Read operation accesses R replicas
– Write operation access W replicas
• If (R + W) > N : system is said to be quorum consistent
– Overheads:
• For efficient write: larger number of replicas to be read
• For efficient read: larger number of replicas to be written into
• Dynamo:
– Implemented by different storage engines at node level: Berkley DB (used
by Amazon), MySQL, etc.
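The quorum condition above can be stated as a one-liner: any read set of R replicas must overlap any write set of W replicas, which holds exactly when R + W > N.

```python
# Sketch of the quorum condition: with N replicas, reading R and writing W
# of them is quorum consistent when R + W > N, because every read set then
# overlaps every write set in at least one replica.
def quorum_consistent(n, r, w):
    return r + w > n

print(quorum_consistent(n=3, r=2, w=2))  # True: read/write sets must overlap
print(quorum_consistent(n=3, r=1, w=2))  # False: a read may miss the write
```

This also shows the trade-off on the slide: lowering W (cheap writes) forces R up (expensive reads), and vice versa.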
73. Datastore
• Google and Amazon offer simple transactional <Key, Value> pair database stores
– Google App Engine’s Datastore
– Amazon’s SimpleDB
• All entities (objects) in Datastore reside in one BigTable table
– Does not exploit column-oriented storage
• Entities table: store data as one column family
Source: “Enterprise Cloud Computing” by Gautam Shroff
74. Datastore contd…
• Multiple index tables are used to support efficient queries
• BigTable:
– Horizontally partitioned (also called sharded) across disks
– Sorted lexicographically by the key values
• Besides lexicographic sorting, Datastore enables:
– Efficient execution of prefix and range queries on key values
• Entities are ‘grouped’ for transaction purpose
– Keys are ordered lexicographically by group ancestry
• Entities in the same group: stored close together on disk
• Index tables: support a variety of queries
– Uses values of entity attributes as keys
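The reason lexicographic sorting enables efficient prefix and range queries can be shown in miniature: with keys kept sorted, a prefix scan is just two binary searches. The key scheme below is hypothetical, chosen only to show entities in the same group landing in a contiguous slice.

```python
# Sketch: prefix queries over lexicographically sorted keys.
import bisect

keys = sorted([
    "Group:alice/Order:001",
    "Group:alice/Order:002",
    "Group:bob/Order:001",
    "Group:bob/Order:007",
])

def prefix_query(sorted_keys, prefix):
    # Two binary searches bound the contiguous run of matching keys.
    lo = bisect.bisect_left(sorted_keys, prefix)
    hi = bisect.bisect_left(sorted_keys, prefix + "\uffff")
    return sorted_keys[lo:hi]

# Entities in the same group sit close together on disk, so a group scan
# is a contiguous slice of the table:
print(prefix_query(keys, "Group:alice/"))
```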
75. Datastore Contd…
• Automatically created indexes:
– Single-Property indexes
• Supports efficient lookup of the records with WHERE clause
– ‘Kind’ indexes
• Supports efficient lookup of queries of form SELECT ALL
• Configurable indexes
– Composite index:
• Supports more complex queries
• Query execution
– The index with the highest selectivity is chosen
78. Introduction
• MapReduce: programming model developed at Google
• Objective:
– Implement large scale search
– Text processing on massively scalable web data stored using BigTable and GFS distributed file
system
• Designed for processing and generating large volumes of data via massively parallel
computations, utilizing tens of thousands of processors at a time
• Fault tolerant: ensures progress of computation even if processors and networks fail
• Example:
– Hadoop: open source implementation of MapReduce (developed at Yahoo!)
– Available on pre-packaged AMIs on Amazon EC2 cloud platform
79. Parallel Computing
• Different models of parallel computing
– Nature and evolution of multiprocessor computer architecture
– Shared-memory model
• Assumes that any processor can access any memory location
• Unequal latency
– Distributed-memory model
• Each processor can access only its own memory and communicates with other processors using message passing
• Parallel computing:
– Developed for compute intensive scientific tasks
– Later found application in the database arena
• Shared-memory
• Shared-disk
• Shared-nothing
81. Parallel Database Architectures Contd…
• Shared memory
– Suitable for servers with multiple CPUs
– Memory address space is shared and managed by a symmetric multi-processing (SMP) operating system
– SMP:
• Schedules processes in parallel exploiting all the processors
• Shared nothing
– Cluster of independent servers each with its own disk space
– Connected by a network
• Shared disk
– Hybrid architecture
– Independent server clusters share storage through high-speed network storage viz. NAS (network
attached storage) or SAN (storage area network)
– Clusters are connected to storage via: standard Ethernet, or faster Fiber Channel or Infiniband
connections
82. Parallel Efficiency
• If a task takes time T on a uniprocessor system, it should ideally take T/p when executed
on p processors
• Inefficiencies introduced in distributed computation due to:
– Need for synchronization among processors
– Overheads of message communication between processors
– Imbalance in the distribution of work to processors
• Parallel efficiency of an algorithm is defined as:
ε = T / (p × Tp), where Tp is the time taken by the parallel implementation on p processors
• Scalable parallel implementation:
– Parallel efficiency remains constant as the size of data is increased along with a
corresponding increase in processors
– Parallel efficiency increases with the size of data for a fixed number of processors
83. Illustration
• Problem: Consider a very large collection of documents, say web pages crawled
from the entire Internet. The problem is to determine the frequency (i.e., total
number of occurrences) of each word in this collection. Thus, if there are n
documents and m distinct words, we wish to determine m frequencies, one for
each word.
• Two approaches:
– Let each processor compute the frequencies for m/p words
– Let each processor compute the frequencies of m words across n/p documents, followed by all the
processors summing their results
• Parallel computing is implemented as a distributed-memory model with a shared
disk, so that each processor is able to access any document from disk in parallel
with no contention
84. Illustration Contd…
• Time to read each word from the document = Time to send the word to
another processor via inter-process communication = c
• Time to add to a running total of frequencies: negligible
• Each word occurs f times in a document (on average)
• Time for computing all m frequencies with a single processor = n × m × f × c
• First approach:
– Each processor must still scan the entire collection to pick out its m/p words,
reading n × m × f words in all
– Parallel efficiency is calculated as:
ε = (n × m × f × c) / (p × n × m × f × c) = 1/p
– Efficiency falls with increasing p
– Not scalable
85. Illustration Contd…
• Second approach
– Number of reads performed by each processor = n/p × m × f
– Time taken to read = n/p × m × f × c
– Time taken to write partial frequencies of m-words in parallel to disk = c × m
– Time taken to communicate partial frequencies to (p - 1) processors and then
locally adding p sub-vectors to generate 1/p of final m-vector of frequencies =
p × (m/p) × c
– Parallel efficiency is computed as:
ε = (n × m × f × c) / (p × (n/p × m × f × c + 2 × m × c)) = 1 / (1 + 2p/(n × f))
86. Illustration Contd…
• Since p << nf, the efficiency of the second approach is higher than that of the first
• In the first approach, each processor reads many words that it does not need to
read, resulting in wasted work
• In the second approach, every read is useful in that it results in a
computation that contributes to the final answer
• Scalable:
– Efficiency remains constant as both n and p increase proportionally
– Efficiency tends to 1 for fixed p and gradually increasing n
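The two approaches can be compared numerically. This is a sketch assuming the standard cost model of the illustration: efficiency 1/p for the first approach and 1/(1 + 2p/(n·f)) for the second; the numbers for n and f are illustrative.

```python
# Comparing the parallel efficiency of the two word-counting approaches.
def eff_first(p):
    # First approach: every processor scans all documents, so speedup is 1.
    return 1 / p

def eff_second(n, f, p):
    # Second approach: overhead is the exchange of partial frequency vectors.
    return 1 / (1 + 2 * p / (n * f))

n, f = 1_000_000, 10  # documents, average occurrences of a word per document
for p in (10, 100, 1000):
    print(p, round(eff_first(p), 4), round(eff_second(n, f, p), 6))
```

For any realistic p << n·f the second approach stays near efficiency 1, while the first decays as 1/p.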
87. MapReduce Model
• Parallel programming abstraction
• Used by many different parallel applications which carry out large-scale
computation involving thousands of processors
• Leverages a common underlying fault-tolerant implementation
• Two phases of MapReduce:
– Map operation
– Reduce operation
• A configurable number of M ‘mapper’ processors and R ‘reducer’ processors are
assigned to work on the problem
• Computation is coordinated by a single master process
88. MapReduce Model Contd…
• Map phase:
– Each mapper reads approximately 1/M of the input from the global file
system, using locations given by the master
– Map operation consists of transforming one set of key-value pairs to
another:
(k1, v1) → [(k2, v2)]
– Each mapper writes computation results in one file per reducer
– Files are sorted by a key and stored to the local file system
– The master keeps track of the location of these files
89. MapReduce Model
Contd…
• Reduce phase:
– The master informs the reducers where the partial computations have been stored
on local files of respective mappers
– Reducers make remote procedure call requests to the mappers to fetch the files
– Each reducer groups the results of the map step by key and performs a
function f on the list of values that correspond to each key:
(k2, [v2]) → (k2, f([v2]))
– Final results are written back to the GFS file system
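The two phases above can be sketched in miniature. This is a single-process illustration of the data flow only; a real deployment distributes the work across M mappers and R reducers with sorted intermediate files and a coordinating master.

```python
# Minimal single-process sketch of the MapReduce phases for word counting.
from collections import defaultdict

def map_phase(doc_id, text):
    # (doc_id, text) -> list of intermediate (word, 1) pairs
    return [(word, 1) for word in text.split()]

def shuffle(mapped):
    # Group intermediate pairs by key, as reducers do after fetching
    # the mappers' files.
    groups = defaultdict(list)
    for word, count in mapped:
        groups[word].append(count)
    return groups

def reduce_phase(word, counts):
    # The function f is summation for word counting.
    return word, sum(counts)

docs = {"d1": "the cat sat", "d2": "the dog sat"}
mapped = [pair for d, text in docs.items() for pair in map_phase(d, text)]
result = dict(reduce_phase(w, c) for w, c in shuffle(mapped).items())
print(result)  # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```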
91. MapReduce: Fault Tolerance
• Heartbeat communication
– Updates are exchanged regarding the status of tasks assigned to workers
– If communication exists but no progress is made: the master duplicates those tasks
and assigns them to processors that have already completed their work
• If a mapper fails, the master reassigns the key-range designated to it to another
working node for re-execution
– Re-execution is required as the partial computations are written into local files,
rather than GFS file system
• If a reducer fails, only the remaining tasks are reassigned to another node, since
the completed tasks are already written back into GFS
92. MapReduce: Efficiency
• General computation task on a volume of data D
• Takes wD time on a uniprocessor (time to read data from disk +
performing computation + time to write back to disk)
• Time to read/write one word from/to disk = c
• Now, the computational task is decomposed into map and reduce stages
as follows:
– Map stage:
• Mapping time = cmD
• Data produced as output = σD
– Reduce stage:
• Reducing time = crσD
• Data produced as output = σµD
93. MapReduce: Efficiency Contd…
• Considering no overheads in decomposing a task into a map and a reduce stage, we have
the following relation:
wD = cD + cmD + crσD + cσµD
• Now, we use P processors that serve as both mappers and reducers in respective phases to
solve the problem
• Additional overhead:
– Each mapper writes to its local disk, followed by each reducer remotely reading from the
local disk of each mapper
• For analysis purposes: the time to read a word locally or remotely is the same
• Time to read data from disk by each mapper = wD/P
• Data produced as output by each mapper = σD/P
94. MapReduce: Efficiency Contd…
• Time required to write into local disk = cσD/P
• Data read by each reducer from its partition in each of the P mappers = σD/P²
• The entire exchange can be executed in P steps, with each reducer r reading
from mapper r + i mod P in step i
• Transfer time from mapper local disk to GFS for each reducer = (σD/P²) × c × P = cσD/P
• Total parallel running time per processor, including the overhead of intermediate disk
reads and writes = wD/P + 2cσD/P
• Parallel efficiency of the MapReduce implementation:
ε_MR = wD / (P × (wD/P + 2cσD/P)) = 1 / (1 + (2c/w)σ)
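The efficiency expression ε_MR = 1 / (1 + (2c/w)σ) can be evaluated numerically. The parameter values below are illustrative; the point is that efficiency is independent of P and D, and degrades as the map stage emits more intermediate data (larger σ).

```python
# Evaluating the MapReduce parallel-efficiency formula for sample parameters.
def eps_mr(c, w, sigma):
    # c: per-word disk read/write cost, w: total per-word work,
    # sigma: ratio of intermediate data to input data
    return 1 / (1 + 2 * c * sigma / w)

c, w = 1.0, 10.0  # illustrative costs
for sigma in (0.1, 1.0, 5.0):
    print(sigma, round(eps_mr(c, w, sigma), 3))
```

Because neither P nor D appears in the expression, the implementation is scalable in the sense of the earlier definition: efficiency stays constant as data and processors grow together.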
95. MapReduce: Applications
• Indexing a large collection of documents
– Important aspect in web search as well as handling structured data
– The map task consists of emitting a word–document-id pair for
each word: (dk, [w1 … wn]) → [(wi, dk)]
– The reduce step groups the pairs by word and creates an index entry for
each word: [(wi, dk)] → (wi, [di1 … dim])
• Relational operations using MapReduce
– Execute SQL statements (relational joins/group by) on large data sets
– Advantages over parallel database
• Large scale
• Fault-tolerance
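The indexing example maps directly to a few lines of code. This compact sketch performs the map emission and reduce grouping in one process; document contents are made up.

```python
# Sketch of the inverted-index MapReduce application.
from collections import defaultdict

def build_inverted_index(docs):
    # Map: emit a (word, doc_id) pair for each word occurrence.
    pairs = [(w, doc_id) for doc_id, text in docs.items() for w in text.split()]
    # Reduce: group pairs by word into one index entry per word.
    index = defaultdict(list)
    for word, doc_id in pairs:
        if doc_id not in index[word]:
            index[word].append(doc_id)
    return dict(index)

docs = {"d1": "cloud storage", "d2": "cloud compute"}
print(build_inverted_index(docs))
# {'cloud': ['d1', 'd2'], 'storage': ['d1'], 'compute': ['d2']}
```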
98. What is OpenStack?
OpenStack is a cloud operating system that controls large pools of compute,
storage, and networking resources throughout a datacenter, all managed
through a dashboard that gives administrators control while empowering their
users to provision resources through a web interface.
Source: OpenStack, http://www.doc.openstack.org
100. ▪ Software as a Service (SaaS)
▪ Browser or thin-client access
▪ Platform as a Service (PaaS)
▪ Built on top of IaaS, e.g. Cloud Foundry
▪ Infrastructure as a Service (IaaS)
▪ Provisions compute, network, and storage
OpenStack Capability
101. ▪ Virtual Machine (VMs) on demand
▪ Provisioning
▪ Snapshotting
▪ Network
▪ Storage for VMs and arbitrary files
▪ Multi-tenancy
▪ Quotas for different projects and users
▪ User can be associated with multiple projects
OpenStack Capability
103. OpenStack Major Components
▪ Service - Compute
▪ Project - Nova
Manages the lifecycle of compute instances in an OpenStack
environment. Responsibilities include spawning, scheduling and
decommissioning of virtual machines on demand.
104. OpenStack Major Components
▪ Service - Networking
▪ Project - Neutron
• Enables Network-Connectivity-as-a-Service for other OpenStack
services, such as OpenStack Compute.
• Provides an API for users to define networks and the attachments
into them.
• Has a pluggable architecture that supports many popular networking
vendors and technologies.
105. OpenStack Major Components
▪ Service - Object storage
▪ Project - Swift
• Stores and retrieves arbitrary unstructured data objects via a RESTFul, HTTP
based API.
• It is highly fault tolerant with its data replication and scale-out architecture. Its
implementation is not like a file server with mountable directories.
• In this case, it writes objects and files to multiple drives, ensuring the data is
replicated across a server cluster.
106. OpenStack Major Components
▪ Service- Block storage
▪ Project- Cinder
• Provides persistent block storage to running instances.
• Its pluggable driver architecture facilitates the creation and management of
block storage devices.
107. OpenStack Major Components
▪ Service - Identity
▪ Project - Keystone
• Provides an authentication and authorization service for other
OpenStack services.
• Provides a catalog of endpoints for all OpenStack services.
108. OpenStack Major Components
▪ Service - Image service
▪ Project - Glance
• Stores and retrieves virtual machine disk images.
• OpenStack Compute makes use of this during instance provisioning.
109. OpenStack Major Components
▪ Service - Telemetry
▪ Project - Ceilometer
• Monitors and meters the OpenStack cloud for billing, benchmarking,
scalability, and statistical purposes.
110. OpenStack Major Components
▪ Service - Dashboard
▪ Project - Horizon
• Provides a web-based self-service portal to interact with underlying
OpenStack services, such as launching an instance, assigning IP
addresses and configuring access controls.
112. 1. User logs in to the UI, specifies
VM params (name, flavor, keys, etc.),
and hits the "Create" button
2. Horizon sends HTTP
request to Keystone. Auth
info is specified in HTTP
headers.
3. Keystone sends
temporary token back to
Horizon via HTTP.
4. Horizon sends a POST
request to Nova API (signed with the given
token).
5. Nova API sends HTTP
request to validate
API token to Keystone.
Openstack Work Flow
114. Provisioning Flow
▪ Nova API makes rpc.cast to Scheduler. It publishes a short message to scheduler queue with VM
info.
▪ Scheduler picks up the message from MQ.
▪ Scheduler fetches information about the whole cluster from database, filters, selects compute node
and updates DB with its ID
▪ Scheduler publishes message to the compute queue (based on host ID) to trigger VM provisioning
▪ Nova Compute gets message from MQ
▪ Nova Compute makes rpc.call to Nova Conductor for information on VM from DB
▪ Nova Compute makes a call to Neutron API to provision network for the instance
▪ Neutron configures IP, gateway, DNS name, L2 connectivity etc.
▪ It is assumed a volume is already created. Nova Compute contacts Cinder to get volume data. Can
also attach volumes after VM is built.
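The message-queue hops in the flow above can be modelled as a toy simulation. This is not OpenStack code: the queues, the scheduling policy, and all names are illustrative, standing in for Nova API's rpc.cast, the scheduler queue, and the per-host compute queue.

```python
# Toy simulation of the Nova provisioning message flow: API -> scheduler
# queue -> compute queue. Not actual OpenStack code.
import queue

scheduler_q, compute_q = queue.Queue(), queue.Queue()

def nova_api_cast(vm_info):
    # rpc.cast: fire-and-forget publish of a short message to the scheduler queue
    scheduler_q.put({"action": "schedule", "vm": vm_info})

def scheduler_step(cluster_hosts):
    # Scheduler picks up the message, filters/weighs hosts (here: least loaded),
    # and publishes to the chosen host's compute queue.
    msg = scheduler_q.get()
    host = min(cluster_hosts, key=lambda h: cluster_hosts[h])
    compute_q.put({"action": "provision", "vm": msg["vm"], "host": host})

def compute_step():
    # Nova Compute gets the message and provisions the VM.
    msg = compute_q.get()
    return f"VM {msg['vm']['name']} provisioned on {msg['host']}"

nova_api_cast({"name": "web-1", "flavor": "m1.small"})
scheduler_step({"node-1": 0.7, "node-2": 0.2})
print(compute_step())  # VM web-1 provisioned on node-2
```

The asynchronous cast is what lets the API return immediately while scheduling and provisioning proceed in the background.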
121. • Ephemeral storage:
• Persists until VM is terminated
• Accessible from within VM as local file system
• Used to run operating system and/or scratch space
• Managed by Nova
• Block storage:
• Persists until specifically deleted by user
• Accessible from within VM as a block device (e.g. /dev/vdc)
• Used to add additional persistent storage to VM and/or run operating system
• Managed by Cinder
• Object storage:
• Persists until specifically deleted by user
• Accessible from anywhere
• Used to store files, including VM images
• Managed by Swift
OpenStack Storage Concepts
122. ▪ User logs into Horizon and initiates VM creation
▪ Keystone authorizes
▪ Nova initiates provisioning and saves state to DB
▪ Nova Scheduler finds appropriate host
▪ Neutron configures networking
▪ Cinder provides block device
▪ Image URI is looked up through Glance
▪ Image is retrieved via Swift
▪ VM is rendered by Hypervisor
Summary