This presentation provides the latest information on the OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA) v1.0 standard. TOSCA is a standard language used to describe a topology of cloud-based web services, their components, relationships, and the processes that manage them. Key TOSCA concepts such as operational policy modeling, declarative composition, and lifecycle management are covered, along with the benefits both cloud customers and providers derive from using this standard. In addition, open source tooling support for TOSCA in projects such as OpenStack and the newly announced ARIA project from Cloudify is discussed. Insight is given into the direction of the v1.1 specification and its timeline.
Tagging Best Practices for Cloud Governance - RightScale
In the cloud, it’s critical to implement specific global tags across your organization that enable cloud governance and cost management. If, like most enterprises, you are using multiple clouds, you will want to ensure consistency across all of the clouds you use, despite varying tagging capabilities on each cloud.
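As a toy illustration of the consistency point above, a governance script might flag resources that are missing required global tags. The tag keys and helper below are invented for illustration, not taken from the talk:

```python
# Hypothetical illustration of enforcing a global tagging standard across
# clouds. The required tag keys below are assumptions, not a prescription.
REQUIRED_TAGS = {"environment", "cost-center", "owner", "service"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tag keys absent from a resource's tags.

    Keys are lower-cased first, since tag-key casing often varies by cloud.
    """
    return REQUIRED_TAGS - {k.lower() for k in resource_tags}

# Example: a resource tagged inconsistently across clouds.
tags = {"Environment": "prod", "Owner": "team-payments"}
print(sorted(missing_tags(tags)))  # ['cost-center', 'service']
```

Running such a check periodically against each cloud's tag inventory is one way to keep the standard consistent despite each provider's differing tagging capabilities.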
AWS provides a range of compute services, including Amazon EC2, Amazon ECS, AWS Lambda, and AWS Elastic Beanstalk, allowing you to build everything from web applications and mobile backends to data processing applications.
In this session, we will provide an intro-level overview of these services and highlight suitable use cases. We will discuss which service to choose to best get your applications up and running on AWS.
How to test infrastructure code: automated testing for Terraform, Kubernetes, ... - Yevgeniy Brikman
This talk is a step-by-step, live-coding class on how to write automated tests for infrastructure code, including the code you write for use with tools such as Terraform, Kubernetes, Docker, and Packer. Topics covered include unit tests, integration tests, end-to-end tests, test parallelism, retries, error handling, static analysis, and more.
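In the spirit of the unit tests the talk covers, here is a minimal, hypothetical sketch: the fastest infrastructure tests exercise pure configuration logic without touching any cloud. The function and its values are invented for illustration:

```python
# Hypothetical config-rendering logic standing in for infrastructure code,
# e.g. the decisions a module makes from its input variables.
def render_instance_config(env: str, size: str = "small") -> dict:
    sizes = {"small": "t3.micro", "large": "m5.large"}
    if env not in ("dev", "staging", "prod"):
        raise ValueError(f"unknown environment: {env}")
    return {"instance_type": sizes[size], "monitoring": env == "prod"}

# Unit tests: fast and free, because no cloud resources are created.
assert render_instance_config("prod")["monitoring"] is True
assert render_instance_config("dev")["instance_type"] == "t3.micro"
```

Integration and end-to-end tests, as covered in the talk, would then actually deploy real resources and verify them, trading speed for fidelity.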
Microsoft Azure - Introduction to Microsoft's public cloud - Atanas Gergiminov
Microsoft Azure is Microsoft's application platform for the public cloud. The goal of this presentation is to give you a foundation for understanding the fundamentals of Azure, even if you don't know anything about cloud computing.
BriForum 2014 Boston
Dan Brinkmann presents on Identity Providers, SAML, and OAuth. An example of setting up Office 365 to use Active Directory Federation Services is also shown.
[Open Tech Net Summit 2022] Korean PaaS (Kubernetes) Best Practices and DevOps Environment Build Cases - Open Source Consulting
Recently, financial institutions and public agencies have been investing heavily in building PaaS-based systems for next-generation projects and implementing microservices architecture (MSA) on top of them. When considering open-source-based infrastructure, however, many companies run into difficulties with technical support, version upgrades, and the like. One solution to these problems is to leverage community-driven open source foundations.
In this material, you can learn about the advantages of building infrastructure on community open source, along with real-world cases.
Presentation of an OpenStack survey to the Internet Research Lab at National Taiwan University, Taiwan: an OpenStack framework and architecture overview (PPT slides available for download). Materials were collected from various resources and are not originally produced by the author.
Briefly explains Nova, Swift, Glance, Keystone, and Quantum.
Infrastructure-as-Code (IaC) using Terraform - Adin Ermie
Learn the benefits of Infrastructure-as-Code (IaC), what Terraform is and why people love it, along with a breakdown of the basics (including live demo deployments). Then wrap up with a comparison of Azure Resource Manager (ARM) templates versus Terraform, consider some best practices, and walk away with some key resources in your Terraform learning adventure.
This webinar recording explains how to get started with Amazon Elastic MapReduce (EMR). EMR enables fast processing of large structured or unstructured datasets, and in this webinar we demonstrate how to set up an EMR job flow to analyse application logs and perform Hive queries against them. We review best practices around data file organisation on Amazon Simple Storage Service (S3), how clusters can be started from the AWS web console and command line, and how to monitor the status of a Map/Reduce job. The security configuration that allows direct access to the Amazon EMR cluster in interactive mode is shown, and we see how Hive provides a SQL-like environment while allowing you to dynamically grow and shrink the amount of compute used for powerful data processing activities.
Amazon EMR YouTube Recording: http://youtu.be/gSPh6VTBEbY
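As a rough sketch of the log-analysis step described above, the hypothetical snippet below parses one Common Log Format line, the kind of record Hive queries on EMR would aggregate at scale. The field layout is an assumption for illustration:

```python
import re

# Toy parser for a Common Log Format line; at scale, a Hive table definition
# plays this role and queries aggregate millions of such records on EMR.
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def parse_log_line(line: str):
    """Return the parsed fields as a dict, or None if the line is malformed."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

line = '10.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326'
rec = parse_log_line(line)
print(rec["status"], rec["bytes"])  # 200 2326
```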
AWS CloudFormation: Infrastructure as Code | AWS Public Sector Summit 2016 - Amazon Web Services
This session provides the attendee with an overview of our AWS CloudFormation service and helps the customer to realize the benefits of "infrastructure as code." A demo is part of this session.
Identity and access control for custom enterprise applications - SDD412 - AWS... - Amazon Web Services
This session by the AWS Security Jam team looks at some Amazon Cognito patterns used by the Jam Platform. The team shares their experience building SSO-enabled internal apps with fine-grained role-based access control using an identity provider based on Security Assertion Markup Language (SAML) 2.0.
Slide deck of the presentation done at Credit Agricole Corporate and Investment Bank demonstrating KEDA capabilities. The talk focused on different options for scaling in Kubernetes cluster. The demo covered the auto scaling options based on events using KEDA project.
In part one you will learn about benefits of moving Oracle Database Workloads to AWS, licensing and key aspects to consider. Part two is about understanding how to execute migrations, key success factors, and demonstration.
A description of Azure Key Vault: why we need Azure Key Vault and where it fits in a solution; the details of storing keys, secrets, and certificates inside Key Vault; and using Key Vault for encryption and decryption of data.
Best Practices of Infrastructure as Code with Terraform - DevOps.com
When your organization is moving to the cloud, the infrastructure layer transitions from running dedicated servers at limited scale to a dynamic environment, where you can easily adjust to growing demand by spinning up thousands of servers and scaling them down when not in use.
The future of DevOps is infrastructure as code. Infrastructure as code supports the growth of infrastructure and provisioning requests. It treats infrastructure as software: code that can be reused, tested, automated, and version controlled. HashiCorp Terraform applies infrastructure as code throughout its tooling to prevent configuration drift, manage immutable infrastructure, and much more!
Join this webinar to learn why infrastructure as code is the answer to managing large-scale distributed systems and service-oriented architectures. We will cover key use cases, a demo of how to use infrastructure as code to provision your infrastructure, and more:
Agenda:
Intro to Infrastructure as Code: Challenges & Use cases
Writing Infrastructure as Code with Terraform
Collaborating with Teams on Infrastructure
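The drift-prevention point above can be sketched as a minimal desired-state comparison. The resource attributes below are illustrative assumptions, not Terraform's actual implementation:

```python
# Minimal sketch of the drift-detection idea behind IaC tools like Terraform:
# compare declared state against observed state and report the differences.
def diff_state(declared: dict, actual: dict) -> dict:
    drift = {}
    for key, want in declared.items():
        have = actual.get(key)
        if have != want:
            drift[key] = {"declared": want, "actual": have}
    return drift

declared = {"instance_type": "t3.micro", "tags.env": "prod"}
actual = {"instance_type": "t3.small", "tags.env": "prod"}
print(diff_state(declared, actual))
# {'instance_type': {'declared': 't3.micro', 'actual': 't3.small'}}
```

A real tool then goes one step further and plans the changes needed to bring the actual state back to the declared state.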
Developing applications on Amazon Web Services (AWS) or moving your business into the cloud is more straightforward than you think.
This introductory session covers some of the most popular Amazon Web Services: Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), Amazon CloudFront, Amazon Elastic Block Store (EBS), and Amazon Relational Database Service (RDS).
Introduction to Google Cloud Services / Platforms - Nilanchal
The presentation provides a brief introduction to Google Cloud services and platforms. Over the course of these slides, we introduce the different Google Cloud computing options: Compute Engine, App Engine, Cloud Functions, databases, file storage, and the security features of Google Cloud Platform.
TOSCA and OpenTOSCA: TOSCA Introduction and OpenTOSCA Ecosystem Overview - OpenTOSCA
TOSCA is a new standard facilitating platform-independent description of cloud applications.
OpenTOSCA is an open source TOSCA ecosystem including the modeling tool "Winery", the TOSCA runtime "OpenTOSCA", and the self-service portal "Vinothek".
Deployment Automation on OpenStack with TOSCA and Cloudify - Cloudify Community
TOSCA (Topology and Orchestration Specification for Cloud Applications) is an emerging standard for modeling complete application stacks and automating their deployment and management. It’s been discussed in the context of OpenStack for quite some time, mostly around Heat. In this session we’ll discuss what TOSCA is all about, why it makes sense in the context of OpenStack, and how we can take it farther up the stack to handle complete applications, both during and after deployment, on top of OpenStack.
An overview of the OASIS TOSCA standard: Topology and Orchestration Specifica... - Nebucom
TOSCA offers a structured (XML-based) language that defines the different components of an application and the relations between them using an application topology, while capturing all management tasks in management plans. The main motivation behind this document is to provide an informational overview of TOSCA to people who are new to recent developments in the field. As such, this document contains a description of a representative set of works in the literature that have contributed to TOSCA.
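The topology idea can be sketched as a small dependency graph from which an orchestrator derives a deployment order. The node names and relationships below are invented for illustration:

```python
from graphlib import TopologicalSorter

# Toy TOSCA-style topology: each node lists the nodes it depends on
# (its "HostedOn"/"ConnectsTo" relationships). Names are illustrative.
topology = {
    "web_app": {"app_server", "database"},
    "app_server": {"vm"},
    "database": {"vm"},
    "vm": set(),
}

# An orchestrator can derive a valid deployment order from the topology
# alone, which is the essence of TOSCA's declarative composition.
order = list(TopologicalSorter(topology).static_order())
print(order)  # 'vm' comes first, 'web_app' last
```

Management plans then add imperative workflows (e.g. backup, scale-out) on top of this declarative model.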
Watch the videos at http://cloudify.co/webinars/tosca-training-videos
Getting up to speed with TOSCA simple profile in YAML and its ARIA implementation.
Forecast 2014: TOSCA: An Open Standard for Business Application Agility and P... - Open Data Center Alliance
Business applications are the crown jewels of the new, cloud-based, application-centric economy. Cloud service providers and their diverse platform technologies are striving to serve these increasingly complex, mission-critical business applications. However, rapidly accelerating business, technical, and even regulatory requirements for applications make it increasingly difficult for cloud service providers and cloud platform technologies to meet the needs of innovative businesses for speed, accuracy and agility.
What was missing, until recently, was an open standard that would enable business to capture and automate the use of expert knowledge regarding essential details such as business application components, dependencies, and a wide range of requirements that could be automatically matched to corresponding cloud service provider capabilities. Cloud vendor software leveraging such an open standard would enable, for the first time, a truly competitive ecosystem where cloud platform and service providers can leap beyond commoditization in order to compete, innovate, and better serve the accelerating needs of cloud-based businesses.
The Topology and Orchestration Specification for Cloud Applications (TOSCA) is a new open standard created with the active participation of leading technology vendors, cloud service providers, and customers that facilitates all of the above goals and more. TOSCA defines the interoperable description of applications; including their components, relationships, dependencies, requirements, and capabilities, thereby enabling portability and semi-automatic management across cloud providers regardless of underlying platform or infrastructure; thus expanding customer choice, improving reliability, and reducing cost and time-to-value. These characteristics also facilitate the portable, continuous delivery of applications (DevOps) across their entire lifecycle. In short, they empower a much higher level of agility and accuracy for business in the cloud.
The growing impact of TOSCA has already inspired an OASIS Interop with six vendors demonstrating cross-cloud interoperability, an ODCA Proof-of-Concept demonstration, and several open source projects. This lively and fast-paced session is suitable for both business and technology focused thought-leaders, and will provide you with a better understanding of the potential and business impact of TOSCA.
AWS User Group July 2014 - Getting Started with cloud computing and AWS
Slides for the following AWS User Group Talks:
"Public Cloud and AWS Overview" - Ryan Koop, Director of Products and Marketing at Cohesive @ryankoop
"Getting Started in AWS" - Jonny Sywulak, Continuous Delivery Engineer at Stelligent Systems LLC @jonathansywulak
July Sponsors:
Hosts: Cohesive
Beers and drinks: Cohesive
Pizza: el el see
Organizers: Cohesive
Interested in getting involved next time? Have an idea for a talk? email margaret.walkerATcohesive.net
#AWSChicago
Automating Cloud Orchestration with Puppet and Cloudify - Cloudify Community
Ron Zavner, Technical Director, EMEA
Presentation from the last DevOps Israel meetup, where Ron showed how to achieve easy cloud orchestration using Cloudify for the post-deployment phase while plugging into Puppet for configuration management in the pre-deployment and deployment phases, all on OpenStack.
Application and Network Orchestration using Heat & TOSCA - Nati Shalom
The buzzwords Neutron, Heat, and TOSCA come up quite often in the context of OpenStack, and many of us are still trying to make sense of the terminology and its place in the OpenStack world.
Where OpenStack Neutron provides APIs for creating network elements, OpenStack Heat provides an orchestration engine for automating the setup and configuration of OpenStack infrastructure, while TOSCA is a standard for templating and defining application topology and policies (that form the basis for Heat). In this context, it really makes sense to put these all together to achieve application and network automation for OpenStack on steroids.
In this session we will learn how we can use the robust combination of Heat and TOSCA to configure and control resources on Nova and Neutron in order to automate the network configuration as part of the application deployment.
The session will include a demo and code examples that show how you can configure virtual networks, attach public IPs, set up security groups, set up load balancing and automatically scale up/down and more. You will leave this session understanding where Neutron meets Heat and TOSCA.
This talk was delivered as part of OpenStack Paris summit - 2014 - http://openstacksummitnovember2014paris.sched.org/event/2b85b682ccaf3a5961e463b61e2403f8#.VFeuG_TF8mc
Deploy TOSCA Network Functions Virtualization (NFV) Workloads in OpenStackSahdev Zala
This talk was given at the OpenStack Austin Summit 2016 and demonstrates how TOSCA Network Functions Virtualization (NFV) workloads can be deployed in an OpenStack cloud.
Summit 16: Open-O Mini-Summit - Open Source, Orchestration, and OPNFV - OPNFV
Deng Hui, Chair, OPEN-O Governing Board, China Mobile,
Christopher Donley, Chair, OPEN-O Technical Steering Committee, Huawei,
Marc Cohn, Director, OPEN-O Project, The Linux Foundation
Summit 16: OpenStack Tacker - Open Platform for NFV Orchestration - OPNFV
The OpenStack Tacker project has provided viable, community-built open source software for NFV orchestration. The Tacker project is wrapping up its third release, for Mitaka, with many key features such as TOSCA Parser integration, Multi-Site, Enhanced Platform Awareness (EPA), and automatic VIM resource handling. Tacker Multi-Site allows operators to place, manage, and monitor VNFs in multiple OpenStack clouds. TOSCA Parser integration brings the industry's first TOSCA orchestrator based on the OASIS NFV profile standard. Enhanced VNF placement places VNFs in the most efficient way, with CPU pinning, NUMA topology awareness, etc. Beyond Mitaka, the Tacker project is embarking on new areas such as network service orchestration, Service Function Chaining (SFC), and VNF auto scaling. The audience will also learn how Tacker is being incorporated into OPNFV deliverables and into NFV information model standardization efforts.
The Cloud offers real opportunities for full DevOps culture with everything automated and silo free. To make these opportunities come true, one needs to go beyond a simple siloed approach that assumes the IaaS setup is separate from the middleware setup, and altogether different than application deployment.
There is a need for automation of all processes, across layers using a customized workflow approach.
In this talk we will suggest modeling of such workflows and architecture to execute them.
ARIA is an agile reference implementation of automation based on OASIS TOSCA Specification. It is a framework for implementing orchestration software and a command line tool to execute TOSCA based application blueprints.
(SEC301) Strategies for Protecting Data Using Encryption in AWS - Amazon Web Services
Protecting sensitive data in the cloud typically requires encryption. Managing the keys used for encryption can be challenging as your sensitive data passes between services and applications. AWS offers several options for using encryption and managing keys to help simplify the protection of your data at rest. This session will help you understand which features are available and how to use them, with emphasis on AWS Key Management Service and AWS CloudHSM. Adobe Systems Incorporated will present their experience using AWS encryption services to solve data security needs.
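The envelope-encryption pattern behind services such as AWS KMS can be sketched with a toy example. The XOR keystream below is not real cryptography; it only shows the key hierarchy, in which a data key encrypts the payload and a master key wraps the data key:

```python
import hashlib
import os

# Toy illustration of envelope encryption (the pattern behind AWS KMS).
# The XOR keystream is NOT real cryptography: it only shows the shape.

def keystream(key: bytes, n: int) -> bytes:
    """Derive n deterministic bytes from a key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    """Toy symmetric transform: applying it twice restores the input."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

master_key = os.urandom(32)   # held by the key service, never leaves it
data_key = os.urandom(32)     # generated per object

ciphertext = xor(b"sensitive record", data_key)
wrapped_key = xor(data_key, master_key)  # stored alongside the ciphertext

# Decrypt: unwrap the data key with the master key, then decrypt the payload.
plaintext = xor(ciphertext, xor(wrapped_key, master_key))
print(plaintext)  # b'sensitive record'
```

In a real system the wrap/unwrap step happens inside KMS or an HSM, using authenticated ciphers rather than this toy transform.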
Cloud Computing Automation: Integrating USDL and TOSCA - Jorge Cardoso
-- Presented at CAiSE 2013, Valencia, Spain --
Standardization efforts to simplify the management of cloud applications are being conducted in isolation. The objective of this paper is to investigate to what extent two promising specifications, USDL and TOSCA, can be integrated to automate the lifecycle of cloud applications. In our approach, we selected a commercial SaaS CRM platform, modeled it using the service description language USDL, modeled its cloud deployment using TOSCA, and constructed a prototypical platform to integrate service selection with deployment. Our evaluation indicates that a high level of integration is possible. We were able to fully automate the remote deployment of a cloud service after it was selected by a customer in a marketplace. Architectural decisions emerged during the construction of the platform, related to global service identification and access, multi-layer routing, and dynamic binding.
Data Engineer, Patterns & Architecture The future: Deep-dive into Microservic... - Igor De Souza
With Industry 4.0, several technologies are used to analyze data in real time; building, maintaining, and organizing them, however, is a complex and complicated job. Over the past 30 years, several ideas for centralizing data in a single place as the unified, true source of data have been implemented in companies, such as the data warehouse, NoSQL, the data lake, and the Lambda and Kappa architectures.
On the other hand, Software Engineering has been applying ideas to separate applications to facilitate and improve application performance, such as microservices.
The idea is to apply microservice patterns to data and divide the model into several smaller ones. A good way to split it up is to model it using DDD principles. That is how I try to explain and define Data Mesh and Data Fabric.
Bring N-Tier Apps to containers 2015 ContainerCon - Chris Haddad
Containerization is moving from lab work to production application projects. Teams desire to achieve deployment agility, application resilience, and resource optimization. While container cookbooks show simple scenarios, containerizing production N-tier applications requires complex considerations. Chris describes how teams select complementary open source projects (e.g., Docker Compose, Apache Mesos, Mesos Marathon, Google Kubernetes, Apache Stratos) and craft an open source platform that shifts legacy applications away from virtual machines and into containers. He demonstrates how teams effectively manage container dependencies, independently scale container tiers, and deliver quality of service. From a developer's perspective, Chris shows microservice architecture patterns that guide teams toward application packaging strategies and container lifecycle decisions.
This is a must-read for all engineers interested in developing a microservices architecture. Turn your monolithic server into a prolific, multiple-instance solution! Includes a well-known example from Netflix. Please contact me for more details.
Expressing Concept Schemes & Competency Frameworks in CTDL - Credential Engine
This presentation focuses on how the Credential Engine can access third-party resource data stores, and on recipes for mapping and publishing competency frameworks as Linked Data.
Webinar presented live on May 29, 2018
The Cloud Native Computing Foundation builds sustainable ecosystems and fosters a community around a constellation of projects that orchestrate containers as part of a microservices architecture. CNCF serves as the vendor-neutral home for many of the fastest-growing projects on GitHub, including Kubernetes, Prometheus and Envoy, fostering collaboration between the industry’s top developers, end users, and vendors.
In this webinar, Dan Kohn, CNCF Executive Director, will present:
- A brief overview of CNCF
- Evolving monolithic applications to microservices on Kubernetes
- Why Continuous Integration is the most important part of the cloud native architecture
Watch the video: http://www.cloud-council.org/webinars/kubernetes-and-container-technologies-from-cncf.htm
Cloud Foundry is a collection of complementary open source technologies focused on application developers and operators, as well as many projects to support and extend them.
In this webinar on April 24th, Chip Childers provided an overview of Cloud Foundry technologies (Application Runtime, Container Runtime and BOSH) discussing their use cases and core project updates. He discussed the technical benefits of the platform, focus areas for 2018, and major highlights from the Cloud Foundry Summit held April 18-20 in Boston, MA.
To view the video recording & more: http://www.cloud-council.org/webinars/cloud-foundry-roadmap-in-2018.htm
Webinar presented live on February 27, 2018.
Introducing the OMG’s Data Residency Maturity Model
With the rise of managed IT services and cloud computing, sensitive data is regularly moved across countries and jurisdictions, which can be in direct conflict with various international, national, or local regulations dictating where certain types of data can be stored (e.g., the European Union's General Data Protection Regulation, or GDPR). Data residency is also a consideration for data owners responsible for protecting and securing data from unintended access.
The Object Management Group® (OMG®), a technology standards consortium, launched a working group in 2015 to address the challenges of data residency and define a standards roadmap to help stakeholders manage the location of their data and metadata.
Given the complexity of the issue, a stepwise improvement plan is necessary. This webinar will introduce a new Data Residency Maturity Model (DRMM) proposed in December 2017. Similar to the Capability Maturity Model (CMM) invented in 1990 at the Software Engineering Institute (SEI), the DRMM contains five maturity levels aimed at helping an organization improve its practices and governance of data residency. The OMG seeks feedback on the DRMM and calls on all interested parties to contribute to this work.
Webinar presented live on February 15, 2018.
Speakers:
Dan O’Prey, Chair of Hyperledger Marketing Committee and CMO at Digital Asset
Tracy Kuhrt, Community Architect, Hyperledger
Hyperledger is an umbrella open source project started in December 2015 by the Linux Foundation to support the collaborative development of blockchain-based distributed ledgers across industries. A blockchain is a continuously growing list of records, called blocks, that are linked and secured using cryptography. Transactions between two parties are recorded efficiently and in a verifiable and permanent way.
In this webinar, Dan O’Prey and Tracy Kuhrt will present an update on the blockchain market, industry trends, and new Hyperledger projects. They will discuss the technical items delivered over the last 6 months and focus areas for Hyperledger in 2018.
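The "linked and secured using cryptography" description of a blockchain above can be sketched in a few lines; the record contents are illustrative:

```python
import hashlib
import json

# Minimal sketch of a hash-linked ledger: each block carries the hash of the
# previous block, so altering any record invalidates every later link.
def make_block(prev_hash: str, record: dict) -> dict:
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

genesis = make_block("0" * 64, {"from": "alice", "to": "bob", "amount": 5})
block2 = make_block(genesis["hash"], {"from": "bob", "to": "carol", "amount": 2})

def chain_is_valid(chain: list) -> bool:
    """Check that each block points at the hash of the block before it."""
    return all(b["prev"] == a["hash"] for a, b in zip(chain, chain[1:]))

print(chain_is_valid([genesis, block2]))  # True
block2["prev"] = "f" * 64                 # tampering breaks the link
print(chain_is_valid([genesis, block2]))  # False
```

Real distributed ledgers such as Hyperledger's add consensus and permissioning on top of this basic linking so that independent parties can agree on the chain.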
Version 2.0 of Interoperability and Portability for Cloud Computing: A Guide is now available.
http://www.cloud-council.org/deliverables/interoperability-and-portability-for-cloud-computing-a-guide.htm
This paper from the Cloud Standards Customer Council provides a clear definition of interoperability and portability and how these concepts relate to different components in the architecture of cloud computing, each of which needs to be considered in its own right. Version 2.0 reflects the new ISO/IEC 19941 Cloud Computing Interoperability and Portability standard and its facet models of interoperability, data portability, and application portability.
In this webinar, authors of the paper will discuss how to select and provision cloud services indicating how interoperability and portability affect the cost, security and risk involved.
Webinar presented live on January 10, 2018.
Version 3.0 of Security for Cloud Computing: Ten Steps to Ensure Success has just been released for publication. Read it here: http://www.cloud-council.org/deliverables/security-for-cloud-computing-10-steps-to-ensure-success.htm
As organizations consider a move to cloud computing, it is important to weigh the potential security benefits and risks involved and set realistic expectations with cloud service providers. The aim of this guide is to help enterprise information technology (IT) and business decision makers analyze the security implications of cloud computing on their business.
In this webinar, authors of the paper will discuss:
• Security, privacy and data residency challenges relevant to cloud computing
• Considerations that organizations should weigh when migrating data, applications, and infrastructure to a cloud computing environment
• Threats, technology risks, and safeguards for cloud computing environments
• A cloud security assessment to help customers evaluate the security capabilities of cloud service providers
Webinar presented live on August 11, 2017
Today, the majority of big data and analytics use cases are built on hybrid cloud infrastructure. A hybrid cloud is a combination of on-premises and local cloud resources integrated with one or more dedicated cloud(s) and one or more public cloud(s). Hybrid cloud computing has matured to support data security and privacy requirements as well as increased scalability and computational power needed for big data and analytics solutions.
This webinar summarizes what hybrid cloud is, explains why it is important in the context of big data and analytics, and discusses implementation considerations unique to hybrid cloud computing.
The presentation draws from the CSCC's deliverable, Hybrid Cloud Considerations for Big Data and Analytics:
http://www.cloud-council.org/deliverables/hybrid-cloud-considerations-for-big-data-and-analytics.htm
Download the presentation deck here:
http://www.cloud-council.org/webinars/hybrid-cloud-considerations-for-big-data-and-analytics.htm
Webinar presented live on August 8, 2017
The CSCC has published version 2.0 of Cloud Customer Architecture for Big Data & Analytics – a reference architecture that describes elements and components needed to support big data and analytics solutions using cloud computing. Version 2.0 of the architecture includes support for new use cases and cognitive computing. Big data analytics (BDA) and cloud computing are a top priority for CIOs. As cloud computing and big data technologies converge, they offer a cost-effective delivery model for cloud-based analytics. Many companies are experimenting with different cloud configurations to understand and refine requirements for their big data analytics solutions.
This webinar will cover:
- Business reasons to adopt cloud computing for big data and analytics capabilities
- An architectural overview of a big data analytics solution in a cloud environment with a description of the capabilities offered by cloud providers
- Proven architecture patterns that have been deployed in successful enterprise BDA projects
The presentation draws from the CSCC's deliverable, Cloud Customer Architecture for Big Data and Analytics V2.0
http://www.cloud-council.org/deliverables/cloud-customer-architecture-for-big-data-and-analytics.htm
Download the presentation deck here:
http://www.cloud-council.org/webinars/cloud-customer-architecture-for-big-data-and-analytics-v2.htm
Webinar presented live on July 26, 2017
Cloud Management Platforms (CMPs) are integrated products that provide for the management of public, private and hybrid cloud environments. The rise of hybrid IT architectures increases the need for process harmonization and tools interoperability to address these evolving requirements through the use of a CMP.
This webinar will cover:
- A review of CMP capabilities
- How to operate and manage applications and data across hybrid cloud infrastructures
- Available CMPs on the market today
- Evaluation criteria for selecting a CMP to meet your needs
- Deployment considerations
The presentation draws from the CSCC's deliverable, Practical Guide to Cloud Management Platforms.
http://www.cloud-council.org/deliverables/practical-guide-to-cloud-management-platforms.htm
Download the presentation deck here:
http://www.cloud-council.org/webinars/practical-guide-to-cloud-management-platforms.htm
Webinar presented live on July 18, 2017
Blockchain technology has the potential to have a major impact on how institutions process transactions and conduct business. At its core, blockchain features an immutable distributed ledger and a decentralized network that is cryptographically secured. A blockchain is a historical record of all the transactions that have taken place in the network since the beginning of the blockchain and serves as a single source of truth for the network.
Attend this webinar to learn about the capabilities of a Blockchain cloud reference architecture including deployment considerations and specific application examples.
This presentation draws from the CSCC's deliverable, Cloud Customer Architecture for Blockchain. Read it here: http://www.cloud-council.org/deliverables/cloud-customer-architecture-for-blockchain.htm
Download the presentation deck here: http://www.cloud-council.org/webinars/cloud-customer-architecture-for-blockchain.htm
Webinar presented live on June 22, 2017.
Speaker: Chip Childers, CTO, Cloud Foundry
Cloud Foundry is an open source platform for deploying and managing cloud applications. Cloud Foundry CTO, Chip Childers, will provide an overview of the Cloud Foundry platform, its various use cases, and core project updates. Chip will discuss the technical benefits of the platform, highlight the technical direction of the project, explain focus areas for 2017, and provide highlights from the Cloud Foundry Summit, June 13-15 in Santa Clara, CA.
Webinar presented live on May 17, 2017
Hyperledger is an open source collaborative effort created to advance cross-industry blockchain technologies. It is a global collaboration, hosted by The Linux Foundation, including leaders in finance, banking, IoT, supply chain, manufacturing and technology.
In this webinar, Dan O'Prey, CMO at Digital Asset and Chair of the Hyperledger Marketing Committee, and IBM’s Chris Ferris, chair of the Hyperledger Technical Steering Committee, will provide an overview of Hyperledger. They will discuss the basics of distributed ledger technologies, business use cases for blockchain, and how to get involved with Hyperledger projects.
To view a video recording, visit: http://www.cloud-council.org/webinars/the-hyperledger-project-advancing-blockchain-technology-for-business.htm
Webinar presented live on May 11, 2017.
As data is increasingly accessed and shared across geographic boundaries, a growing web of conflicting laws and regulations dictate where data can be transferred, stored, and shared, and how it is protected. The Object Management Group® (OMG®) and the Cloud Standards Customer Council™ (CSCC™) recently completed a significant effort to analyze and document the challenges posed by data residency. Data residency issues result from the storage and movement of data and metadata across geographies and jurisdictions.
Attend this webinar to learn more about data residency:
• How it may impact users and providers of IT services (including but not limited to the cloud)
• The complex web of laws and regulations that govern this area
• The relevant aspects – and limitations -- of current standards and potential areas of improvement
• How to contribute to future work
Read the OMG's paper, Data Residency Challenges and Opportunities for Standardization: http://www.omg.org/data-residency/
Read the CSCC's edition of the paper, Data Residency Challenges: http://www.cloud-council.org/deliverables/data-residency-challenges.htm
Webinar presented live on April 19, 2017
The Cloud Standards Customer Council has published a reference architecture for securing workloads on cloud services. The aim of this new guide is to provide a practical reference to help IT architects and IT security professionals architect, install, and operate the information security components of solutions built using cloud services.
Building business solutions using cloud services requires a clear understanding of the available security services, components and options, allied to a clear architecture which provides for the complete lifecycle of the solutions, covering development, deployment and operations. This webinar will discuss specific security services and corresponding best practices for deploying a comprehensive cloud security architecture.
Read the whitepaper: http://www.cloud-council.org/deliverables/cloud-customer-architecture-for-securing-workloads-on-cloud-services.htm
Webinar presented live on April 11, 2017.
The Cloud Standards Customer Council has published version 2.0 of the Impact of Cloud Computing on Healthcare whitepaper.
Over the past several years, the market dynamics of the healthcare industry have changed significantly with the growing impact of consumerism, digitalization, preventative healthcare and regulations. Attend this webinar to gain a fresh perspective on the current market dynamics, challenges and benefits of cloud computing on healthcare IT.
The webinar presentation will cover:
- Benefits and key considerations of leveraging cloud computing for healthcare IT
- Specific IT trends in the healthcare industry that are addressed most effectively, both technically and economically, by cloud computing
- Guidance on how best to achieve the benefits of cloud computing
Read the whitepaper: http://www.cloud-council.org/deliverables/impact-of-cloud-computing-on-healthcare.htm
Webinar presented live on April 4, 2017
The Cloud Standards Customer Council has published an API Management reference architecture. APIs allow companies to open up data and services to external third party developers, business partners, and internal departments within the company to create innovative channel applications and new business opportunities. An effective API management platform provides a layer of controlled and secure self-service access to core business assets for reuse.
In this webinar, the authors of the reference architecture will cover the architectural components and capabilities that make up a superior API Management Platform and will also cover important runtime characteristics and deployment considerations.
Read the CSCC's paper here: http://www.cloud-council.org/deliverables/cloud-customer-architecture-for-api-management.htm
Webinar presentation March 9, 2017
IT environments are now fundamentally hybrid in nature – devices, systems, and people are spread across the globe, and at the same time virtualized. Achieving integration across this ever changing environment, and doing so at the pace of modern digital initiatives, is a significant challenge.
This presentation introduces a hybrid integration reference architecture published by the Cloud Standards Customer Council. Learn best practices from leading-edge enterprises that are starting to leverage a hybrid integration platform to take advantage of best of breed cloud-based and on-premises integration approaches.
This webinar draws from the CSCC's deliverable, Cloud Customer Architecture for Hybrid Integration. Read it here: http://www.cloud-council.org/deliverables/cloud-customer-architecture-for-hybrid-integration.htm
Webinar presentation January 31, 2017
The CSCC shares a reference architecture for delivering Enterprise Social Collaboration solutions. The presentation covers the technical capabilities and integration requirements necessary to enable social collaboration. Presenters discuss the flows and relationships between business capabilities, functional areas, and architectural components delivered as a cloud solution.
Gain a better understanding of how to leverage social collaboration tools to harness ideas, exchange information, and increase the speed of innovation across the business. The presenters demonstrate a real-world example to illustrate these points.
This webinar draws from the CSCC's deliverable, Cloud Customer Architecture for Enterprise Social Collaboration. Read it here: http://www.cloud-council.org/deliverables/cloud-customer-architecture-for-enterprise-social-collaboration.htm
Webinar presentation: November 17, 2016
Subject matter experts from the CSCC present an overview of the security standards, frameworks, and certifications that exist for cloud computing. We also discuss privacy considerations in light of new regulations (e.g., EU’s General Data Protection Regulation (GDPR)). This presentation helps cloud customers understand and distinguish between the different types of security standards that exist and assess the security standards support of their cloud service providers.
Read the CSCC's deliverable, Cloud Security Standards: What to Expect and What to Negotiate: http://www.cloud-council.org/deliverables/cloud-security-standards-what-to-expect-and-what-to-negotiate.htm
Webinar presentation: November 15, 2016
The topics of interoperability and portability are significant considerations in relation to the use of cloud services, but there is confusion and misunderstanding of exactly what they entail.
Interoperability and Portability for Cloud Computing: A Guide provides a clear definition of interoperability and portability and how these relate to various aspects of cloud computing and to cloud services.
This webinar will describe interoperability and portability in terms of a set of common cloud computing scenarios. This approach assists in demonstrating that both interoperability and portability have multiple aspects and relate to a number of different components in the architecture of cloud computing, each of which needs to be considered in its own right. The aim is to give both cloud service customers and cloud service providers guidance in the provision and selection of cloud services indicating how interoperability and portability affect the cost, security and risk involved.
Download the CSCC's deliverable: http://www.cloud-council.org/deliverables/interoperability-and-portability-for-cloud-computing-a-guide.htm
Paketo Buildpacks: the best way to build OCI images? DevopsDa...Anthony Dahanne
Buildpacks have been around for more than 10 years! At first, they were used to detect and build an application before deploying it to certain PaaS platforms. Then, with their latest generation, Cloud Native Buildpacks (a CNCF incubating project), we became able to create Docker (OCI) images. Are they a good alternative to the Dockerfile? What are the Paketo buildpacks? Which communities support them, and how?
Come find out in this ignite session.
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
Your Digital Assistant.
Making a complex approach simple. A straightforward process saves time. No more waiting to connect with the people who matter to you. Safety first is not a cliché: information is securely protected in cloud storage to prevent any third party from accessing your data.
Would you rather make your visitors feel burdened by making them wait? Or choose VizMan for a stress-free experience? VizMan is an automated visitor management system that works for any industry, including but not limited to factories, societies, government institutes, and warehouses. It is a new-age, contactless way of logging information about visitors, employees, packages, and vehicles. As a digital logbook, VizMan eliminates the need for bundles of paper registers left to collect dust in a corner of a room. It records visitors' essential details, helps schedule meetings between visitors and employees, and assists in supervising employee attendance. With VizMan, visitors don't need to wait for hours in long queues. VizMan handles visitors with the value they deserve, because we know time is important to you.
Feasible Features
One Subscription, Four Modules – Admin, Employee, Receptionist, and Gatekeeper ensures confidentiality and prevents data from being manipulated
User Friendly – can be easily used on Android, iOS, and Web Interface
Multiple Accessibility – Log in through any device from any place at any time
One app for all industries – a Visitor Management System that works for any organisation.
Stress-free Sign-up
Visitor is registered and checked-in by the Receptionist
Host gets a notification, where they opt to Approve the meeting
Host notifies the Receptionist of the end of the meeting
Visitor is checked-out by the Receptionist
Host enters notes and remarks of the meeting
Customizable Components
Scheduling Meetings – Host can invite visitors for meetings and also approve, reject and reschedule meetings
Single/Bulk invites – Invitations can be sent individually to a visitor or collectively to many visitors
VIP Visitors – Additional security of data for VIP visitors to avoid misuse of information
Courier Management – Keeps a check on deliveries like commodities being delivered in and out of establishments
Alerts & Notifications – Get notified on SMS, email, and application
Parking Management – Manage availability of parking space
Individual log-in – Every user has their own log-in id
Visitor/Meeting Analytics – Evaluate notes and remarks of the meeting stored in the system
Visitor Management System is a secure and user friendly database manager that records, filters, tracks the visitors to your organization.
"Secure Your Premises with VizMan (VMS) – Get It Now"
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
How to Position Your Globus Data Portal for Success Ten Good PracticesGlobus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Developing Distributed High-performance Computing Capabilities of an Open Sci...Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Understanding Globus Data Transfers with NetSageGlobus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
Cyaniclab : Software Development Agency Portfolio.pdfCyanic lab
CyanicLab, an offshore custom software development company based in Sweden, India, and Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Strategies for Successful Data Migration Tools.pptxvarshanayak241
Data migration is a complex but essential task for organizations aiming to modernize their IT infrastructure and leverage new technologies. By understanding common challenges and implementing these strategies, businesses can achieve a successful migration with minimal disruption. Data Migration Tool like Ask On Data play a pivotal role in this journey, offering features that streamline the process, ensure data integrity, and maintain security. With the right approach and tools, organizations can turn the challenge of data migration into an opportunity for growth and innovation.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Why React Native as a Strategic Advantage for Startup Innovation.pdfayushiqss
Do you know that React Native is being increasingly adopted by startups as well as big companies in the mobile app development industry? Big names like Facebook, Instagram, and Pinterest have already integrated this robust open-source framework.
In fact, according to a report by Statista, the number of React Native developers has been steadily increasing over the years, reaching an estimated 1.9 million by the end of 2024. This means that the demand for this framework in the job market has been growing making it a valuable skill.
But what makes React Native so popular for mobile application development? It offers excellent cross-platform capabilities among other benefits. This way, with React Native, developers can write code once and run it on both iOS and Android devices thus saving time and resources leading to shorter development cycles hence faster time-to-market for your app.
Let’s take the example of a startup, which wanted to release their app on both iOS and Android at once. Through the use of React Native they managed to create an app and bring it into the market within a very short period. This helped them gain an advantage over their competitors because they had access to a large user base who were able to generate revenue quickly for them.
OASIS TOSCA: Cloud Portability and Lifecycle Management
1. Cloud Portability, Lifecycle Management
and more!
@mrutkowski
Wednesday, 18 May, 2016 @ 11:00 AM EDT
Matt Rutkowski
IBM STSM, Cloud Open Technologies
OASIS TOSCA Chair, Simple Profile WG
2.
▪ What is TOSCA?
▪ milestones & participation
▪ What Makes TOSCA Unique?
▪ intent model
▪ Key Modeling Concepts
Topology, Composition, Portability, Lifecycle (management), Policy
▪ TOSCA’s Growing Eco-System
▪ in open source & standards
▪ What’s Next
▪ work group activities, version 1.1
An important open standard that is enabling a unique Cloud eco-system
supported by a large and growing number of international industry leaders…
TOSCA uses a domain-specific language (DSL) to define interoperable descriptions of:
• Cloud applications, services, platforms, infrastructure and data components, along with their relationships, requirements, capabilities, configurations and operational policies…
• …thereby enabling portability and automated management across cloud providers regardless of underlying platform or infrastructure, thus expanding customer choice, improving reliability and time-to-value while reducing costs.
4. • TOSCA Version 1.0 Specification approved as an OASIS Standard
— published Nov 2013, XML format
• TOSCA Simple Profile v1.0 Specification (YAML format)
— final public review ended March 2016, progressing toward OASIS Standard
— TOSCA Simple Profile v1.1 Specification (target: June 2016)
Supports Domain-Specific Profile Specifications:
– Network Function Virtualization (NFV) Profile v1.0
• Government and Corporate Awareness:
– OASIS: 600+ participant organizations.
5000+ participants spanning 65+ countries
– TOSCA Committee: 170+ people 45+ companies/orgs
– International Standards & Research: ISO/IEC JTC 1 liaison, EU
FP7, ETSI NFV liaison, etc.
• Multi-company Interoperability Demonstrated:
– EuroCloud 2013, Open Data Center Alliance 2014, OSCON 2015,
OpenStack Summit 2016 (Indigo DataCloud)
Associated Companies: includes contributors, reviewers, implementers, users or supporters of the TOSCA Standard via OASIS.
5. TOSCA incorporates both Data and Information Model features and concepts…
… but brings unique orchestration concepts focused on Lifecycle management and State.

Data Models: typically describe the structure (format) of, and enable manipulation (via interfaces) of, the data stored in data management systems, assuring integrity.
• Structure
• Format
• Interfaces

Information Models: typically used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations.
• Types, Relationships
• Properties
• Operations

TOSCA is an Intent Model, which is declarative (with integration points for imperative). The Intent Model adds:
• Topology
• Composition
• Requirements - Capabilities
• State (Nodes, Relationships)
• Lifecycle (Management)
• Policy

TOSCA can work with imperative scripts (e.g., Ansible, Chef, Bash, Ant, etc.) and can include other data models (e.g., JSON, YANG).
7. TOSCA is used first and foremost to describe the topology of the deployment view for cloud applications and services.

(Diagram: source_resource of Node_Type_A declares a Requirement that is matched, via a connect_relationship of type ConnectsTo, to a Capability of target_resource of Node_Type_B; a Tier is modeled as a Group Type.)

Nodes are the resources or components that will be materialized or consumed in the deployment topology.
Relationships express the dependencies between the nodes (not the traffic flow).
Node templates describe components in the topology structure.
Relationship templates describe connections, dependencies and deployment ordering.
Requirement - Capability: relationships can be customized to match specific source requirements to target capabilities.
Groups create Logical, Management or Policy groups (1 or more nodes).
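The node, requirement and relationship concepts above can be sketched in TOSCA Simple Profile YAML v1.0. The template and node names here are illustrative; the types (tosca.nodes.WebApplication, tosca.nodes.Database, tosca.relationships.ConnectsTo) are normative:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  node_templates:
    # source node: declares a requirement to be fulfilled
    my_web_app:
      type: tosca.nodes.WebApplication
      requirements:
        # the database_endpoint requirement is matched against the
        # target node's Endpoint.Database capability via ConnectsTo
        - database_endpoint:
            node: my_database
            relationship: tosca.relationships.ConnectsTo
    # target node: exposes the capability the source requires
    my_database:
      type: tosca.nodes.Database
```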
8. Example: a simple, 2-Tier Cloud application expressed in a TOSCA Service Template

(Diagram: a TOSCA Service Template containing an Application Tier (container) with a Web Server (container) hosting a Web App and PHP Module, and a Database Tier (container) with a DB Server (container) hosting a Database; arrows indicate Containment and Connectivity.)

Service Templates provide the “container” to exchange and reuse topologies:
• Reusable models extend investments by making it easy to compose more valuable and complex apps from existing apps
• Determine dependency boundaries to maximize parallelism of deployments
• Models (dependencies) can be validated by automation to ensure application-aware, policy-aligned configuration, deployment and operational semantics
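A minimal sketch of the 2-Tier example as a Service Template, assuming only normative Simple Profile v1.0 types (node names are illustrative; the host requirement models Containment, the database_endpoint requirement models Connectivity):

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0
description: Simple 2-Tier application (Web tier + Database tier)

topology_template:
  node_templates:
    web_app:
      type: tosca.nodes.WebApplication
      requirements:
        - host: web_server            # Containment: app hosted on server
        - database_endpoint:          # Connectivity: ConnectsTo database
            node: my_database
    web_server:
      type: tosca.nodes.WebServer
      requirements:
        - host: web_host
    web_host:
      type: tosca.nodes.Compute
    my_database:
      type: tosca.nodes.Database
      requirements:
        - host: db_server
    db_server:
      type: tosca.nodes.DBMS
      requirements:
        - host: db_host
    db_host:
      type: tosca.nodes.Compute
```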
11. Abstract nodes in one TOSCA topology can be substituted with another topology.

Orchestrators can “substitute” for abstract nodes as long as all declared “requirements” are met:
• Monitoring Service can be substituted in Cloud Application
• Analytics Service can be substituted in Monitoring Service

(Diagram: Service Template 1, Cloud Application (Topology), contains a Java Application, Web Application Server, SQL Datastore and an abstract Monitoring Service; Service Template 2, Monitoring Service (Topology), contains a Collector, Logger, Monitoring Framework and an abstract Analytics Service; Service Template 3, Analytics Service (Topology), contains an Analytics Engine on Hadoop.)
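A sketch of how Service Template 2 could declare itself as a substitute for the abstract Monitoring Service node. The type name tosca.nodes.MonitoringService and the capability/requirement names are hypothetical, not normative; substitution_mappings itself is the v1.0 mechanism:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  # This whole topology can stand in for an abstract node of the
  # (hypothetical) MonitoringService type in another template,
  # provided every declared requirement and capability is mapped.
  substitution_mappings:
    node_type: tosca.nodes.MonitoringService   # hypothetical type name
    capabilities:
      # expose the collector's endpoint as the service's endpoint
      monitoring_endpoint: [ collector, endpoint ]
    requirements:
      # delegate the analytics requirement to an inner abstract node
      analytics: [ monitoring_framework, analytics ]

  node_templates:
    collector:
      type: tosca.nodes.SoftwareComponent
    logger:
      type: tosca.nodes.SoftwareComponent
    monitoring_framework:
      type: tosca.nodes.SoftwareComponent
```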
13. TOSCA defines Normative Types for different domains; for example, Application and IaaS types are part of the “core” specification (e.g., Web Server, Database, Compute, Block Storage, Network).

Cloud applications declaratively modeled from these normative types can be understood by any Cloud Provider. Unfulfilled Application Requirements can be exported for Orchestrators to fulfill. Templates include (or reference) all necessary configuration and infrastructure requirements.

TOSCA applications, using normative types, are portable to different Cloud infrastructures.

(Diagram: a TOSCA Service Template composing App, DB, Compute1, Compute2, Storage, Network and a Scaling Policy. TOSCA Meta-Model Normative Types: Nodes and Relationships, each with Properties and Attributes, plus Capabilities, Requirements, Interfaces (Operations), Groups and Policies; templates are “composed from” types, and types are “based upon” one another.)
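The meta-model elements (properties, attributes, capabilities, requirements, interfaces) come together when defining a type “based upon” a normative one. A sketch, where the type name my.nodes.CustomWebServer and the log_sink requirement are invented for illustration, while the derived_from target and the capability/relationship types are normative:

```yaml
node_types:
  my.nodes.CustomWebServer:
    derived_from: tosca.nodes.WebServer    # "based upon" a normative type
    properties:
      admin_port:
        type: integer
        default: 8443
    attributes:
      actual_admin_url:
        type: string
    capabilities:
      # offer an additional endpoint beyond those inherited
      admin_endpoint:
        type: tosca.capabilities.Endpoint.Admin
    requirements:
      - log_sink:                          # illustrative requirement name
          capability: tosca.capabilities.Endpoint
          relationship: tosca.relationships.ConnectsTo
```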
14. Example: TOSCA applications are portable to different Cloud infrastructures

By expressing application Requirements independently from cloud provider Capabilities, TOSCA Orchestration performs Automatic Matching & Optimization of Application Requirements against Infrastructure Capabilities. Orchestrators, not applications, deal with the disparate cloud APIs.

(Diagram: a TOSCA Service Template (App, DB, Compute1, Compute2, Storage, Network, Scaling Policy) deployed via TOSCA Orchestration to Cloud Provider A, B or C.)
16. TOSCA models have a consistent view of state-based lifecycle: they have Operations (implementations) that can be sequenced against the state of any dependent resources, and they fit into any Management Framework or Access Control System.

Standardize Resource Lifecycle
• my_resource_name (My_Resource_Type) exposes Lifecycle.Standard operations: create, configure, start, stop, delete

Standardize Relationship Lifecycle
• my_relationship (ConnectsTo) between source_resource (Type_A) and target_resource (Type_B) exposes Lifecycle.Configure operations: pre_config_source, pre_config_target, post_config_source, post_config_target, add_source, add_target, remove_source, remove_target

Lifecycle Customization
• Create new Lifecycles or augment existing ones (via subclassing), e.g., Lifecycle.Configure.NFV, or Lifecycle.Standard extended with pre_config and pre_delete operations
17. The Orchestrator moves the nodes through their Lifecycle States by executing their Lifecycle Operations in topological order.
• Orchestrators can work to deploy nodes in parallel based upon node relationships

Node Lifecycle Operations
Nodes have their own Lifecycle Operations (Lifecycle.Standard: create, configure, start, stop, delete) which are invoked in order to achieve a target state. Implementations (e.g., imperative scripts) can be bound to operations, as for my_resource_name (My_Resource_Type).

Relationship Lifecycle Operations
Relationships also have their own Lifecycle Operations (Lifecycle.Configure: pre_config_source, pre_config_target, post_config_source, post_config_target, add_source, add_target, remove_source, remove_target) to configure or allocate and de-configure or deallocate Node-related resources, as for my_relationship (ConnectsTo) between source_resource (Type_A) and target_resource (Type_B).
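Binding implementations to the Standard lifecycle operations, as described above, looks like this in Simple Profile YAML; the node name and script paths are illustrative, while the Standard interface keyname and its operations are normative:

```yaml
node_templates:
  my_resource_name:
    type: tosca.nodes.SoftwareComponent
    requirements:
      - host: my_server         # assumes a Compute node named my_server
    interfaces:
      Standard:                 # the normative Lifecycle.Standard interface
        create: scripts/create.sh
        configure: scripts/configure.sh
        start: scripts/start.sh
        stop: scripts/stop.sh
        delete: scripts/delete.sh
```

The orchestrator invokes these scripts in lifecycle order (create, configure, start) during deployment, sequencing them against the state of the node's dependencies.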
19. v1.0 includes the groundwork for Placement (Affinity), Scaling and Performance Policies
‒ Orchestrators can evaluate Conditions based on Events that trigger Automatic or Imperative Actions

Policies (each with a Type, an Event and Condition, and an Action) can be declared independently and attached to various points in your models:
1. Interfaces or specific Operations (e.g., the Lifecycle interface of my_app_1, a Compute node with a Container capability)
2. Nodes (e.g., my_database or web-app, both Compute nodes)
3. Groups of Nodes (e.g., a Scaling policy on my_scaling_group containing backend_app, a Compute node)

“Policies are non-functional Requirements independent of nodes”
20. TOSCA Policy Definition (e.g., Placement, Scaling, Performance):

<policy_name>:
  type: <policy_type_name>
  description: <policy_description>
  properties: <property_definitions>
  # allowed targets for policy association
  targets: [ <list_of_valid_target_resources> ]
  triggers:
    <trigger_symbolic_name_1>:
      event: <event_type_name>
      target_filter:
        node: <node_template_name> | <node_type>
        # (optional) reference to a related node via a requirement
        requirement: <requirement_name>
        # (optional) Capability within node to monitor
        capability: <capability_name>
      # Describes an attribute-relative test that
      # causes the trigger’s action to be invoked.
      condition: <constraint_clause>
      action:
        # implementation-specific operation name
        <operation_name>:
          description: <optional description>
          inputs: <list_of_parameters>
          implementation: <script> | <service_name>
    ...

1..N Triggers can be declared.

Event
• Name of a normative TOSCA Event Type that describes an event based upon a resource “state” change, or a change in one or more of the resource’s attribute values

Condition
Identifies:
• the resource (Node) in the TOSCA model to monitor
• optionally, a Capability of the identified node
• the attribute (state) of the resource to evaluate (the condition)

Action
Describes:
• an Operation (name) to invoke, within the declared Implementation, when the condition is met
• optionally, Input parameters to pass to the operation along with any well-defined strategy values
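A concrete instance of the grammar above for a scaling policy. The tosca.policies.Scaling type is normative, but the event name, condition, operation and script are illustrative placeholders (normative event types were still being defined at the time):

```yaml
policies:
  - my_scaling_policy:
      type: tosca.policies.Scaling        # normative policy type
      description: Scale out when CPU utilization stays high
      properties:
        max_instances: 10
      targets: [ my_scaling_group ]       # group declared elsewhere
      triggers:
        cpu_high:
          event: tosca.events.resource.utilization  # illustrative name
          target_filter:
            node: backend_app
            capability: host              # monitor the host capability
          condition: { greater_than: 80 } # attribute-relative test
          action:
            scale_out:                    # implementation-specific op
              description: Add one instance to the group
              inputs: { increment: 1 }
              implementation: scripts/scale_out.sh
```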
21. • Referenced by other Standards
• Open Source
• OpenStack
22. Open Source Projects
• alien4cloud: Topology, Type & LCM Design
http://alien4cloud.github.io/
• Cloudify: Service Orchestration & Management
http://getcloudify.org/
• Indigo DataCloud: Data/computing platform targeted at scientific communities
http://information-technology.web.cern.ch/about/projects/eu/indigo-datacloud
• OpenStack Heat-Translator (IaaS, App Orchestration) and Tacker (Network Function Orchestration)
https://wiki.openstack.org/
• ARIA: Multi-Cloud Orchestration (Amazon, Azure, VMware, OpenStack), open sourced from Cloudify
http://ariatosca.org/
• SeaClouds: Open, Multi-Cloud Management
www.seaclouds-project.eu/media.html
• OPNFV Parser: Deployment Template Translation
https://wiki.opnfv.org/display/parser/Parser
Note: ETSI NFV acknowledges TOSCA can be used as an input model/format
24. • Interoperability (Conformance)
• Goal: Conformance test suite for v1.0; includes tests for each section of the Simple Profile v1.0 specification.
• Each test is a TOSCA Service Template with metadata describing the test using the OASIS Test Assertions (TAG) Standard
• Work underway to publish in a new GitHub repo; announcement target ~May 2016
• Container (Clustering)
• Goal: Finish new Cluster capability definitions and Data Cluster use cases for Simple Profile v1.1
• Instance Model
• Goal: new schema for an Instance Model (reuse existing schema where possible)
• Discussing API potentially enabling capture, export and management of deployed application
• Monitoring
• Goal: Create normative event types for basic operational events
• Focus on events types for Health, Scaling & Performance
• Support basic “Red-Yellow-Green” and Percentage-based monitoring for dashboards
• Network Function Virtualization (NFV)
• Expanded Scope: include Software-Defined Network (SDN) use cases
• Goal: Complete v1.0 Specification, v1.0 Public Review Draft 3 Published (17 March 2016)
• Can model the complete ETSI MANO specification: Network Services, Virtual Network Functions (VNFs), Virtual Links, and Forwarding Paths
• Orchestration demonstrated with OpenStack Tacker Project, multi-VNF use cases for next release
25. Specification Release Targets
• Public Review Draft 01 - target June 2016
• “Final” Public Review Draft - target 3Q 2016
New Features
• Metadata (completed)
• now supported in all Types (Node, Relationship, Capability, Data, etc.)
• Conformance Testing metadata
• Group Type (completed)
• Expanded Group Type to allow management of member resources (i.e., Lifecycle)
• Has its own Capabilities and Requirements
• Policy Definition (completed)
• Event-Condition-Action model
• Includes Event Filters and Triggers
• Workflow (80% completed)
• Intermix declarative with Imperative (e.g., Ansible, Chef, Ant, Bash)
• Preserve investment in existing scripts for complex installations / configurations
• Cluster Type (75% completed)
• Add support for Cluster normative type; based upon new Group Type
• Will support new normative LoadBalancer, Scalable and Router Capability Types
• Data Clusters (e.g., Cassandra, MongoDB, etc.) – In-Progress
26.
• TOSCA Technical Committee Public Page (latest documents, updates, and more)
— https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca
• OASIS YouTube Channel, TOSCA Playlist
—https://www.youtube.com/user/OASISopen , http://bit.ly/1BQGGHm
• LinkedIn Group: “TOSCA OASIS Standard”:
— https://www.linkedin.com/groups/8505536
• TOSCA Simple Profile in YAML v1.0 (final public review draft 04, Feb. 2016)
— http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/TOSCA-Simple-Profile-YAML-v1.0.html
• TOSCA Simple Profile for NFV v1.0 (latest public review draft, 17 March 2016)
– http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/tosca-nfv-v1.0.html
• Contact the Technical Committee Co-Chairs:
– Paul Lipton, paul.lipton@ca.com; Simon Moser, smoser@de.ibm.com