A look back at three years of OpenStack architecture as well as a view of the next version. Presented at OpenStack Korea in Seoul, South Korea on July 18th, 2013.
OpenStack Ecosystem – Xen Cloud Platform and Integration into OpenStack - in...IndicThreads
Session presented at the 2nd IndicThreads.com Conference on Cloud Computing held in Pune, India on 3-4 June 2011.
http://CloudComputing.IndicThreads.com
Abstract: OpenStack is an initiative by Rackspace and NASA that aims to build an open cloud platform supported by a vibrant ecosystem to encourage broad adoption in the market. It is currently a hot favorite of enterprises looking to build an open cloud.
This talk will provide a brief overview of the different OpenStack Modules (Compute and Storage) and explain how to utilize these to build a cloud. We will also explore the newly released Xen Cloud Platform (XCP) and its integration with OpenStack Platform. There will be a hands-on demo (time permitting) where we will show how the integration between the OpenStack Platform and XCP works.
Key Takeaways for the audience:
1) Understanding of the OpenStack platform.
2) How to get started with OpenStack for building your own cloud.
3) Understanding of XCP.
4) How the OpenStack-XCP integration is supposed to work.
5) What the opportunities are for building products that add value in the OpenStack ecosystem.
Speaker: Amit Naik is an Architect at BMC Software with 15 years of experience in IT, including delivering multiple end-to-end projects and products. He has spoken at a range of venues in India and abroad, and is an active blogger and evangelist.
Joint speaker: Prasad Nirantar is a Staff Product Developer at BMC Software. He holds a B.E. in Polymer Engineering from the University of Pune and an MS from the University of Akron, US. He also holds a diploma in business management from Symbiosis University.
OpenStack is the prevailing open source cloud software. It includes numerous API services for programmatic management of all sorts of IaaS and SaaS services: VMs, containers, bare metal, and multi-tenancy. Use this platform to strike the right balance between developer self-service access to your infrastructure and a well-defined platform for next-generation containerized microservice applications that your IT department is happy to support and your CFO is happy to pay for.
Autoscaling OpenStack Natively with Heat, Ceilometer and LBaaS - Shixiong Shang
Autoscaling OpenStack Natively with Heat, Ceilometer and LBaaS: a workshop I delivered at the OpenStack Vancouver Summit (May 2015) jointly with Jason and Sharmin from Cisco Systems.
More details can be found at https://github.com/grimmtheory/autoscale
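A native Heat autoscaling stack of this kind typically wires together an AutoScalingGroup, a ScalingPolicy, and a Ceilometer alarm that fires the policy's alarm URL. The trimmed fragment below is an illustrative sketch only (image, flavor, and thresholds are placeholders, not values from the workshop):

```yaml
heat_template_version: 2014-10-16
resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 4
      resource:
        type: OS::Nova::Server
        properties:
          image: cirros        # placeholder image and flavor
          flavor: m1.small
  scale_up:
    type: OS::Heat::ScalingPolicy
    properties:
      auto_scaling_group_id: { get_resource: asg }
      adjustment_type: change_in_capacity
      scaling_adjustment: 1
      cooldown: 60
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_up, alarm_url] }
```

When average CPU utilization exceeds the threshold, Ceilometer invokes the scaling policy's webhook, which grows the group by one server; a symmetric scale-down policy and alarm would normally accompany it.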
OpenStack Architected Like AWS (and GCP) - Randy Bias
A description of how we built Open Cloud System (OCS), an OpenStack-powered complete cloud operating system. With a focus on AWS and GCE interoperability, we describe why hybrid cloud interoperability matters and how we got there. Anyone can do it and we think you should too.
OpenStack at NTT Resonant: Lessons Learned in Web Infrastructure - Tomoya Hashimoto
These slides were presented at the OpenStack Summit Tokyo.
NTT Resonant Inc., an NTT group company, operates the "goo" Japanese web portal and is a leading provider of Internet services. NTT Resonant has deployed and operated OpenStack as its production service infrastructure since October 2014. The infrastructure started with 400 hypervisors and now accommodates more than 80 services and over 1700 virtual servers, serving roughly 170 million unique users and 1 billion page views per month.
We will share the lessons we have learned from this experience.
https://www.openstack.org/summit/tokyo-2015/videos/presentation/openstack-at-ntt-resonant-lessons-learned-in-web-infrastructure
Spinnaker is a continuous delivery platform created by Netflix and open-sourced in late 2015. Fast-forward three years: Spinnaker can deploy to nine (!) cloud providers and platforms, with many project contributions coming from the cloud providers themselves (Google, Amazon, Microsoft, etc.). This DevOps Toronto talk features a quick overview of what Spinnaker can do.
http://decks.pierre-nick.com/201904_Spinnaker_DevOpsTO/
https://github.com/pndurette/spinnaker-playground
https://github.com/pndurette/decks
An introduction to OpenStack as a project. This overview covers the basic components and architecture of the OpenStack platform, and presents facts about the global and local community.
Watcher, a Resource Manager for OpenStack: Plans for the N-release and Beyond - Antoine Cabot
Watcher is an open source software package which provides a flexible and scalable resource optimization service for multi-tenant OpenStack-based clouds.
Watcher provides a complete optimization loop, including everything from a metrics receiver to an optimization processor and an action plan applier. This provides a robust framework to realize a wide range of cloud optimization goals, including reducing data center operating costs, increasing system performance via intelligent virtual machine migration, and increasing energy efficiency.
The overall goal is that OpenStack-based clouds equipped with Watcher will decrease their Total Cost of Ownership through more efficient use of their infrastructure, targeted optimizations, and closed-loop automation.
In this presentation we will go over the state of Watcher as it is today, its architecture, the team’s accomplishments for the Mitaka release and our plans for the N-release and beyond.
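The loop just described — metrics in, a strategy decides, an action plan of migrations comes out — can be sketched in a few lines. This is not Watcher's actual code or API; the names and the naive consolidation strategy below are purely illustrative:

```python
# Hypothetical sketch of an optimization loop in the spirit of Watcher:
# metrics come in, a strategy decides, and an action plan of VM migrations
# comes out. NOT Watcher's real API; all names are illustrative.

def consolidation_plan(hosts, threshold=0.2):
    """Propose migrations that empty under-utilized hosts.

    `hosts` maps host name -> {"load": float in 0..1, "vms": [vm names]}.
    Returns a list of (vm, source_host, target_host) migration actions.
    """
    donors = [h for h, d in hosts.items() if d["load"] < threshold and d["vms"]]
    targets = [h for h, d in hosts.items() if d["load"] >= threshold]
    plan = []
    for src in donors:
        for vm in hosts[src]["vms"]:
            if not targets:
                break
            # Naive placement: pick the least-loaded non-donor host.
            dst = min(targets, key=lambda h: hosts[h]["load"])
            plan.append((vm, src, dst))
    return plan

hosts = {
    "node1": {"load": 0.05, "vms": ["vm-a"]},   # nearly idle: drain it
    "node2": {"load": 0.60, "vms": ["vm-b", "vm-c"]},
    "node3": {"load": 0.40, "vms": ["vm-d"]},
}
print(consolidation_plan(hosts))  # [('vm-a', 'node1', 'node3')]
```

A real optimizer would also weigh memory, affinity, and SLA constraints, and hand the plan to an applier that executes the migrations; the point here is only the shape of the loop.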
MicroServices at Netflix - challenges of scale - Sudhir Tonse
Microservices have caught on as the design pattern of choice for many companies operating at scale. While microservices and SOA in general have many advantages over monolithic apps, they come with their own challenges, especially when running at scale. These slides were for a 15-minute Meetup talk hosted at Cisco.
Kubermatic How to Migrate 100 Clusters from On-Prem to Google Cloud Without D... - Tobias Schneck
Have you ever thought about migrating your Kubernetes clusters to Google Cloud to get your services closer to your customers? Yes? We too! Join us on an interactive journey to discover the main challenges of live migration at scale: etcds, traffic routing, and application workloads moving from your on-premise platform to GCP. The talk will discuss the current state of the technical concept, known problems, and insights from the already proven migration steps for stateless workloads.
As part of the journey, we'll see the differences between migrating one or one hundred clusters with production workloads. What parts can be automated? What steps may need to be manual? Let's see what an automated solution could look like in the future and which steps are still missing.
Kubernetes Cluster API - managing the infrastructure of multi clusters (k8s ... - Tobias Schneck
Thanks to tools like kubeadm, Terraform, or Ansible, setting up a Kubernetes cluster in a dedicated environment has become attainable, but what about setting up a set of clusters across multiple clouds automatically? That is still a challenge, and the same applies to your own datacenter. In this talk we will look at orchestrating and managing a whole fleet of Kubernetes clusters with the Cluster API project (a subproject of sig-cluster-lifecycle). The main idea is to use the Kubernetes API itself to manage multiple clusters, with their master and worker nodes, the same way you manage your Pods: define the needed resources, and the responsible controller takes care of providing them.
After an overview of the Cluster API concepts, I will show what is needed to implement a Cluster API-conformant machine class/deployment. We will see that adding your own provider is not as hard as you might expect; at the end of the day it just requires implementing a simple interface. The corresponding Kubermatic controllers we implemented at Loodse are available as open source, so it is possible to play around with them. A live demo will show how easy it is to spin up and maintain multiple Kubernetes clusters on different public and on-premise cloud providers from one management cluster. A final wrap-up will summarize the current state of the Cluster API project and the advantages of managing clusters as cattle instead of pets.
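The declarative idea behind Cluster API — declare how many machines you want and let a controller converge reality toward that — can be illustrated with a toy reconcile loop. The names below borrow the project's vocabulary, but this is a simplified sketch, not the real controller code:

```python
# Toy reconcile loop in the spirit of the Cluster API controller pattern:
# compare the desired machine count with the actual machines and converge.
# Simplified sketch; not the real cluster-api implementation.

def reconcile(desired_replicas, actual_machines, provider):
    """Bring `actual_machines` (a list of names) to `desired_replicas`
    by asking the `provider` to create or delete machines."""
    actual = list(actual_machines)
    while len(actual) < desired_replicas:
        actual.append(provider.create_machine())
    while len(actual) > desired_replicas:
        provider.delete_machine(actual.pop())
    return actual

class FakeProvider:
    """Stands in for the simple provider interface a cloud must implement."""
    def __init__(self):
        self.counter = 0
    def create_machine(self):
        self.counter += 1
        return f"machine-{self.counter}"
    def delete_machine(self, name):
        pass  # a real provider would tear the instance down here

provider = FakeProvider()
machines = reconcile(3, ["machine-0"], provider)   # scale up: 1 -> 3
machines = reconcile(2, machines, provider)        # scale down: 3 -> 2
print(machines)  # ['machine-0', 'machine-1']
```

Swapping `FakeProvider` for an implementation that calls a cloud API is, in essence, what "adding your own provider" means: the reconcile logic stays the same.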
Architecture of massively scalable, distributed systems - InfoShare 2015 - Tomasz Zen Napierala
OpenStack is currently the biggest open source project in the known universe. Besides the social reasons, there are technical reasons for its rapid adoption; one of them is definitely the OpenStack architecture: massively scalable, extensible, and distributed. Premature optimization is said to be the root of all evil, but using some fundamental techniques we can save ourselves many problems in the future. Let's look at some core scalability concepts and try to apply them to solve daily problems.
Are you overwhelmed by storage capacity requirements? Are you wondering how web giants are able to store large amounts of data at a fraction of your storage costs?
OpenStack is the fastest growing open-source project to date, and its community builds cloud software. Join us to learn about the two OpenStack storage projects and how your company can take advantage of them.
OpenStack storage allows the use of commodity hardware at massive scales that you can consume as a public, private, or hybrid cloud.
View the on-demand webinar. Special guest speaker Randy Bias, founder and CEO of Cloudscaling and member of the Board of Directors for OpenStack Foundation, and EVault big data expert Joey Yep will inform you about this fast-growing, open-source project: OpenStack.
• OpenStack Swift and Cinder storage projects
• High-level functionality and architecture
• Public, private, and hybrid use-cases
Autonomic Management of Cloud Applications with Tonomi, Gluecon Keynote, 2015 - Victoria Livschitz
Introduction to Tonomi, an autonomic application management platform for cloud applications, delivered as a keynote at Gluecon 2015, Broomfield, Colorado on May 20, 2015.
Enhancing employability through enterprise education: BSc Business Enterprise... - HEA_AH
This presentation is linked to a workshop presented at the HEA Enhancement event ‘Successful students: enhancing employability through enterprise education’. The blog post that accompanies this presentation can be accessed via http://bit.ly/1JIE3wh
Embedding modern languages across the disciplines - Catriona Cunningham - HEA_AH
This presentation is linked to a workshop presented at the HEA Enhancement event ‘Successful students: enhancing employability through enterprise education’. The blog post that accompanies this presentation can be accessed via http://bit.ly/1JIE3wh
This presentation discusses how to achieve continuous delivery by leveraging Docker containers, used here as universal application artifacts. It was presented at Voxxed '15 Bucharest.
This presentation is from the 2016 Enterprise Roadshow series in North America and Europe. This presentation explains the Docker enterprise solution including Containers as a Service workflows powered by Docker Datacenter and the integration with HPE to deliver a container platform on hybrid cloud infrastructure.
Learn more: www.docker.com/enterprise
Cloud giants like Google, Twitter, and Netflix have open-sourced the core building blocks of their infrastructure. The result of many years of cloud experience is now freely available, and anyone can develop their own cloud-native applications: applications that run reliably in the cloud and scale almost arbitrarily. The individual building blocks are growing together into a greater whole, the cloud-native stack. In this session we briefly introduce the most important concepts and current key technologies. We then implement a simple microservice with .NET Core and Steeltoe OSS and, together with selected building blocks for service discovery and configuration, bring it up step by step on a Kubernetes cluster. @BASTAcon #BASTA17 @qaware #CloudNativeNerd
https://basta.net/microservices-services/cloud-native-net-microservices-mit-kubernetes/
A GitOps model for High Availability and Disaster Recovery on EKSWeaveworks
Enterprises today require high availability and disaster recovery for critical business systems. One of the advantages Kubernetes can bring to the table is greater reliability and stability. When disaster strikes, cluster or application recovery should be quick and dependable.
Paul Curtis, Principal Solutions Architect at Weaveworks will demonstrate how to leverage Weave Kubernetes Platform and GitOps to create disaster recovery plans and highly available clusters with minimal effort on EKS.
In this webinar you will learn:
The 4 principles of GitOps (operations by pull request)
How to build for reproducibility, security and scale with EKS from the start
GitOps driven cluster and cluster lifecycle management with WKP
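The "operations by pull request" loop above boils down to: the desired state lives in Git, and an agent continuously diffs it against the live cluster and applies the difference. A toy illustration of that sync step (not WKP or Flux code; resource names are made up):

```python
# Minimal GitOps-style sync: Git holds the desired manifests, the agent
# computes the diff against the live cluster state and applies it.
# Illustrative sketch only; not the actual Weaveworks implementation.

def sync(desired, live):
    """`desired` and `live` map resource name -> spec.
    Returns the actions needed to make `live` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(("create", name))
        elif live[name] != spec:
            actions.append(("update", name))
    for name in live:
        if name not in desired:
            actions.append(("delete", name))  # prune drifted resources
    return actions

desired = {"deploy/web": {"replicas": 3}, "svc/web": {"port": 80}}
live = {"deploy/web": {"replicas": 2}, "deploy/old": {"replicas": 1}}
print(sync(desired, live))
```

Because the loop runs continuously, this same mechanism gives you disaster recovery for free: point the agent at a fresh cluster, and `sync` recreates everything Git says should exist.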
Docker moves very fast, with an edge channel released every month and a stable release every 3 months. Patrick will talk about how Docker introduced Docker EE and a certification program for containers and plugins with Docker CE and EE 17.03 (from March), the announcements from DockerCon (April), and the many new features planned for Docker CE 17.05 in May.
This talk will be about what's new in Docker and what's next on the roadmap
DevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer ToolsAmazon Web Services
Today’s cutting-edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share the processes that Amazon’s engineers use to practice DevOps and discuss how you can bring these processes to your company by using a new set of AWS tools (AWS CodeCommit, AWS CodePipeline, and AWS CodeDeploy). These services were inspired by Amazon's own internal developer tools and DevOps culture.
Modern serverless computing platforms are on everyone's lips. They provide a programming model in which the user no longer has to think about administering servers, storage, networking, virtual machines, high availability, or scalability, and can instead concentrate on writing their own code. The code maps business requirements into small, modular function packages (functions). Functions are the heart of a serverless computing platform: they read from (often standard) input, perform their computation, and produce an output. Function results that need to be kept are stored in a permanent datastore, such as the Autonomous Database. The Autonomous Database has the three properties needed for a modern application development approach: self-driving, self-repairing, and self-securing.
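The read-compute-write shape of a function can be shown in miniature. The handler below is a generic sketch: no specific platform's handler signature is assumed, and a plain dict stands in for the permanent datastore:

```python
# A serverless function in miniature: read input, compute, write the result
# to a permanent datastore. The dict stands in for a real database; no
# specific platform's handler signature is assumed.

datastore = {}

def handler(event):
    """Business logic packaged as a small, stateless function."""
    order_id = event["order_id"]
    total = sum(item["price"] * item["qty"] for item in event["items"])
    datastore[order_id] = total        # persist the result
    return {"order_id": order_id, "total": total}

result = handler({
    "order_id": "o-42",
    "items": [{"price": 9.5, "qty": 2}, {"price": 1.0, "qty": 3}],
})
print(result)  # {'order_id': 'o-42', 'total': 22.0}
```

The platform, not the function, decides when and where `handler` runs and how many copies execute in parallel; keeping all durable state in the datastore is what makes that possible.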
Red Hat and Kubernetes: awesome stuff coming your way
Autonomic Application Delivery with Tonomi
1. Victoria Livschitz, CEO of Tonomi
@vlivschitz
Autonomic Management of Cloud Applications
Autonomic Application Delivery
SVDevOps Meetup
Intuit, May 19, 2015
2. About the speaker
Founder & CEO, Tonomi (formerly Qubell): the first autonomic application delivery and management platform for cloud applications.
Founder & CEO, Grid Dynamics, 2006 - 2013: pioneer of enterprise clouds; leading provider of open, scalable, next-generation technology solutions for Tier 1 retailers.
Principal Architect, Sun, 1997 - 2006: chief architect of GM; chief architect of financial services; senior scientist in SunLabs; technical lead on SunGrid, the world’s first public cloud service.
3. “Every day is a battle to keep up with the pace of innovation”
4. Automation is the Battlefield: Speed and Self-service vs. Stability and Control
9. Tonomi Focus: Adaptive Control
1. Externalize configuration of everything affecting the application and its environment. Enable centralized control over configurations from a cloud.
2. Continuously monitor the health of running applications. Track changes in their environment. Identify triggers that require a controlled response.
3. Adaptively change application configuration by applying orchestrated workflows based on policies. Log all changes for analysis and audit.
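The three-step loop on this slide can be made concrete with a toy sketch: signals come in, policies map them to orchestrated workflows, and every change is logged for audit. All names here are hypothetical; this is not Tonomi's API:

```python
# Toy sketch of an adaptive control loop: observed signals are matched
# against policies, the corresponding workflow is applied, and every change
# is logged for analysis and audit. Hypothetical names; not Tonomi's code.

policies = {
    "cpu_high": "scale_out",     # trigger -> orchestrated workflow
    "node_down": "failover",
}
audit_log = []

def control_loop(signals):
    """For each observed signal with a matching policy, apply and log it."""
    applied = []
    for signal in signals:
        workflow = policies.get(signal)
        if workflow:
            applied.append(workflow)
            audit_log.append((signal, workflow))   # for analysis and audit
    return applied

print(control_loop(["cpu_high", "disk_ok", "node_down"]))
# ['scale_out', 'failover']
```

Signals with no matching policy ("disk_ok" above) are simply ignored; centralizing the policy table is what gives operators control over how the system responds.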
17. Static configurations, forked
[Diagram: Blueprint A (Testing) and Blueprint B (Production) maintained as separate, forked copies. Configuration A: WebLogic on CentOS on EC2, 20 MB of data, a stub API. Configuration B: WebLogic on RedHat, 2 TB of data, the real API.]
18. Tonomi Way: Adaptive Configuration
[Diagram: one WebLogic blueprint deployed into two environments by adapting its configuration. Environment A (Testing): CentOS on EC2, 20 MB of data, a stub API. Environment B (Production): RedHat, 2 TB of data, the real API.]
19. Facilitate Release Pipeline
[Diagram: a commit flows through dynamic environments for CI, regression, integration, performance, user acceptance, mobile testing, and staging, ending in an upgrade.]
25. Developer (every day): a new version of X is ready!
Ops:
• How many live instances of X are affected? Where?
• How do we deliver the change (Docker, Chef, etc.)?
• What related services might be affected?
• What do we do about them, and their dependents?
• How do we keep all systems running while all this is happening?
Also…
• How do I know the change made has been authorized?
• How do I discover the app’s actual configuration?
• How do I compare configs of different instances?
• Where do I find configuration change logs?
26. Use case 1: Launch new environments on-demand, as developer self-service
27-32. Use case 1, step by step. [Diagram, built up across these slides: a Portal, the WISB store ("what it should be", e.g. {A, x, A->x, ….}), the WIRI store ("what it really is", e.g. {A1, ….}), a Controller, and the live environment.]
1. From the Portal, launch a new instance of A.
2. Authorized? The request is checked against WISB.
3. The Portal tells the Controller: do this!
4. The Controller makes a new application instance, A1 (running component X.1).
5. It uses the cloud API, a PaaS, Chef, Docker, etc. to provision it into the live environment.
6. It logs the existence of A1, and all of its components with their configurations.
7. WIRI is updated.
8. Done!
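The WISB/WIRI split in these slides is essentially a desired-state versus observed-state ledger around the controller. A toy sketch of the launch flow (hypothetical names; not Tonomi's actual code):

```python
# Toy sketch of the WISB/WIRI launch flow from the slides: check the request
# against WISB (what it should be), provision, then record the new instance
# in WIRI (what it really is). Hypothetical names; not Tonomi's actual code.

WISB = {"A": {"component": "X.1"}}   # blueprints: what should exist
WIRI = {}                            # live instances: what really exists

def launch(app, instance_id):
    if app not in WISB:                      # 2. Authorized?
        raise PermissionError(f"no blueprint for {app}")
    blueprint = WISB[app]
    # 4-5. Make the instance via cloud API / PaaS / Chef / Docker (elided).
    instance = {"app": app, "component": blueprint["component"]}
    WIRI[instance_id] = instance             # 6-7. Log and update WIRI
    return instance                          # 8. Done!

print(launch("A", "A1"))  # {'app': 'A', 'component': 'X.1'}
```

Keeping WISB and WIRI as separate stores is what lets the controller later answer the Ops questions from slide 25: diff the two, and drift, unauthorized change, and per-instance configuration all become queries.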
33. Use case 2: Publish a new version of a component… new environments are configured correctly with the new blueprint
34. [Diagram: publishing the change. 1. From the Portal, publish change X.2. 2. Authorized? 3. The catalog is updated. 4. A new blueprint for A is generated in WISB. The live environment still runs A1 on X.1.]
35. [Diagram: launching a new instance of A now follows the same flow as use case 1 (authorize, instruct the Controller, provision via cloud API/PaaS/Chef/Docker, log to WIRI), producing A2 on X.2 alongside A1 on X.1.]
40. [Diagram: Docker Hub holds the mapping x.1 -> image ID 1, x.2 -> image ID 2, x.3 -> image ID 3; A1 and A2 are running X.2. 1. A new image (ID 3) appears. 2. The Controller detects it via a trigger or push.]
41. [Diagram: 3. The Controller upgrades all affected instances (X.2 -> X.3). 4. It updates the state of A1, A2, etc. in WIRI.]
43. [Diagram: two live environments, each with its own policy. Environment 1 (test) runs A1 on X.2 under Policy 1; environment 2 (production) runs A2 on X.2 under Policy 2.]
44. [Diagram: 1. From the Portal, launch A in “test”. 2. Authorized? 3. Do this! 4. The Controller makes a new application instance, A1, based on Policy 1.]
45. Qubell is reactive: changes propagate as signals to connected services, which can (a) respond to that change and (b) relay signals further.
46. [Diagram: triggers and signals propagating through the circuits of the connected application fabric, spanning the Portal, WISB, WIRI, the Controller, both live environments, and their policies.]