PaaS Ecosystem
Overview
Dmitry Meytin
December 2015
What is PaaS?
Not iPaaS, mPaaS, or aPaaS
Code → Push and Build → Deploy & Run → Monitor & Bug Discovery → Fix the bug → Update & Build → Gradually Redeploy → Scale Horizontally & Vertically → Enjoy automation
PaaS. What else?
Timeline
(Timeline figure, 2006 · 2007 · 2008 · 2015: Heroku-like PaaS, Lattice, Swarm, Marathon)
Heroku-like PaaS
• Buildpacks
• Containers
• Application vs Service
• 12-factor applications
• Multi-cloud vs. cloud-specific
Buildpack magic
bin/detect, bin/compile
• Installing plugins
• Building artifacts
• Caching artifacts
bin/release
• Addons (e.g. mysql:50mb)
• Config_vars (e.g. PATH: "bin:/usr/bin:/bin")
• Procfile (e.g. hello param1 param2)
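To make the contract concrete, here is a minimal sketch of a hypothetical "hello" buildpack. The script names and arguments follow the classic buildpack API; the marker file hello.cfg, the vendor directory, and the emitted values (which mirror the examples above) are purely illustrative.

    bin/detect (BUILD_DIR): decide whether this buildpack applies
        #!/usr/bin/env bash
        # Print a name and exit 0 if the pushed code looks like ours.
        [ -f "$1/hello.cfg" ] && { echo "hello"; exit 0; }   # hello.cfg is a made-up marker file
        exit 1

    bin/compile (BUILD_DIR CACHE_DIR ENV_DIR): install plugins, build and cache artifacts
        #!/usr/bin/env bash
        set -e
        build_dir="$1"; cache_dir="$2"
        mkdir -p "$cache_dir" "$build_dir/vendor"
        # ...install plugins / build artifacts here, writing reusable output to $cache_dir...
        cp -R "$cache_dir/." "$build_dir/vendor/"            # reuse cached artifacts on later builds

    bin/release (BUILD_DIR): emit YAML describing addons, config vars and process types
        #!/usr/bin/env bash
        echo 'addons:'
        echo '  - mysql:50mb'
        echo 'config_vars:'
        echo '  PATH: "bin:/usr/bin:/bin"'
        echo 'default_process_types:'
        echo '  web: hello param1 param2'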
Container Creation Process
Application vs Service
The Twelve-Factor App (1)
I. Codebase
One codebase tracked in revision control,
many deploys
II. Dependencies
Explicitly declare and isolate dependencies
(Diagram: several codebases consume library versions V1/V2/V3 through a packaging system)
III. Backing Services
Treat backing services as attached resources
IV. Config
Store config in the environment
- File
- Environment variables
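A minimal sketch of factors III and IV together (the variable names and the bin/web binary are illustrative): the backing service is attached through a URL kept in the environment, so the same build can be pointed at a different database per deploy without touching the code.

    # Per-deploy configuration lives in the environment, not in the codebase.
    export DATABASE_URL="postgres://staging-db.internal:5432/app"   # attach/detach a backing service per deploy
    export LOG_LEVEL="debug"
    exec bin/web                                                    # same artifact in every environment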
The Twelve-Factor App (2)
V. Build, release, run
Strictly separate build, release and run stages
VI. Processes
Execute the app as one or more stateless
processes
- Fully stateless
- Share-nothing
VII. Port binding
Export services via port binding
• The twelve-factor app is completely self-contained
• There is no runtime injection of a webserver into the execution environment
VIII. Concurrency
Scale out via the process model
The share-nothing, horizontally partitionable nature of twelve-factor app processes means that adding more concurrency is a simple and reliable operation.
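A sketch of factors VI–VIII under the same assumptions (bin/web and bin/worker are hypothetical binaries, and the --port flag is illustrative): the app carries its own server, binds to whatever port the platform hands it, and scales out by running more of these stateless processes.

    # Port binding: the platform injects a port number, never a webserver.
    PORT="${PORT:-5000}"
    exec bin/web --port "$PORT"        # completely self-contained HTTP server

    # Concurrency via the process model (Procfile-style declaration):
    #   web:    bin/web --port $PORT
    #   worker: bin/worker
    # Scaling out means running more web/worker processes, not growing one big process.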
The Twelve-Factor App (3)
IX. Disposability
Maximize robustness with fast startup and graceful shutdown
- Processes should strive to minimize startup time
- Processes shut down gracefully when they receive a SIGTERM signal from the process manager
- Handle unexpected, non-graceful terminations (crash-based design)
X. Dev/prod parity
Keep development, staging, and production as similar as possible
- Minimize the time gap: a developer may write code and have it deployed hours or even just minutes later
- Minimize the personnel gap: developers who wrote code are closely involved in deploying it and managing it in production
- Minimize the tools gap: keep development and production as similar as possible
XI. Logs
Treat logs as event streams
• Each running process writes its event stream, unbuffered, to STDOUT
• Use log routers and log aggregators
• Build an alerting system according to user-defined heuristics
XII. Admin processes
Run admin/management tasks as one-off processes
• Running database migrations
• Running one-time scripts committed into the app’s repo
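A sketch tying factors IX and XI together (the heartbeat loop is illustrative): the process writes its event stream, unbuffered, to stdout and treats SIGTERM from the process manager as a request to finish up and exit.

    #!/usr/bin/env bash
    # Logs go to stdout; routing and aggregation are the platform's job.
    trap 'echo "event=shutdown reason=SIGTERM"; exit 0' TERM

    while true; do
      echo "event=heartbeat ts=$(date -u +%FT%TZ)"   # unbuffered event stream
      sleep 5 & wait $!                              # interruptible sleep so the trap fires promptly
    done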
CloudFoundry Architecture v2.0
(not Lattice/DIEGO)
Cloud Foundry Service Broker
Blueprint-like PaaS
Blueprint-based PaaS
• No code-to-binary step
• Multi-layered
• Legacy application support
• Service discovery
• VMs and/or Containers
• Rich life-cycle management
Blueprint Standards
• OASIS TOSCA (Physical Host + VMs +
Containers)
• Murano PL (VMs + Containers)
• Virtuozzo Application Packaging Standard
(Containers/VMs)
• Jelastic Packaging Standard (Containers)
• Kubernetes/Swarm/Marathon (Containers)
• HashiCorp Configuration Language (Containers)
• …
OASIS TOSCA
TOSCA Example
StackStorm
Cloudify
Container Evolution
Container Evolution
• High Adoption
• Network Management
• Volume Management (persistent data)
• Security
• Service discovery
• Hardware Acceleration (Intel Clear Containers)
• Live Migration!
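A few Docker CLI lines, circa Docker 1.9, sketching the network, volume and discovery items above (appnet, dbdata and myapp are made-up names):

    docker network create appnet                       # user-defined network
    docker volume create --name dbdata                 # named volume for persistent data
    docker run -d --name db  --net=appnet -v dbdata:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.6
    docker run -d --name web --net=appnet myapp        # "db" is resolvable by name on appnet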
μicroservices
Simple NetflixOSS style microservices
architecture on three AWS Availability
Zones
Container Evolution
Following Borg & Omega
Warehouse-Scale Computer OS
Kubernetes
Conceptual difference –
remediation vs self-organization
Self-Organization
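The same point in kubectl terms (a sketch with a made-up name, using the 1.x-era CLI): you declare the desired replica count and the control loop keeps converging actual state toward it, instead of an external monitor remediating each failure by hand.

    kubectl run hello --image=nginx --replicas=3   # declare desired state: 3 replicas
    kubectl get rc hello                           # replication controller reports desired vs. actual
    kubectl scale rc hello --replicas=10           # change the declaration; Kubernetes reconciles
    # Delete a pod and the controller recreates it; no external remediation step is involved.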
More Tools
Vamp.io
Source-to-image
OpenShift v3.0
Fabric8
CloudFoundry Diego (DEA in Go)
HashiCorp Atlas
SERF
Nomad
Consul
Vault
HCL
Otto
• otto compile
• otto dev
• otto infra
• otto build
• otto deploy
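Roughly how those commands chain together on a project directory, as a sketch of the Otto 0.x workflow (the tool was brand new at the time):

    otto compile    # read the (optional) Appfile, detect the app type, generate deployment metadata
    otto dev        # bring up a local development environment
    otto infra      # provision target infrastructure (Terraform under the hood)
    otto build      # build a deployable artifact of the application
    otto deploy     # push the built artifact onto that infrastructure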
(Diagram: Otto with Nomad, Terraform, Consul, and Vault)
DevOps Process
Orientation map, anyone?
P24E – Programmable InfrastructurE
Conclusions
• Don’t hesitate to try
• You can still create your own solution
Q&A

PaaS Ecosystem Overview

Editor's Notes

  • #19 Codebase – if there are multiple codebases, it’s not an app, it’s a distributed system. Dependencies – Maven, pip, bundle. Backing services – loose coupling to the deploy they are attached to. Config – an app’s config is everything that is likely to vary between deploys (staging, production, developer environments, etc.).
  • #20 Release management and rollback.
  • #21 Release management and rollback.
  • #45 TPS – the process status reporter; BBS – Bulletin Board System.
    "User-facing" components (these all live in cf-release):
    - Cloud Controller (CC): provides an API for staging and running apps; implements all the object modelling around apps (permissions, buildpack selection, service binding, etc.). Developers interact with the Cloud Controller via the CLI.
    - Doppler/Loggregator Traffic Controller: aggregates logs and streams them to developers.
    - Router: routes incoming network traffic to processes within the CF installation; this includes routing traffic both to developer apps running within Garden containers and to CF components such as CC.
    CC-Bridge components (interact with the Cloud Controller; they serve, primarily, to translate app-specific notions into the generic language of LRPs and Tasks):
    - Stager: receives staging requests from CC; translates these requests into generic Tasks and submits the Tasks to the BBS; the Task instructs the Cell (via the Task actions) to inject a platform-specific binary to perform the actual staging process (see below); sends a response to CC when a Task is completed (successfully or otherwise).
    - CC-Uploader: mediates staging uploads from the Executor to CC, translating the Executor's simple HTTP POST into the complex multipart-form upload CC requires.
    - Nsync: splits its responsibilities between two independent processes. The nsync-listener listens for desired app requests and updates/creates the desired LRPs via the BBS; the nsync-bulker periodically polls CC for all desired apps to ensure the desired state known to Diego is up-to-date.
    - TPS: also splits its responsibilities between two independent processes. The tps-listener provides the CC with information about running LRP instances for "cf apps" and "cf app X" requests; the tps-watcher monitors ActualLRP activity for crashes and reports them to CC.
    Components on the Database VMs (provide Diego's core components and clients a consistent API to the shared state and operations that manage Tasks and LRPs, as well as the data store for that shared state):
    - BBS: provides an RPC-style API over HTTP to both core Diego components (Rep, Auctioneer, Converger) and external clients (Receptor, SSH proxy, CC-Bridge, Route-Emitter); encapsulates access to the backing database and manages data migrations, encoding, and encryption.
    - etcd: Diego's consistent key-value data store.
    Components on the Cell (deal with running and maintaining generic Tasks and LRPs):
    - Rep: represents a Cell and mediates all communication with the BBS by ensuring the set of Tasks and ActualLRPs in the BBS is in sync with the containers actually present on the Cell, and by maintaining the presence of the Cell in the BBS (should the Cell fail catastrophically, the Converger will automatically move the missing instances to other Cells); participates in auctions to accept Tasks/LRPs; runs Tasks/LRPs by asking its in-process Executor to create a container and run generic action recipes in said container.
    - Executor (now a logical process running inside the Rep): doesn't know about the Task vs. LRP distinction; it is primarily responsible for implementing the generic executor actions detailed in the API documentation; streams stdout and stderr to the metron-agent running on the Cell, which forwards them to Loggregator.
    - Garden: provides a platform-independent server/client to manage Garden containers; defines an interface to be implemented by container runners (e.g. garden-linux).
    - Metron: forwards application logs and application/Diego metrics to Doppler.
    Note that there is a specificity gradient across Rep/Executor/Garden: the Rep is concerned with Tasks and LRPs and knows details about their lifecycles; the Executor knows nothing about Tasks/LRPs but merely knows how to manage a collection of containers and run actions in these containers; Garden, in turn, knows nothing about actions and simply provides a concrete implementation of a platform-specific containerization technology that can run arbitrary commands in containers. Only the Rep communicates with the BBS.
    Components on the Brain:
    - Auctioneer: holds auctions for Tasks and ActualLRP instances; auctions are run using the auction package; auction communication goes over HTTP between the Auctioneer and the Cell Reps; maintains a lock in the BBS such that only one Auctioneer may handle auctions at a time.
    - Converger: maintains a lock in the BBS to ensure that only one Converger performs convergence (primarily for performance; convergence should be idempotent); uses the converge methods in the runtime-schema/bbs to ensure eventual consistency and fault tolerance for Tasks and LRPs. When converging LRPs, the Converger identifies which actions need to take place to bring DesiredLRP state and ActualLRP state into accord; two actions are possible: if an instance is missing, a start auction is sent; if an extra instance is identified, a stop message is sent to the Rep on the Cell hosting the instance. In addition, the Converger watches for potentially missed messages: for example, if a Task has been in the PENDING state for too long, it's possible that the request to hold an auction for the Task never made it to the Auctioneer, in which case the Converger resends the auction message. It also periodically sends aggregate metrics about DesiredLRPs, ActualLRPs, and Tasks to Doppler.
    Components on the Access VMs:
    - File-Server: serves static assets used by the various components; in particular, it serves the App Lifecycle binaries (see below).
    - SSH Proxy: brokers connections between SSH clients and SSH servers running inside instance containers.
    Additional (shim-like) components:
    - Route-Emitter: monitors DesiredLRP and ActualLRP state via the BBS; when a change is detected, it emits route registration/unregistration messages to the Router; it also periodically emits the entire routing table to the Router.
    Platform-specific components: Diego is largely platform-agnostic; all platform-specific concerns are delegated to two types of components: the Garden backends and the App Lifecycles.
    - Garden backends: Garden contains a set of interfaces each platform-specific backend must implement. These interfaces contain methods to create/delete containers, apply resource limits to containers, open and attach network ports to containers, copy files into/out of containers, run processes within containers (streaming back stdout and stderr data), annotate containers with arbitrary metadata, and snapshot containers for down-timeless redeploys.
    - Current implementations: Garden-Linux provides a Linux-specific implementation of the Garden interface; Garden-Windows provides a Windows-specific implementation of the Garden interface.