5. Enterprise software lifecycle
Enterprise-grade software development lifecycles are often complicated:
  Team isolation (Dev vs. Ops)
  IT architecture complexity
  Pipeline length: Q/A, UAT, integration, performance, security
Software factories are already in place and up and running
They may need to be modified / adapted / simplified (?) to deal with Docker
7. Docker promises
Docker intends to renew the Dev / Ops relationship by providing portable, ready-to-use application containers:
  Docker build recipes (Dockerfiles)
  Docker registries (public & private)
Developer: build once, run anywhere
Ops: configure once, run anything
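A Docker build recipe is a plain-text Dockerfile; a minimal, hypothetical sketch (base image, file names and port are illustrative, not from the deck):

```dockerfile
# Hypothetical recipe: everything the app needs is baked into the image,
# so the same image runs unchanged on any Docker host.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
ADD app.py /opt/app/app.py
EXPOSE 8000
CMD ["python", "/opt/app/app.py"]
```

Building it once (docker build) yields an image Ops can run as-is, which is the «build once, run anywhere» promise.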
12. Docker on the dev workstation side
Great way to ease workstation deployment
  Vagrant + Dockerfiles => use the dev OS you like
[Diagram: the developer workstation runs a VM that pulls Docker images from an images repository; inside the VM, the IDE and the Dockerfile / code produce the container with the app]
Great way to share
  Internet registries (index.docker.io)
  Ops can produce Docker images (if it makes sense) on an enterprise registry
  Other devs can help newbies bootstrap
Great way to be iso-prod as soon as possible (if you consider the container as the standard delivery pattern)
Tip: have a look at fig (https://github.com/orchardup/fig)
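Since the slide points at fig: a minimal fig.yml sketch (service names and images are illustrative) that brings up an app container plus a database with a single `fig up`:

```yaml
# Hypothetical two-service stack: the app is built from the local
# Dockerfile and linked to a stock postgres container.
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres
```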
13. Docker on the DevOps workstation
Use cases:
  Integration tests
  Deployment tests
  COTS
  SaaS development (test the full stack)
LXC + Docker nesting:
  Implement servers + multiple containers
  DinD (Docker in Docker)
15. Two visions
Heroku-like PaaS:
  Simple application designs
  Very standardized (buildpacks)
  Fully automated
  Git push + CLI
  Infrastructure seen as a black box
Free design:
  All designs allowed
  All kinds of apps (even legacy)
  Maybe automated
  Nexus-like artifact storage
  Topology management required
  Config-based (Puppet, Chef…)
  Deployment orchestration (Capistrano?)
18. Docker PaaS with a software factory
[Diagram: within the information system, the developer workstation (IDE) pushes code to a corporate Git repo; on a Git tag, Jenkins scans / pulls the code, builds the apps and runs unit tests; Ops deploy the result across the Dev, UAT and Prod PaaS environments]
20. 3 implementation examples
Dokku:
  Single-host based
  Bash powered
  Some nice plugins (DB, NoSQL, caches)
  Easy to set up / test
  Lightweight
Flynn:
  Multi-host, multi-tenant
  Relies on a native distributed service registry (etcd)
  Layer based
  Work in progress
Deis:
  Multi-host
  Relies heavily on Chef server (host and app management)
  Enables easy host enrolment (using Chef)
21. Heroku-Like Docker implementation conclusion
Docker is not visible from outside the PaaS solution
Artifacts (Docker images) must be rebuilt for each environment:
  Slow
  Can't take full advantage of local Docker registries
  Breaks the «build once, run everywhere» good practice
Heroku-compliant, with constraints:
  Procfile
  Auto-detected code => buildpacks
  Only patterns implemented on the PaaS side can be deployed
Engineering required on the «Ops» side:
  Data persistence (on-disk or in-memory): (No)SQL, Redis, Memcached…
  Integration components (ESB, MQ, IAM)
  H-A components / LB (proxies, WAFs, WAMs)
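For reference, a Procfile is a one-process-type-per-line text file telling the PaaS what to run; a hypothetical example (the commands are illustrative):

```
web: python app.py --port $PORT
worker: python worker.py
```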
23. Take (even more) advantage of Docker
The Docker registry is a major feature of Docker
Docker registries can (partially) replace the former Nexus-like artifact repository in your company
Don't make any assumption about Docker image content:
  Code?
  Integration images (proxies, reverse-proxies…)
  Appliances?
  COTS?
Allow any kind of topology (but get ready to pay the price for it)
24. Docker registry
A local, Git/Nexus-like private Docker image registry
RESTful API powered
Some enterprise-grade features are still missing:
  Nexus-like proxy feature (to locally cache public images)
  Out-of-the-box H-A
  Pretty web GUI for users (search images, read image-associated release notes)
  Pretty web GUI for admins (manage ACLs, images)
25. Build container ASAP
Ops provide images
[Diagram: Ops pull base images from a public repository and push / tag them in the information system's private repository; on the developer workstation, the IDE and the Dockerfile / code (versioned in the corporate Git repo) build the container with the app, which is pushed and tagged; Jenkins tests the containers, which are then run in Dev, UAT and Prod]
26. Build container ASAP - Pros / cons
KISS
Build the artifact only once and run it everywhere
No control over the produced artifact (it can even be tweaked manually)
How to ensure the Git and Docker registry versions are managed consistently together?
How to handle Docker image deployment?
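One hedged way to keep Git and the Docker registry consistent is to tag each image with the Git commit it was built from; a sketch (the registry host, image name and hard-coded commit are illustrative, and the docker / git commands are left as comments so the snippet stands alone):

```shell
# Tag convention sketch: <registry>/<app>:<git-sha> ties every image
# in the registry back to an exact commit in Git.
SHA=3f2a9c1                         # in a real build: SHA=$(git rev-parse --short HEAD)
IMAGE="registry.example.com/myapp:${SHA}"
echo "would build and push ${IMAGE}"
# docker build -t "$IMAGE" .
# docker push "$IMAGE"
```

With this convention, promoting an image between environments never loses track of the source revision that produced it.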
27. SF Rebuild pattern
[Diagram: the software factory (Jenkins) scans / pulls the Dockerfile and code from the corporate Git repo, runs unit tests, builds the image (pulling base images from the public and private repositories), then pushes and tags it in a private repository; containers are promoted through Dev, automatic integration tests, UAT and Prod]
28. SF Rebuild pattern - pros / cons
Looks like the current SF, makes sense
Git is the real (code) reference from which artifacts are built
Not that much control over the produced artifact (from an Ops perspective)
How to handle Docker image deployment?
30. Take Away
There is no single Docker enterprise integration pattern
Play with several Git remotes, Docker registries and environments
Adapt to your organization and processes
Choose between code (Git) or Docker image (registry) deployment (for now)
Some integration / topology management tools are still missing
Maturity is coming, but not yet totally production-proof
Expect hybrid implementations to meet all expectations:
  Heroku-like apps (buildpacks): simple
  Dockerfile: free app design, flexible integration, allows some quality checks
  Image URL: fast(er) and flexible integration
31. Take Away
Some projects to watch:
CI side
Drone (github.com/drone/drone)
Deploy side (PaaS style)
Deis (deis.io)
Dokku (github.com/progrium/dokku)
Flynn (flynn.io)
Deploy side (Open style)
CTL-C (ctl-c.io)
Shipyard (github.com/shipyard/shipyard)
35. Container deployment
Container deployment is about:
Orchestration:
  App specific (layers, stacks, topology)
  Can be tricky (rolling style, zero-downtime deployment)
  Good frameworks exist (Capistrano)
  Must be versioned along with the app version
Variabilization: handle environment specifics (config management):
  Security (passwords, container zone location)
  URLs, links between containers
  Sizing
  H-A (enabled or not)
  Real / mocked / pass-through components
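The «rolling style» deployment mentioned above can be sketched as a loop over hosts (host names, image name and the commented-out docker / ssh commands are all illustrative assumptions, not a prescribed tool):

```shell
# Rolling deployment sketch: update one host at a time so the
# service as a whole stays up (zero-downtime style).
IMAGE="registry.example.com/myapp:1.2.0"
for HOST in web1 web2 web3; do
  echo "deploying ${IMAGE} to ${HOST}"
  # ssh "$HOST" docker pull "$IMAGE"
  # ssh "$HOST" docker stop myapp
  # ssh "$HOST" docker run -d --name myapp "$IMAGE"
done
```

A real orchestrator would also health-check each host before moving to the next, which is exactly the kind of logic that must be versioned with the app.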
36. Config Management
Several main ways:
Using Puppet / Chef / Ansible / Salt (within a Docker container) can make sense:
  Both at build and startup time
  Watch out for complexity: agent installation, server / master dependencies (enrolment, role assignment)
  Split cookbooks between setup and run time
Most standard way: environment variables (Heroku style):
  Must tweak startup scripts
  Some applications are not very envvar-aware
Use a service directory (ZooKeeper, etcd):
  Must tweak the startup script, in a two-phase approach:
    Query the directory to find dependencies
    Publish the offered services once started
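The «env variables (Heroku style)» approach usually means a small startup wrapper in the image that maps environment variables to whatever the app expects; a hedged sketch (variable names and the commented exec line are illustrative):

```shell
# Startup wrapper sketch: read configuration from the environment,
# falling back to defaults suitable for a local dev container.
: "${DB_HOST:=localhost}"
: "${DB_PORT:=5432}"
DB_URL="postgres://${DB_HOST}:${DB_PORT}/app"
echo "starting app with DB_URL=${DB_URL}"
# exec /opt/app/start --db-url "$DB_URL"   # hand over to the real process
```

The same wrapper is also the natural place for the two-phase service-directory logic: query the directory before the exec, publish the offered service right after.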
38. Shipyard
Simple agent / server web router + console to aggregate a multi-host view
Can use central or local registries to get images, and even Dockerfile uploads
Provides global incoming flow routing (hipache, a Redis-based reverse proxy): simply point a *.acme.com DNS record to the incoming front router
Can start new containers with all needed parameters:
  Env variables
  Links between containers
  LXC limitations (CPU shares, memory)
Very young:
  Some stability issues
  ACL / security management missing
  H-A?
API available (not tested yet)
Available as a Docker container (of course!)