2. Docker revolution
• Docker has used the rapid provisioning + shared
underlying filesystem of containers to allow developers
to think operationally
• Developers can encode dependencies and deployment
practices into an image
• Images can be layered, allowing for swift development
• Images can be quickly deployed — and re-deployed
• Docker will do to apt what apt did to tar
3. Containers in production?
• Docker’s challenges are largely around production
deployment: security, network virtualization, persistence
• Joyent runs OS containers in the cloud via SmartOS
(our illumos derivative) — and we have run containers in
multi-tenant production since ~2006
• Core SmartOS facilities are container-aware and
optimized: Zones, ZFS, DTrace, Crossbow, SMF, etc.
• SmartOS containers are designed for production: line-
rate network virtualization, multi-tenant security,
production debuggability, etc.
• Could we somehow deploy Docker containers as
SmartOS zones?
4. Docker + SmartOS: Linux binaries?
• First (obvious) problem: while it has been designed to
be cross-platform, Docker is Linux-centric
• While Docker could be ported, the encyclopedia of
Docker images will likely forever remain Linux binaries
• SmartOS is Unix — but it isn’t Linux…
• Fortunately, Linux itself is really “just” the kernel —
which only has one interface: the system call table
• We resurrected (and finished) a Sun technology for
Linux system call emulation, LX-branded zones
• Technical details of our Linux emulation are beyond the
scope of this presentation...
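The claim that the kernel's one interface is the system call table can be made concrete. The sketch below (an illustration, not part of the LX-branded zones implementation) enters the kernel through syscall(2) by raw number, bypassing the usual wrappers; the number 39 for getpid assumes x86_64 Linux, with a portable fallback elsewhere:

```python
import ctypes
import os
import platform

# syscall(2) lets us enter the kernel by number -- the same table-driven
# interface an emulation layer like LX-branded zones must reproduce.
libc = ctypes.CDLL(None, use_errno=True)

SYS_GETPID = 39  # assumption: syscall number for getpid on x86_64 Linux


def raw_getpid():
    """Enter the kernel directly through the system call table."""
    if platform.system() == "Linux" and platform.machine() == "x86_64":
        return libc.syscall(SYS_GETPID)
    return os.getpid()  # fall back to the libc wrapper on other platforms


print(raw_getpid())
```

Whether the caller goes through libc, a language runtime, or a raw trap, the kernel sees the same numbered entry point, which is why emulating the table is sufficient to run Linux binaries.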
7. Docker + SmartOS: Provisioning?
• With the binary problem tackled, focus turned to the
mechanics of integrating Docker with the SmartOS
facilities for provisioning
• Provisioning a SmartOS zone operates via the global
zone that represents the control plane of the machine
• docker is a single binary that functions as both client
and server — and with too much surface area to run in
the global zone, especially for a public cloud
• docker has also embedded Go- and Linux-isms that
we did not want in the global zone; we needed to find a
different approach...
12. Docker Remote API
• While docker is a single binary that can act as either
client or server, a given instance doesn’t do both at once…
• docker (the client) communicates with docker (the
server) via the Docker Remote API
• The Docker Remote API is expressive, modern and
robust (i.e. versioned), allowing for docker to
communicate with Docker backends that aren’t docker
• The clear approach was therefore to implement a
Docker Remote API endpoint for SmartDataCenter
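To give a feel for why this works: the Remote API is plain, versioned HTTP, so any backend that speaks it can serve any Docker client. The sketch below builds a request for the real `/containers/json` endpoint and parses a response of the shape the API returns; the sample body is illustrative, not captured from a daemon:

```python
import json

API_VERSION = "v1.24"  # the Remote API is versioned in the URL path


def list_containers_request(show_all=False):
    """Build the raw HTTP request a Docker client sends to list containers."""
    path = f"/{API_VERSION}/containers/json" + ("?all=1" if show_all else "")
    return f"GET {path} HTTP/1.1\r\nHost: docker\r\n\r\n"


# An illustrative response body in the API's shape (not real daemon output):
sample_body = '[{"Id": "8dfafdbc3a40", "Image": "ubuntu:latest", "State": "running"}]'
containers = json.loads(sample_body)
print([c["Image"] for c in containers])
```

Nothing in the request or response assumes what is on the other end of the socket, which is precisely what allows a non-docker backend to implement it.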
13. Aside: SmartDataCenter
• Orchestration software for SmartOS-based clouds
• Unlike other cloud stacks, not designed to run arbitrary
hypervisors, sell legacy hardware or get 160 companies
to agree on something
• SmartDataCenter is designed to leverage the SmartOS
differentiators: ZFS, DTrace and (esp.) zones
• Runs both the Joyent Public Cloud and business-critical
on-premises clouds at well-known brands
• Born proprietary — but made entirely open source on
November 6, 2014: http://github.com/joyent/sdc
16. Docker Remote API for SmartDataCenter
• Implementing an SDC-wide endpoint for the Docker
remote API allows us to build in terms of our established
core services: UFDS, CNAPI, VMAPI, Image API, etc.
• Has the welcome side effect of virtualizing the notion of
the Docker host machine: Docker containers can be placed
anywhere within the data center
• From a developer perspective, one less thing to manage
• From an operations perspective, allows for a flexible
layer of management and control: Docker API endpoints
become a potential administrative nexus
• As such, virtualizing the Docker host is somewhat
analogous to the way ZFS virtualized the filesystem...
17. Docker Remote API: Challenges
• Some Docker constructs have (implicitly) encoded co-
locality of Docker containers on a physical machine
• Some of these constructs (e.g., --volumes-from) we
discourage but accommodate by co-scheduling
• Others (e.g., host directory-based volumes) we have
implemented via Manta, our (open source!) distributed
object storage service
• Moving forward, we are working with Docker to help
assure that the Docker Remote API doesn’t create new
implicit dependencies on physical locality
18. Docker Remote API: Networking
• Networking is an open area with respect to Docker
• We have taken a VXLAN-/kernel-based (and ARP-
inspired) approach to minimize latency, deliver line
bandwidth and operate at scale
• Our approach has the side effect of giving every
container a full, isolated, virtualized IP stack
• We use our in-kernel firewall support to impose the
limitations implied by Docker’s networking model
• We are working with Docker to get the Remote API to be
flexible enough to accommodate constructs like ours
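For a sense of what a VXLAN-based approach involves: VXLAN (RFC 7348) wraps each Ethernet frame in UDP with an 8-byte header carrying a 24-bit virtual network identifier (VNI), which is what lets many tenants share the physical network while remaining isolated. A minimal header encoder/decoder (an illustration of the wire format, not Joyent's implementation):

```python
import struct

VXLAN_FLAGS = 0x08000000  # "I" flag set: a valid VNI is present (RFC 7348)


def vxlan_encap(vni, inner_frame):
    """Prefix an Ethernet frame with the 8-byte VXLAN header."""
    # Header as two big-endian 32-bit words: flags, then VNI in bits 8..31.
    header = struct.pack("!II", VXLAN_FLAGS, vni << 8)
    return header + inner_frame


def vxlan_decap(packet):
    """Return (vni, inner_frame); the I flag must be set."""
    flags, vni_field = struct.unpack("!II", packet[:8])
    assert flags & VXLAN_FLAGS, "VNI-present flag not set"
    return vni_field >> 8, packet[8:]


vni, frame = vxlan_decap(vxlan_encap(123456, b"\x00" * 14))
print(vni)  # → 123456
```

Because the VNI travels in every encapsulated packet, membership in a virtual network can be decided in the kernel on the forwarding path, which is what makes line-rate, per-tenant isolation feasible.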
19. Joyent Triton: SmartOS + SDC + Docker
• Our Docker engine for SDC, sdc-docker, implements
the endpoints for the Docker Remote API
• It’s open source: http://github.com/joyent/sdc-docker;
you can install SDC (either on hardware or on VMware)
and check it out for yourself!
• We are explicit about our divergences from Docker:
https://github.com/joyent/sdc-docker/blob/master/docs/divergence.md
• We have stood this up in early access production under
the Joyent Triton banner
• A demo is worth a thousand slides...
20. The Remote API: Docker’s killer feature
• Triton’s radically different approach is a vivid
demonstration of the power of the Docker Remote API
• Triton’s early adopters have been particularly interested in
the virtualization of the Docker host made possible by
the Docker Remote API
• It’s important that the Docker Remote API not imply
physical co-locality — and be flexible enough to
accommodate radically different implementations
• It’s critically important that upstack software not depend
on the physical co-locality of Docker containers
• By unlocking downstack innovation, we believe
Docker’s Remote API to be its killer feature!
21. Thank you!
• @joshwilsdon, @trentmick, @cachafla, @orlandov and
Todd Whiteman for their work on sdc-docker
• Jerry Jelinek, @pfmooney, @jmclulow and @jperkin for
their work on LX-branded zones
• @rmustacc, @wayfaringrob, @fredfkuo and @notmatt
for their work on SDC overlay networking
• The countless engineers who have worked on or with
illumos because they believed in OS-based virtualization