Stop killing kittens and melting ice caps
Run containers on bare metal already!
• Containers are not a new idea, having originated via filesystem
containers with chroot in Seventh Edition Unix
• chroot originated with Bill Joy, but the specifics are blurry; according
to Kirk McKusick, via Poul-Henning Kamp and Robert Watson...
• Seeking to provide a security mechanism, FreeBSD extended
chroot into jails
• To provide workload consolidation, Sun introduced complete
operating system virtualization with zones (née Project Kevlar)
• The (prioritized) design constraints for OS-based virtualization as
originally articulated by zones: Security, Isolation,
Virtualization, Granularity, Transparency
• Not among these: running foreign binaries or emulating other
operating systems
• Despite its advantages in terms of tenancy and performance, OS-based
virtualization didn’t fit the problem ca. the early 2000s: operators
needed to consolidate entire stacks (e.g., Windows)
• Since the 1960s, the preferred approach for operating legacy
stacks unmodiﬁed has been to virtualize the hardware
• A virtual machine is presented upon which each tenant runs an
operating system that they choose (but must also manage)
• Effective for running legacy stacks, but with a clear inefficiency:
there are as many operating systems on a machine as tenants:
• Operating systems are heavy and don’t play well with others with
respect to resources like DRAM, CPU, I/O devices, etc.!
• Still, hardware-level virtualization became the de facto standard in the cloud
Containers at Joyent
• Joyent runs OS containers in the cloud via SmartOS — and we
have run containers in multi-tenant production since ~2006
• Adding support for hardware-based virtualization circa 2011
strengthened our resolve with respect to OS-based virtualization
• OS containers are lightweight and efficient — which is especially
important as services become smaller and more numerous:
overhead and latency become increasingly important!
• We emphasized their operational characteristics — performance,
elasticity, tenancy — and for many years, we were a lone voice...
Containers as PaaS foundation?
• Some saw the power of OS containers as a foundation for up-stack
platforms
• For example, dotCloud — a platform-as-a-service provider — built
their PaaS on OS containers
• Struggling as a PaaS, dotCloud pivoted — and open sourced
their container-based orchestration layer...
• Docker has used the rapid provisioning + shared underlying
filesystem of containers to allow developers to think operationally
• Developers can encode deployment procedures via an image
• Images can be reliably and reproducibly deployed as a container
• Images can be quickly deployed — and re-deployed
• Docker complements the library ethos of microservices
• Docker will do to apt what apt did to tar
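The image model above can be sketched with a minimal Dockerfile (contents hypothetical, not from the talk): each instruction encodes one step of the deployment procedure, and the resulting image deploys reproducibly as a container.

```dockerfile
# Hypothetical example: the deployment procedure is encoded declaratively,
# and the resulting image can be deployed -- and re-deployed -- quickly.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
COPY site/ /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```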
Broader container revolution
• The Docker model has pointed to the future of containers
• Docker’s challenges today are largely operational: network
virtualization, persistence, security, etc.
• Security concerns are not due to Docker per se, but rather to the
architectural limitations of the Linux “container” substrate
• For multi-tenancy, state-of-the-art for Docker containers is to run
in hardware virtual machines as Docker hosts (!!)
• Deploying OS containers via Docker hosts in hardware
virtual machines negates their economic advantage!
• SmartOS has been container-native since its inception — and
running in multi-tenant, internet-facing production for many years
• Can we achieve an ideal world that combines the development
model of Docker with the container-native model of SmartOS?
• This would be the best of all worlds: the agility of Docker coupled with
the production-proven security and on-the-metal performance of
SmartOS
• But there were some obvious obstacles...
Docker + SmartOS: Linux binaries?
• First (obvious) problem: while it has been designed to be cross-
platform, Docker is Linux-centric — and the encyclopedia of
Docker images will likely forever remain Linux binaries
• SmartOS is Unix — but it isn’t Linux…
• Fortunately, Linux itself is really “just” the kernel — which only has
one interface: the system call table
• We resurrected (and finished) a Sun technology for Linux system
call emulation, LX-branded zones, the technical details of which
are beyond the scope of this presentation...
Docker + SmartOS: Provisioning?
• With the binary problem being tackled, focus turned to the
mechanics of integrating Docker with SmartOS provisioning
• Provisioning a SmartOS zone operates via the global zone that
represents the control plane of the machine
• docker is a single binary that functions as both client and server
— and with too much surface area to run in the global zone,
especially for a public cloud
• docker also embeds Go- and Linux-isms that we did not
want in the global zone; we needed to find a different approach...
Docker Remote API
• While docker is a single binary that can run as either the client or the
server, it does not run as both at once…
• docker (the client) communicates with docker (the server) via
the Docker Remote API
• The Docker Remote API is expressive, modern and robust (i.e.
versioned), allowing for docker to communicate with Docker
backends that aren’t docker
• The clear approach was therefore to implement a Docker Remote
API endpoint for SmartDataCenter, our (open source!)
orchestration software for SmartOS
Triton: Docker + SmartOS
• In March, we launched Triton, which combines SmartOS and
SmartDataCenter with our Docker Remote API endpoint
• With Triton, the notion of a Docker host is virtualized: to the
Docker client, the datacenter is a large Docker host
• One never allocates VMs with Triton; all Triton containers run
directly on the metal
• All of the components to Triton are open source: you can
download and install SmartDataCenter and run it yourself
• Triton is currently generally available on the Joyent Public Cloud!
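In practice, a stock Docker client can be pointed at the datacenter-wide endpoint by configuration alone; a hedged sketch (hostname hypothetical), using the standard DOCKER_HOST environment variable:

```shell
# Hypothetical endpoint: the "Docker host" is the whole datacenter,
# not a single machine (or hardware virtual machine).
export DOCKER_HOST=tcp://docker.us-east-1.example.com:2376
export DOCKER_TLS_VERIFY=1
```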
• It is becoming broadly clear that containers are the future of
application development and deployment
• But the up-stack ramifications are entirely unclear — there are
many rival frameworks for service discovery, composition, etc.
• The rival frameworks are all open source:
• Unlikely to be winner-take-all
• Productive mutation is not just possible but highly likely
• Triton takes a deliberately modular approach: the container as
general-purpose foundation, not prescriptive framework
Realizing the container revolution
• The container revolution extends beyond traditional computing —
it changes how we think of computing with respect to other
elements of the stack
• e.g. container-centric object storage allows us to encapsulate
computation as containers that can process data in situ — viz.
Joyent’s (open source!) Manta storage service
• Realizing the full container revolution requires us to break the
many-to-one relationship between containers and VMs!
Future of containers
• For nearly a decade, we have believed that OS-virtualized
containers represent the future of computing — and with the rise
of Docker, this is no longer controversial
• But to achieve the full promise of containers, they must run
directly on-the-metal — multi-tenant security is a constraint!
• The virtual machine is a vestigial abstraction; we must reject
container-based infrastructure that implicitly assumes it
• Triton represents our belief that containers needn’t compromise:
multi-tenant security, operational elasticity and on-the-metal
performance