A new movement is taking the cloud by storm: Docker is evolving the way organizations deploy services so that they can operate more efficiently at scale, both in the cloud and on bare metal. In the same way shipping containers revolutionized the cargo industry, cheap, zero-penalty Linux Containers (LXC) are like shrink-wrapped VMs without the fat. What’s not obvious, however, is how to roll your own Docker deployments and which tools you’ll need to leverage along the way.
This discussion will cover:
• Principles of Immutable Infrastructure
• Docker Basics
• Docker for Dev & QA
• Docker in Production
• Business Drivers
• Answering the Question: Is Docker Ready for Prime Time?
3. Shipping Software is Difficult
Different Stages (test, QA, prod, etc.)
More Dependencies = More Problems
Low Density with Poor Utilization
Repeatable Deployments
Lifecycle Management
Version Conflicts
A/B Testing
13. Configuration Management (Hell)
Complicated by Design, Fragile
Declarative ~ “Just Trust Me”
Not Easy to Ensure Consistency
No Guarantee QA == Production
PAINFULLY SLOW, EXPENSIVE
14. Immutable Infrastructure (Heaven)
Build Once, Run Anywhere
Imperative (WYSIWYG)
vs Declarative (TRUST ME)
3 Layers
Persistent ~ data that changes, like /var/lib/mysql
Immutable ~ should never change, e.g. /usr/bin/mysql
Identity ~ configuration, like /etc/mysql/my.cnf
Easier Rollbacks, Faster Deploys
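Those three layers map naturally onto Docker primitives; a minimal sketch, where the base image, paths, and config file are illustrative:

```dockerfile
# Immutable layer: everything baked into the image never changes
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install mysql-server

# Identity layer: configuration that varies per environment
ADD ./my.cnf /etc/mysql/my.cnf

# Persistent layer: mutable data lives outside the image
VOLUME /var/lib/mysql
```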
15. Virtual Machines?
Necessary but Expensive to Operate
Microservices on Individual VMs Impractical
Too Rigid / One Size Does Not Fit All
Portability Issues with Hypervisors & Clouds
Slow to Boot & Clunky to Manage
Resource Hogs / Redundant Services
20. What if I told you….
There was a way to magically run
any application*, on
any distribution, from
any vendor, using
any cloud and
it would just work? =)
(*dependent on Linux kernel & CPU architecture)
30. Linux Containers
A way to securely run processes
Looks like a <Shrink Wrapped> VM
Share the Same Kernel, which is usually OK
Penalty Free Execution
Easy to Ship, Instant Boot
Throttle CPU & Memory, I/O
32. Docker in a Nutshell
An abstraction for managing LXC (libcontainer)
Docker Daemon Runs / Connects Containers
Dockerfile DSL to package apps
Repositories to ship containers
Run anywhere you have Linux
Chroot on steroids
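The whole loop fits in a handful of commands; a sketch, assuming a running Docker daemon, with made-up image and container names:

```shell
docker build -t our-repo/myapp .           # bake an image from a Dockerfile
docker run -d --name myapp our-repo/myapp  # start a container from the image
docker ps                                  # list running containers
docker exec -it myapp /bin/sh              # “chroot on steroids”: a shell inside
```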
34. Docker Hub
Storage for Docker Images
Maintains Lineage / All Versions
Public, Private & Self-Hosted Repositories
Like GitHub, but for Docker Images
36. Docker Analogs to Java Ecosystem
JVM is like the Linux Kernel + LXC
Jar files are like Docker Images
Maven is like Docker Client
pom.xml is like the Dockerfile
Artifactory is like the Docker Hub (Repository)
Tomcat is like the Docker Daemon
37. The Dockerfile
FROM ubuntu:14.04
MAINTAINER erik@cloudposse.com
ENV MYSQL_USER app
WORKDIR /
RUN apt-get update && apt-get -y install mysql-server
ADD ./run.sh /run.sh
VOLUME /var/lib/mysql/
EXPOSE 3306
USER nobody
ENTRYPOINT ["/bin/sh", "-c"]
CMD /run.sh
38. Docker Command Line
export DOCKER_HOST=tcp://192.168.59.103:2376
docker build --tag our-repo/mysql -f Dockerfile .
docker push our-repo/mysql
docker pull our-repo/mysql
docker stats my-container
docker logs my-container
docker kill my-container
# a total of ~40 commands
39. Docker Command Line
export DOCKER_HOST=tcp://192.168.59.103:2376
docker run --name=mysql --restart=always \
  --memory=512m --cpu-shares=256 \
  --blkio-weight=256 --memory-swappiness=20 \
  --env="INNODB_CACHE_SIZE=256m" \
  --dns-search=qa.domain.local --dns=1.2.3.4 \
  --volumes-from=mysql-data-vol \
  our-repo/mysql:latest \
  mysqld_safe
41. Development Possibilities
Dozens of Containers on a Laptop
“Docker Compose” Environments
Vagrant Docker Provider
Run Locally with Boot2Docker or Kitematic
Bake Image, Ship it to QA
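A “Docker Compose” environment is just a YAML file; a minimal sketch in the original (v1, Fig-era) format, with made-up service names:

```yaml
web:
  build: .
  ports:
    - "8080:80"
  links:
    - db
db:
  image: mysql:5.6
  volumes:
    - /var/lib/mysql
```

One `docker-compose up` brings the whole environment to life on a laptop.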
44. Production Possibilities
Run EXACTLY same image from QA
Rollback Easily / Assassinate
Reduce Errors from Inconsistencies
Isolate Failures of Microservices
45. Business Drivers
Maximize CapEx Investment / Higher Utilization
Reduce OpEx thru Increased Density
Move Faster with Reduced Risk
Conduct More A/B Tests
47. Production Ready? YES
Containers are definitely stable
Docker v1.5+ is stable
Tools exist that tie it all together
“Containerships” = GCE, Triton, CoreOS, etc.
Many large companies run Docker
49. But are you ready?
Do your apps run on Linux?
Use the 12-Factor methodology?
Know how to leverage cloud orchestration?
Have an expert devops team handy?
Excited to retool everything (again)?
Have Operational Competency?
52. Production Requirements
A Purpose-Built Containership
Service Composition, Orchestration
Private Image Repository
Zero Downtime Deployments & Rollbacks
Cross-Container Networking
Log Management, Monitoring & Alerting
Data Persistence & Backups
Version Pinning
54. Docker Gotchas
You still need to be an expert sysadmin
Requires some configuration management
No Built-in Auto-scaling
Docker Hub incomplete
Security Concerns / Public Images / Lineage
55. The Future
Docker Swarm
Kubernetes, Mesos, Mesosphere
PaaSification like Deis, Flynn, Joyent Triton
Massive Vendor Adoption
App Container / Open Standards
56. Docker - The Real Deal
Reduce Configuration Management
Reliably Ship Less Data, Faster
Run Services with Greater Isolation
True Cross-cloud Portability
57. Don’t stop here...
Cloud Posse provides (kickass) advisory and implementation assistance for medium to large-scale cloud deployments.
Erik Osterman
erik@cloudposse.com
(310) 496-6556
Editor's Notes
Alright everyone, it’s time to get started.
All questions will be answered at the end.
I’ll also be sharing a link to the final slides
My name is Erik Osterman and today it’s my EXTREME pleasure to talk about Docker
It is something that I always dreamed of having, but seemed always out of reach
My background is in software development, principally web-based stuff and moved into cloud architecture out of necessity
Most recently I was the director of cloud architecture for CBS Interactive
Prior to that, I advised lots of startups
So what’s my objective? To convince you that Docker is the evolution of cloud
It’s not a revolution
It’s the logical next step you need to take.
Why? Because it will improve your operations while reducing risk.
What I’m about to cover is complicated. Don’t worry. I’ve lots of pretty pictures.
Also, since this is a Java group, there are some great analogs which will help you better understand all the moving pieces.
So what’s the problem?
It’s that software companies are still struggling with shipping software
Shipping software is difficult
There are a lot of moving pieces
Many pieces are outside of your control.
Most solutions have been symptomatic / Not natural selection / Job preservation
We’ve been doing the same thing for decades => It’s called configuration management.
There are things we don’t do because it’s either too tedious or risky, but we should….
A/B testing
Continuous Integration
Version pinning
Staggered deployments
If we could deploy faster, roll back easier -- shipping software wouldn’t be so bad.
Today the most common prescription for scaling a website is a microservices architecture like this one
A microservices architecture is one where you break apart your application into individual components that can be individually scaled as necessary. That means both vertically as in throwing bigger badder machines at the problem or horizontally which is to add more machines and split up the computation.
Here’s what WordPress might look like in a microservices architecture
Chances are you run more than just WordPress.
You run all of this and then some. I mean, this is what you were already running a few years ago.
How do ya get all of this to work together?
There are dependencies
There are upgrades
One version breaks another, so pin versions. Deploy the software. And then realize you didn’t want Cassandra after all because elastic search has more of a je ne sais quoi
It’s going to drive you insane like it did to me.
On top of that, we have more flavors of cloud than ever.
And you probably have some diehard bare metal fans in your organizations holding onto bare servers for dear life
It shouldn’t really matter that much. Right?
You want compute capacity.
You need some place to run your software.
Let’s focus on delivering that one way or another.
Because what we have today, is the matrix from hell.
With every new software component we add,
the system’s complexity is multiplied by every place you’ll need to run it.
Until recently, there haven’t been any novel solutions.
In fact, it got so bad I quit my job
I never wanted to work with cloud again;
living on beach sounded pretty good to me
So I went traveling for a year around the world; I saw 14+ countries; 30+ cities. It was awesome
It was because cloud computing started to look more like a Rube Goldberg machine than I would have liked
Here’s one designed to wipe the sweat from your face when Amazon crashes.
They all solve the job, but does the job need to be solved at all?
tweaking configuration file snippets
ensuring packages are installed
shipping the kitchen sink
Heck, maybe shipping the entire kitchen along with the Chef
They are called configuration management tools
They are the traditional way software has been deployed forever.
But maybe the configuration of software isn’t the problem? Maybe it is the OS?
If we can solve the problem of delivering bundled “services”, things get a lot simpler
Case in point.
This is what a typical execution of configuration management software looks like.
This happens to be puppet.
This is not a criticism of Puppets.
I love Puppets and muppets.
But they belong on stage.
The point is something so complex is fragile by nature.
We should cut the strings.
We want something antifragile, to quote Nassim Taleb
The declarative nature of Configuration Management is not bulletproof
Moreover, even when it works well enough, it’s painfully slow.
Imagine all the wasted compute cycles spent reevaluating configurations.
Why go to hell and back to ship software?
We want to go to Heaven.
That’s Immutable Infrastructure.
Build it once and be done with it;
There’s no incremental patching, so there’s less risk
It’s simple
2 actions: deploy/destroy
Any time you want to change something, you bake a new image.
Yes, that’s slow if you only have one server. But if you only have one server, then EVERYTHING I am about to talk about is probably overkill. If you have dozens, shipping a golden image is DEFINITELY faster.
The difference: imperative vs declarative;
define the process and not the outcome
Not a new concept, but the tools until recently hadn’t caught up
The other dilemma we have is related to Virtual Machines
The realization that we could emulate servers with a Hypervisor was BRILLIANT
They got us to where we are today. Kind of like training wheels. They were a necessary step of evolution.
But VMs are boxy.
Sure, not as bad as the bare metal, but they aren’t exactly vacuum sealed.
Suppose you have 10 VMs on a server.
Then imagine all the redundant processes like sshd, syslog, crond, and a dozen others taking up precious resources without much value
Redundant kernels, filesystems, page caches, etc.
Each machine image is huge, usually a couple of gigs.
We still depend on Configuration Management because VMs aren’t exactly “penalty free” to instantiate
They take minutes to boot and are billed by the hour
I think milliseconds are a more appropriate unit of measurement.
There’s no reason for machines to linger if you don’t need them. For example, a crontab server should only run for the few seconds it takes to run the job and then exit. Free up the resources for another process.
But we don’t do this in reality because it’s too expensive.
Come to think of it, VMs are a lot like trucks.
Both have payloads.
Compared to trains and cargo ships, trucks are WAY more expensive to operate.
You shouldn’t need an engine, a chassis, gas, or a driver for every container
So you shouldn’t need a VM for every service.
You can fit a lot of containers on a ship, but the 405, my friend, is at capacity
I think you get it.
That’s why modern day cloud has come to this.
It’s a bunch of heavy machinery on even heavier machines
All we really care about is the payload.
If this was an MTA bus, we could all agree it’s absurd.
So can’t there be a better way?
Well thank god that someone was hard at work, while I was sipping a piña colada. True story.
You see… what if I told you there was a way…
and you get this - along with a badass architecture & design that your application begs for.
There’s a movement happening that might change your opinion of what’s possible.
It was all inspired by this - the shipping container.
Back in the ‘50’s they created an ISO Standard for Containers.
These are set measurements for how the containers are built,
Everything from how they are opened, locked, stacked, loaded.
Today there are a few standards, but principles remain the same.
If you need to move something, make it fit inside one of these containers. And you’re good to go.
Combined with railways, highways and ships, containers are moved all around the world
Trucks are just for the “last mile” of delivery.
Unfortunately, some companies haven’t yet figured that out.
They also invented orchestration.
Cranes like these load the containers on to the cargo ships and distribute the load
-----------------------
(fun fact: Some say these massive cranes in the Port of Oakland were the inspiration behind the Walkers aka All Terrain Armored Transport (AT-AT) in Star Wars, but Lucas denies it; I beg to differ)
MASSIVE cargo ships take thousands of containers at a time across open oceans
Shipping companies lease these containers to you along with space on ships
They work around the clock and reach every part of the world.
It makes for a very efficient process.
You see it’s been solved in other industries.
Let’s try to ship software like IKEA ships furniture.
We’ll call it - The IKEA pattern.
If it fits, it ships.
So what is the secret sauce? Hire me to find out ;P
It’s Linux Containers or LXC for short
They are a way to ship pre-bundled Linux operating systems that operate in isolation just like VMs
There’s something called the Duck Test… you might have heard of it.
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
Well, containers quack like VMs
They provide all the essential benefits of VMs but without the overhead.
Containers are elastic or as I like to say shrink wrapped.
They take only as many resources as the underlying processes.
No pre-allocation necessary. One size fits all.
Best of all, you can start them up in milliseconds
You can fire up a container for a cron job and be done with it
You can still throttle CPU, cap memory usage, limit I/O both network and disk, and much, much more.
VMs and containers are similar but different.
Technically speaking, they are entirely unrelated technologies
But effectively, they are used to accomplish much of the same things providing different assurances.
Think of a light switch. There are many kinds. Some are simple circuits. Others are software defined. Most of you live your lives perfectly happy not knowing exactly how your lightswitch works.
It’s not complicated.
Containers are designed to share as much as possible while maintaining reasonable isolation for most applications
e.g. linux kernel, linux page caches and entropy pools
The way I like to think about it, is that Docker containers virtualize the Linux operating system.
VMs on the other hand are designed to share as little as possible, just the bare metal.
therefore traditional VMs virtualize the hardware, which is quite a feat.
Because containers share as much as possible, they take fewer resources.
It’s that simple.
Containers are not designed to be a universal drop-in replacement for every single use-case, but they certainly fit the bill for most companies.
Because containers share the same machine, kernel exploits are pretty bad. I don’t have any consolation there.
But consider this - since containers allow you to isolate more of your runtime environment than you would have otherwise done if running virtual machines or bare metal, in practice your overall security posture will be strengthened.
A little known fact
containers have been around since 2008
Google wrote the initial code for LXC to run their internal stuff and then gave it to Linus (Thanks!)
The problem is LXC by itself is too complicated and the implementation varies by Linux distributions.
So that’s why it’s taken so long for containers to become a mainstream technology.
The reason it’s front-page news is Docker.
Docker created the necessary abstractions to bring LXC to the masses.
Essentially it combines (3) things to create app Isolation
chroot, cgroups & namespaces
then they created a public repo system and it took off like wildfire.
Inside a container, processes only see other processes in the container. The first process is PID 1.
Containers only have access to their own filesystem. If you want to share files or directories, across containers, you mount them.
Processes are jailed. They can’t break out or read the memory of other processes.
Inside a container it feels a lot like a VM.
Here’s a kicker: inside a container, by default there’s not even a single process running, not even init. Best practice is to run only one process per container.
The docker daemon. It’s a single process that typically runs once per machine or VM (not per container)
The Docker daemon is like the container hypervisor
but not really a hypervisor since the Kernel does most of the heavy lifting
libcontainer is responsible for the interaction with LXC; it’s kind of like libvirt for cloud
Dockerfiles are like GNU Makefiles and define how the image should be built.
They are super simple ~ max 30 seconds to learn.
The product is a fully baked image
Baked images are like AMIs, VMDKs, or JAR files
They are based on AUFS, a layered filesystem.
If you’re familiar with journaling, it’s a lot like that.
Every change, results in a new layer.
Shipping a new image simply involves shipping the new layers.
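You can see those layers for yourself; a sketch, assuming a running Docker daemon (the image name is illustrative):

```shell
docker history our-repo/mysql  # one row per layer, newest first
docker pull our-repo/mysql     # fetches only the layers you don’t already have
```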
Containers are running images. Think of them like instances.
You can attach to them much like you attach to a virtual machine.
Inside containers, you typically don’t run syslog, sshd, crond, chef agent, etc. That would be wasteful.
Repositories are not a new concept. They work as you can imagine. There’s even a public one called Docker Hub. Think of it like “GitHub for operating systems”
The docker client uses the API to communicate. It’s a simple RESTful protocol.
All communication is done over encrypted SSL sockets to control the docker daemon
And the docker client can be run locally on your laptop just as easily as on a server
Docker Hub is like GitHub for “operating systems”.
It’s VERY popular.
You can probably find an example of every opensource application somewhere on Docker Hub.
what’s cool about all of this is you can get pre-bundled images straight from the vendors
There are A LOT of repositories
And A LOT of people downloading from them.
But there’s a caveat: Most images are not “official” or even audited. So proceed with caution.
Technically speaking, Containers, VMs, JVMs are in no way similar.
That is what they virtualize and how they accomplish it varies drastically.
But practically speaking, the way they are used is not that different.
In fact, Docker accomplishes many of the same goals that Java delivers on without limiting us to a particular language.
The JVM is called a virtual machine because it defines an abstract virtual CPU complete with registers and a stack.
It provides a sort of guarantee that if you feed it java byte code, it will execute that code anywhere you have a JVM.
Well, Docker is to Linux what the JVM is to Java byte code. That is to say, if you have an application that executes under the Linux kernel, Docker lets you do that irrespective of the Linux distribution. Of course, the machine architecture needs to be the same since, after all, it’s not a virtual machine. It’s a virtual operating system.
Java gives you a convenient way of moving code around in “jar files” along with assets. In Docker we have images. They do the same thing, but they move an OS.
Many of you are probably familiar with Maven. Maven is used to download and assemble all the dependencies for a Java project. You can think of the Dockerfile as your pom.xml, only it’s not XML. Thank god.
Now to transport your Java apps and dependencies, you have a distribution layer provided by Artifactory. This is what the Docker Repository does for Docker Images. There’s even a public one called Docker Hub.
Once you’ve built your app, deployed to Artifactory, you’ll want to run it somewhere like under Tomcat if it’s a web app. Well, that’s what the Docker Daemon is good for. It executes your container.
I said the Dockerfile is simple. I meant it. Here’s what a Dockerfile looks like.
If you’ve ever written a bash script, you’ll learn how to write a Dockerfile in about 30 seconds.
It’s that easy.
You have several keywords like FROM, MAINTAINER, ENV, ADD, RUN (&&), CMD, USER, WORKDIR
The docker command line tool is intuitive for any Linux admin
No crazy arcane syntax to worry about
Here we have an example of a new image being generated and pushed to the repository
It’s sort’a like “git commit” followed by a “git push origin master”
The magic is that you can change the DOCKER_HOST environment variable
It can be your local machine or a remote Docker cluster of a thousand nodes
So now let’s entertain a new possibility
There are runtime environments to run Docker on your laptop
For OSX you have 2 easy options: Boot2Docker or Kitematic
They are trivial to install and take zero configuration
You can use Docker Compose (Fig) which has a simple YAML configuration
it describes how to run and link your containers
Or if you prefer, use Vagrant with the Docker provider, if that’s easier
The magic is that you can run dozens of containers on an average Laptop
That’s not possible with traditional VMs!
Here’s a quick glimpse of Kitematic. Incidentally, Docker acquired the company earlier this year.
Personally, I use Boot2Docker (as do most I know) because I live on the command line. I don’t care for a GUI.
Developers can ship the code exactly as they had developed it to QA
QA can then test that code to see if it’s kosher and ship it to production.
Containers are cheap, so take advantage of it.
Let a few canaries loose in production to see if they fly as expected.
If they don’t, just shoot ‘em down with “docker kill”
In production, you can have the peace of mind of knowing you’re running exactly the same code the developer tested and QA verified.
You’re reducing risk because containers are Imperative. Exactly what you defined.
To automate rollouts, just start new containers pinned at the right version, and leave the old ones lying around “just in case”
The way you do a rollback is easy too. Kill the problematic containers and go back to your stalwarts.
The more containers and microservices you leverage, the more isolated your failures.
Get this: Gilt.
they treat every page of their website as a standalone application.
That’s SERIOUS isolation
But it means that they can change any part of the website without a full blown rollout.
That’s cool.
It’s taking microservices to the extreme
Use them to their maximum advantage.
It’s a new way of thinking.
You cannot do that with VMs. It would never be cost efficient.
Because containers are so cheap, run more A/B tests
In fact, you can run those tests with totally different dependencies.
Test if certain libraries are faster.
Because deploying software is easier, it can be done more frequently.
The faster you can determine the results of a deployment,
the quicker you can minimize risk. Think “high frequency trading”; the less market exposure you have, the safer you are.
Businesses want to minimize risk and maximize reward.
If a test is performing poorly, nuke it and you’re back to square one.
Now, CapEx
The way to maximize your CapEx investment is to better utilize existing hardware (your sunk cost)
Containers are dense by nature, so they are your best bet
Run more software on the same servers
Related to this, you can reduce OpEx
as a result of increased density of services, Fewer servers means less power/heat/network ports.
You get the idea.
Here’s a good way to visualize that.
The question on everyone’s mind: is it production ready?
YES, without a doubt it is.
Remember, the underlying technology is mature, even if Docker itself is pretty new.
LXC is used by Google
And Docker is up to v1.5 and in serious production use by large companies
But don’t take my word for it.
------------------------------------
With version 1.5 of Docker they explicitly addressed many of the features necessary for production
IPv6, --read-only, -f, stats command
1.7 added lots of ways to limit resource consumption. this is essential as you add more and more services to a machine.
Take theirs.
If you google some of these names along with the keyword docker, you’ll find great videos at meetups talking about how they cracked the nut.
Not all of them have gone off the deep end, but they’re committed to a future that includes Docker.
The real question to ask is -- are you ready?
If you don’t engineer for the cloud or use patterns such as 12-factor apps, Docker isn’t going to do miracles for you.
Docker is designed to make cloud architecture easier
It is not Miracle-Gro; your architecture won’t suddenly scale or grow like weeds
By far, the biggest reason you don’t see more companies running docker is a lack of operational proficiency.
As with any new technology, you have to hone your skills. New tools are required and processes put in place.
If you don’t already practice the advanced art of DevOps jujitsu, Docker will be overwhelming, especially outside the comfort of a sandbox.
Remember, Docker is just the engine block.
You still need this… the containership.
Building it, requires a strong opinion for how you want to run containers.
That’s your choice.
Most of what you read online are simple use-cases of single server installations.
They leave out all the fun parts of doing it at scale, which involve scheduling, orchestration, volume management and race conditions.
Taking the leap to production is big, but it does not require a leap of faith.
It’s important to note, these are not NEW concepts for cloud deployments.
It’s just that Docker itself does not solve these things.
Solving it would require docker to become very opinionated and that is not a good thing.
Docker is only a tool. It replaces the configuration management layer for operating systems.
Using Docker, however, will make things easier; best of all, you can reduce the size of your Rube Goldberg apparatus.
Covering Docker in an hour is impossible.
The ecosystem has grown rapidly.
Everyone’s first thought is to go build their own PaaS.
STOP.
There’s a lot of legit software written to scale Docker across hosts.
Please research what’s out there. Ask someone for advice. Maybe ask me?
Namely, check out some of these things. They’ll get you on your way.
The major ones to look at are Apache Mesos by Twitter and Kubernetes by Google, CoreOS, Tectonic, and Mesosphere.
There are also some Security Concerns that you need to be aware of
Most of these security concerns are no different than using any off-the-shelf open-source software
Docker Hub is public community like GitHub.
Anyone can publish images, including bitcoin miners and botnet entrepreneurs
There is no oversight beyond public due-diligence and community support
Some images will be certified by Docker, but those are few-and-far between
Your safest bet is to use your own private repository and borrow from Dockerfiles on Docker Hub as needed
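A private repository is itself just another container; a sketch using the open-source registry image, where the tag and local port are assumptions:

```shell
# run a private registry on port 5000
docker run -d -p 5000:5000 --name registry registry:2

# tag an image for it, then push
docker tag our-repo/mysql localhost:5000/our-repo/mysql
docker push localhost:5000/our-repo/mysql
```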
Kernel exploits are especially evil since you’ll root every machine
Depending on the network fabric that’s used, it can be difficult to limit connectivity between containers.
Probably the coolest thing to expect in the near future is Docker Swarm
adds the ability to stitch all your docker daemons into one HUGE virtual server
It’s what cloud always promised but NEVER delivered.
All docker client tools work the same! they don’t know the difference of who they are talking to
Much like Docker Swarm, keep your eye out for Triton by Joyent.
Joyent has more experience than any other company at running Containers at scale
They’ve adapted SmartOS to run Docker; for the record, SmartOS is actually really smart.
They are the only cloud provider that implements something that looks like Docker Swarm and has ZFS as the backing store
With Triton, I anticipate Joyent becoming a major player in the container space. . . if they play their cards right.
Did I mention it’s open source? You can self host it. How awesome is that??
The other option is OpenStack Magnum by RackSpace, which is something similar to Triton.
It lets you run Containers on OpenStack
They pull this off using Kubernetes and Mesos,
You’re going to continue to see MASSIVE vendor adoption.
This technology is here to stay.
It’s amazing that AWS, GCE, Azure, RackSpace and Joyent all jumped on board in less than a year
More amazing is that the boat didn’t capsize
You can bet VMware is watching VERY closely
There will be some big acquisitions this year.
To summarize, here’s what I want you to walk away with.
Docker is the real deal.
Docker gives you raw compute power, it’s up to you what to do with it.
It’ll let you do more with less
It’ll let you move faster with reduced risk
And best of all, you get it all without vendor lock-in