2. About Me
Brayden Winterton
● Computer Science Major at BYU
● Head of BYU Production Services Development Team
How I use Docker:
● Development environments
● CI/CD pipeline
● Clustered Production Environments
3. Survey
● Who here has heard of Docker?
● Who has used Docker before?
● Who uses Docker as part of their daily workflow?
13. What is Docker?
Docker Engine:
● CLI
● Docker Daemon
● Runs the containers
Docker Hub:
● Sharing applications/containers
● Used in automating workflows
● Much like GitHub
15. Why developers love it:
● Devs can build apps using any language and toolchain
● Build once, run anywhere
● Docker creates a clean portable runtime for the app
● No more worries about missing dependencies
● Peace of mind knowing that apps will not conflict (nor will their dependencies)
● Compatibility concerns go out the window
● Automated testing, building, packaging, etc. become much simpler
● Fast, lightweight runtime environments
16. Why sysadmins love it:
● Standardized, repeatable environments
● No more “works on my machine”
● Abstract application from OS and Infrastructure
● Flexibility in where to run the application
● Deployment driven by business priorities
● Rapid scale-up and scale-down to respond to load
● Eliminate inconsistencies between environments
● Increase reliability and speed of CI/CD systems
● TL;DR: Deploy any app, to any infrastructure
17. Want to separate concerns?
Development:
● Worries about what’s inside the container:
o Code
o Libraries
o Data
o Applications
● Assumes that all environments look the same
Operations:
● Worries about what’s outside the container:
o Logging
o Monitoring
o Configurations
● All containers start, stop, reload, and accept configuration the same way
18. Want to combine concerns?
● Give developers access to existing resources
● Allow developers to deploy built containers to infrastructure
o Either using Continuous Deployment or a standardized method
● Make the developers wake up in the middle of the night to fix it
Manage everything from legacy applications and monolithic stacks to applications in active development and microservice-oriented architectures.
What is this thing that everyone keeps talking about?
Once upon a time we had simple LAMP stacks; web applications were extremely simple.
Typically one server, or one server image replicated for load balancing.
Sometimes databases were even found on the same machines (Ugh)
Today, with the movement to microservices and distributed services, stacks are much more complex (userDB, static stack, frontend, api, etc.)
Not only are the stacks more complex, they need to be run on more diverse hardware.
Dev environments
Production environments
QA
On premise (woo, that’s hard)
etc.
This all leads to the Matrix from hell!
Something we have all faced
What runs where? Why doesn’t it run? What are its dependencies?
Can the static frontend and the normal frontend run together?
This sucks. It takes tons of time to maintain and keep straight, and it all changes over time as development iterates.
We have had this same problem before, in the shipping world
How can we ship Item X? What methods of shipping will work? Which ones will damage the goods? What is the cheapest?
The solution? Shipping containers!
A standard container that everyone agrees on using and supporting.
Loaded and then sealed.
It can be transferred between shipping methods without causing an issue.
Isolates the goods from other things next to it.
Docker does for the application/devops world what the shipping container did for the shipping world.
The application/stack is packed into the container.
That container can be run on any hardware platform without concerns for how it will interact with nearby applications or dependencies.
So now our matrix from hell looks like this!
It doesn’t matter what runs where; Docker ensures that it will run.
Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications.
Consists of two parts:
Docker Engine
This is what actually runs the containers/applications
Used to build containers and modify existing ones
Driven from the command line
Uses namespaces, cgroups, and union filesystems to isolate containers.
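If you want to see that isolation firsthand, a quick sanity check you can try (not part of the original demo) is to list processes inside a fresh container; thanks to PID namespaces, it only sees itself:
docker run ubuntu ps aux
Inside the container, ps reports a single process (ps itself), not the host’s process table.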
Docker Hub:
Like GitHub for Docker containers
Build off of existing containers
Grab generic containers to be used in your stack (such as MySQL, Postgres, Graphite with StatsD, etc.)
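For example, grabbing a stock image from the Hub is a one-liner (these are the official repository names):
docker search postgres
docker pull postgres
docker search looks up images on the Hub; docker pull downloads one locally so you can start containers from it.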
OK, so this is cool and all… but
Why should I use Docker?
The reasons that developers love Docker are endless:
Let developers code their applications in whatever language, using whatever toolchain they want (the stack doesn’t matter)
Apps are built once and will run in any environment: dev, local machine, VMs, bare metal, production, QA, etc. (granted that they have a 3.2+ kernel, or 2.6.32+ on RHEL)
Developers can know with 100% assurance what is inside their container; no surprises
Developers don’t have to worry about ops missing a dependency during deployment; it comes built into the container
Developers can rest assured that their apps will be completely isolated from everything around them, and that conflicting dependencies will not be an issue
OS, kernel version, libraries, etc. are no longer a concern for the developer; again, everything is packed in
Testing, building, and packaging are much simpler knowing that the application is portable and functioning; automation is much simpler too, since anything you can script, you can automate
Runtimes (containers) are fast and lightweight; startup times are measured in seconds, not minutes
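A rough way to see that for yourself (timing varies by machine, and this assumes the ubuntu image is already pulled):
time docker run ubuntu /bin/true
With the image cached locally, the container starts, runs, and exits in around a second.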
There are also a million reasons that sysadmins/ops love Docker:
Finally! Standard, repeatable environments for everyone (QA, dev, testing, etc.) across all teams, with no more hand-maintaining matching environments for each group
Developers may not love it, but it gets rid of one of the age-old go-to excuses: if it works anywhere, it works everywhere
Abstract your application from your OS and infrastructure and make it self-contained (change your infrastructure and OS as much as you like)
This allows for flexibility in where to run the application: bare metal, public cloud, private cloud, etc.
Deployment no longer needs to be restricted by limitations of the infrastructure; let business needs drive your deployments
Containers are so lightweight and start up so fast that scaling to load becomes extremely easy and fast
Get rid of the issues of updated or changed dependencies between environments (works in stage, but fails in prod due to a library version)
Increase the speed and reliability of your CI/CD systems: build the image once, then run all tests and deployments using that same image, so it does not vary between steps
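A sketch of what that might look like in practice (the repository name, tag, and test script are made up for illustration):
docker build -t bcwinter/myapp:abc123 .
docker run bcwinter/myapp:abc123 ./run_tests.sh
docker push bcwinter/myapp:abc123
The image is built once; the tests run against that exact image; and the very same image is pushed for deployment, so no step rebuilds or mutates it.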
tl;dr
Looking to separate operations and development concerns? Docker helps out a ton!
Developers only have to worry about the inside of the containers
Operations worries about outside
The only overlap happens with configuration; make sure ops and dev agree on a standard configuration mechanism (env variables, a key/value store such as etcd, etc.)
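For example, passing configuration in as environment variables at run time (the variable names and image here are purely illustrative):
docker run -d -e DB_HOST=db.internal -e LOG_LEVEL=info bcwinter/myapp
The same image can then be pointed at dev, stage, or prod purely through configuration.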
Looking to merge your team to a devops format?
Merge the concerns by allowing your development team to deploy automagically
Allow developers to deploy when their containers are built
If it works for the developer, it should work in production
Almost all issues found can be resolved by the developer updating the container and redeploying
Very minimal ops intervention in the process and in fixes
Ops can focus on creating CI/CD pipelines and providing deployment methods to the development team
No more ops deployment headaches.
You may be thinking: wait, this sounds a lot like why we use VMs! To separate and isolate applications.
The difference is: we get rid of the bloat from repeated guest OSes and from repeated bins/libs!
Why is it that these containers are lighter than VMs?
First of all, we lose the repeated load of having a guest OS and all of the pieces that make that up
Another huge aspect of docker is the layered filesystem!
Each change to the filesystem creates a new cached layer.
Cached layers can be used across multiple copies of the container.
This allows for fast startup times
Modifications only add a new layer to the filesystem.
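You can actually inspect those layers with docker history (assuming you’ve pulled the nginx image):
docker history nginx
Each row is one cached layer, showing the instruction that created it and how much space it added.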
OK, so many of you may be asking, “How do I use this marvelous technology?” Let’s take a look, starting with a basic example: a simple hello world!
docker run -it ubuntu /bin/echo "Hello World!"
The container spun up and then ran the echo command with “Hello World!”
Don’t believe that it was the container?
docker run ubuntu /usr/bin/dpkg --get-selections | wc -l
dpkg --get-selections | wc -l
Different number of packages! Different systems!
OK, so let’s say I don’t want it to take over my terminal! What now?
docker run -d ubuntu /bin/echo "Hello World!"
WAIT! What was that big number printed out?
We daemonized the process (forked it to the background); the number returned is the container’s hash ID.
We can use just the first few characters, just like Git.
Let’s check the logs.
docker logs (container number)
See? The container did actually start and run the command we gave it!
OK, let’s be honest: the “hello world” example was cute but probably not very helpful. Who ever runs hello world in production?
Let’s see something real!
How about running a static frontend through a container? Let’s use nginx!
docker run -d nginx
docker ps
No need to give nginx a command; it has a default one built into the container.
We can see the nginx container is running! But how do we get to it? The ports haven’t been mapped! Let’s map those ports!
docker kill (container)
docker run -d -p 8000:80 nginx
Go to localhost:8000 to get the nginx welcome page!
Woot! We have nginx running now! Docker allows us to map ports that the container has exposed to ports on the host.
The syntax is -p hostPort:containerPort
So that’s great and all, we have docker running nginx now. But what if we want to change the content that nginx is displaying?
The containers we have seen up to now have been completely isolated from the host machine’s filesystem. But there is a way to make a filesystem visible to the container. Volumes!
docker run -d -p 8000:80 -v /home/bcwinter/dev/dockerDemo:/usr/share/nginx/html nginx
vim index.html (add new <h2>And hello to you!</h2>)
Volumes create persistent connections between the container’s filesystem and the host’s filesystem.
Volumes can also be shared between containers (i.e., container A can mount a volume from container B).
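That sharing looks something like this (the container name and paths here are made up for illustration):
docker run -v /usr/share/nginx/html --name content ubuntu true
docker run -d -p 8000:80 --volumes-from content nginx
The second container mounts every volume defined by the first; this is the common “data container” pattern.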
Now, what if we wanted to load our code into the container itself, so that it isn’t pulled from the host machine through a volume?
Well, we can modify the image in two ways: interactively, or through a Dockerfile.
First let’s try this interactively.
docker run -it nginx /bin/bash
echo "<html><body><h1>Hello World</h1></body></html>" > /usr/share/nginx/html/index.html
exit
docker ps -l
docker commit -m="Edited the index.html" -a="Brayden Winterton" (containerid) bcwinter/nginx:v2
docker run -d -p 8000:80 bcwinter/nginx:v2 nginx -g 'daemon off;'
docker kill (container)
So that is the interactive way: when you open a container interactively, you open a writable layer; that layer can be changed and then committed, just like Git.
Some downsides: it’s not repeatable, it takes several commands, and you can lose your “entrypoint” command.
But there is a nicer, easier, more repeatable way: Dockerfiles!
cd ~/dev/dockerDemo
vim index.html
vim Dockerfile
docker build -t bcwinter/nginx:v3 .
docker run -d -p 8000:80 bcwinter/nginx:v3
localhost:8000
It works! The process is repeatable.
Also, thanks to the cached layered filesystem, subsequent builds only rewrite layers that have changed.
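The Dockerfile for this demo is tiny; something along these lines (a sketch, assuming the stock nginx base image and an index.html sitting in the build directory):
FROM nginx
ADD index.html /usr/share/nginx/html/index.html
FROM picks the base image, and ADD bakes our page into a new layer on top of it.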
Show a more complex Dockerfile
vim exampleFile
Subsequent builds would only rebuild the ADD layers if the files had changed; other layers are not rebuilt, as they are cached.
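For reference, a more complex Dockerfile might look something like this (the base image, paths, and app are illustrative, not the exact demo file):
FROM python:2.7
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app
WORKDIR /app
EXPOSE 8000
CMD ["python", "app.py"]
Note the ordering: requirements.txt is ADDed and installed before the application code, so editing your code only invalidates the final layers while the pip install layer stays cached.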
Dockerfiles are the way to go, especially for CI/CD workflows.
Now, as we talked about before, most of our stacks today are no longer just a simple LAMP stack; they consist of several moving parts. What if we wanted to link a database container and a frontend container, for example?
Docker makes this possible with links!
Linking containers injects environment variables into the linked container and updates /etc/hosts so requests can be routed to the right place by name.
For example:
docker run --name postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
docker run --link postgres:db ubuntu cat /etc/hosts
docker run --link postgres:db ubuntu env
As you can see, Docker created an entry for the host and set the necessary env variables.
This allows configuration to be set statically instead of being coded into the application. If the application requests something from the hostname ‘db’, it is always going to get the db, no matter where it is. These assumptions help make the application portable and lightweight.
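Since the link gives us a stable hostname, a client can just connect to db (this example reuses the password we set above; the postgres image ships with psql):
docker run --link postgres:db -e PGPASSWORD=mysecretpassword postgres psql -h db -U postgres -c 'SELECT 1;'
The client container never needs to know the database’s real address; the db alias resolves to wherever the container happens to be.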
Orchestration is a big name in the game of Docker. There are many solutions out there:
Kubernetes
Mesosphere
Flynn
Deis
Shipyard
etc.
One of the simplest to start out with, and my favorite for single-host deployments, is fig.
Fig is simple yet powerful: great at dependency management, and great for managing several containers at once.
This is how I run my development environments.
fig example
show dockerfile
show fig.yaml
fig up
fig web env
fig stop
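For reference, a fig.yml for a demo like this might look something like the following (the service names and ports are illustrative, not the exact demo file):
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres
Each top-level key defines a container; fig builds or pulls the images, wires up the links, and brings the whole stack up with a single fig up.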
fig is a great start to learning how to orchestrate several containers.