16. Quo Vadis Dockerfile
• Install system libraries and configure them
• Bundle your gems
• Perform your gradle or maven build
• Install node, npm, webpack, perform a webpack build
• Set up a complicated process tree
• whatevs really, but most importantly…
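The bullets above are the kind of work a Dockerfile captures. A minimal sketch, assuming a hypothetical Ruby service (the base image, paths, and gems are illustrative, not from the talk):

```dockerfile
# Hypothetical Ruby service; image name and commands are illustrative.
FROM ruby:slim

# Install system libraries and configure them
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential libpq-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Bundle your gems in their own layer, so they cache independently of app code
COPY Gemfile Gemfile.lock ./
RUN bundle install

COPY . .

# Set up the (ideally simple) process tree
CMD ["bundle", "exec", "puma"]
```

The same shape applies to a gradle/maven build or a webpack build: install toolchain, copy dependency manifests, build, copy source, declare the entrypoint.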
19. A service is a composition
Images are created by Dockerfiles.
Containers are created from images.
Containers are networked with each other via docker-compose.yml
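A compose file that wires two containers together might look like this (service and image names are assumptions for illustration):

```yaml
# Illustrative docker-compose.yml; names are assumptions.
version: "3"
services:
  web:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:10
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

On the default network compose creates, `web` reaches the database simply as `db` — the service name doubles as a hostname.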
25. CI/CD automates commitments.
Ops can begin to securely provide secrets.
QA can sign off on an image that can’t have its code changed.
Images can be scanned by security before distribution.
Monitoring can inject agents.
Operating environments (QA, staging) get updated with new code.
26. OHAI Jenkinsfile
This process is getting translated out of humans' heads as it's implemented.
Jenkinsfile is in version control as well…
…alongside a docker-compose.yml very easily
29. Containers were born at scale.
They have an opinionated approach to the matter.
Container images need to be designed accordingly.
Operations staff responsibilities change significantly.
37. Stacks of Frontiers
• Containerizing workloads by writing Dockerfiles
• Placing services into composition for smoke testing or runtime
• Installing Docker EE, Jenkins agents, etc. (automated or otherwise)
• Writing Jenkins pipelines and pipeline libraries
• Custom workflow integrations to satisfy stakeholders
• …and we’ve only talked about one technology among many!
39. Thanks
If you want a sequel, let me know.
Which part was most interesting?
Editor's Notes
Clients who recognize that delivering software forever is already difficult are looking for any technology that can help.
Software construction, especially as engineering teams grow in size, gets bogged down.
So much so that the team that runs the written software is usually kept completely separate. Operations, security, monitoring, testing, etc. — all crucial parts of the lifecycle — happen elsewhere, if at all.
Complexity not directly related to new application value is always under intense pressure.
Containers offer meaningful support in phases 3-5.
We’ll be looking at the Docker ecosystem exclusively to start; the container ecosystem is MUCH broader.
Docker Machine turns non-Linux systems into Docker hosts.
A swarm is a bunch of hosts joined together in a network to exhibit special properties.
Containers run on the swarm, and they are created from images distributed via registries public and private.
Containers have a big impact on developer responsibilities.
Look at the software you’re not installing
RUN commands introduce a different layer to the union file system
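Because each instruction adds a layer, cleanup placed in a later RUN doesn't shrink the image. A common sketch of the idiom (packages are illustrative):

```dockerfile
# Each RUN below produces its own layer in the union filesystem.
RUN apt-get update          # layer 1: package index
RUN apt-get install -y curl # layer 2: curl and its dependencies
# Files deleted later still occupy the earlier layers, so cleanup is
# usually chained into a single RUN to keep the image small:
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
```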
This is one of the major points of additional agency for developers and eliminates an entire class of difficult problems.
Joe Armstrong, inventor of erlang: ”You wanted a banana, but you got a gorilla, holding a banana, and an entire jungle”
Enumerates services and their interrelationships, including connectivity
Provides guidance regarding their operational state
Given a machine capable of running the software, many teams have relevant interests.
Conventions regarding image creation and distribution, or automated documentation of maintained images.
Specifically, pre-production environments can be self-serviced.
Docker operates a public registry and allows you to have private repositories, much like the GitHub model.
Think of the Jenkinsfile as defining and securely operating your delivery process.
Simplest possible snippet of a Jenkinsfile
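Something along these lines, in declarative pipeline syntax — stage names, image tags, and scripts are illustrative assumptions:

```groovy
// Minimal declarative Jenkinsfile sketch; names are hypothetical.
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                sh 'docker build -t myorg/myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Smoke test') {
            steps {
                // Stand up the composition and run a smoke script against it
                sh 'docker-compose up -d && ./smoke.sh'
            }
        }
    }
}
```

Checked into version control next to the Dockerfile and docker-compose.yml, this is the delivery process as code.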
This is the nuttiest part for people who haven’t run software at scale.
If you have deployed to Heroku, you've experienced something similar with the read-only filesystem. This is also where the PID 1 concern we talked about earlier comes in.
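Since the container's first process is PID 1, it must forward signals and reap zombie children. A common sketch is to front the app with a tiny init such as tini (the package name and app command here are assumptions; `docker run --init` achieves the same without changing the image):

```dockerfile
# Sketch: run tini as PID 1 so signals are forwarded and zombies reaped.
RUN apt-get update && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["tini", "--"]
CMD ["bundle", "exec", "puma"]
```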
Here's a swarm. Three managers tolerate the failure of one manager node, since the Raft consensus needs a majority.
In this diagram, we note that typically the registry service operates as a part of the swarm itself.
Here’s a simplified representation of the docker-compose.yml we looked at earlier.
In this diagram, the service has been placed atop the architecture. This work is done by the scheduler, which looks at the constraints of the service definition, the resources of the nodes, and figures out what goes where.
The scheduler itself notices when swarm members are not healthy…
And places their workloads on different hosts. This capability is what drives so much of the convention around image creation; quick boot times, no filesystem dependencies outside the container, simple process trees, etc.
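The scheduler works from hints in the service definition. A hedged sketch of what those hints look like in compose v3 deploy syntax (service name, image, and health endpoint are assumptions):

```yaml
# Illustrative service definition; the scheduler reads these constraints
# and resources when placing (and re-placing) tasks.
services:
  web:
    image: myorg/myapp:latest
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
```

When a node goes unhealthy, tasks respecting these constraints are rescheduled elsewhere — which is why quick boot times and no outside-the-container filesystem dependencies matter so much.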