

Docker is an amazing tool, but unless you work with it every day, you're probably left with a ton of questions. What's a container? What's an image? What's the difference between Docker, Machine, Compose, and Swarm? Why the heck should I care? Well, Docker makes it easier than ever to deploy and scale your applications and services. In addition, it lets you simulate your production environment on your local machine without heavy virtual machines. In this talk, we'll explore the basics of Docker, create a custom image for a web application, create a group of containers, and look at how you can put your apps into production on various cloud providers. At the end of the talk, you'll have the knowledge you need to put this to use with your own applications.



  1. 1. Camping In Containers - Docker For Everyone Brian Hogan @bphogan on Twitter
  2. 2. Hi. I'm Brian. — Programmer — Author — Musician — Teacher — Technical Editor @ DigitalOcean Ask me anything.
  3. 3. Here's where we're going today. We'll talk about containers and images in a general sense, and we'll talk about Docker itself and how it fits in. Then we'll look at Docker-compose, docker-machine, and Docker's new Swarm mode. Finally, we'll talk about next steps. Roadmap — Containers and Images — Docker — docker-compose — docker-machine — Docker Swarm Mode — Where to go next
  4. 4. Disclaimers and Rules — This is a talk for people new to Docker. — This is based on my personal experience. — If I go too fast, or I made a mistake, speak up. — Ask questions any time. — If you want to argue, buy me a beer later.
  5. 5. Containers are a method of operating system virtualization that allow you to run an application and its dependencies in resource-isolated processes. Containers Operating system virtualization for apps and dependencies.
  6. 6. Virtual machines virtualize the hardware so you can run multiple operating systems. Containers share the host OS kernel and, usually, the binaries and libraries, too. Containers are a lot more lightweight than virtual machines since they can share a lot of the same resources. Containers vs Virtual Machines — Containers virtualize the OS — Virtual machines virtualize the hardware
  7. 7. Docker is an open-source project that automates deployment of apps in containers. Docker provides a suite of tools for defining container images, creating containers, and deploying containers to production. Docker Automates deployment of apps in containers
  8. 8. The containerization market is getting crowded. Containers and Docker are not the same thing! Docker is just one way of implementing and managing containers. Other tools exist. — Linux Containers (LXC/LXD) — rkt — Other emerging standards
  9. 9. An image is an executable package for your software, including the software you need to run, all of the configuration files, and all the dependencies. If you have a web app, an image of the app will contain your code, your configuration, your web server, and even your web server's configuration. Images — "Container image" is an executable package for software. — It's an immutable snapshot of a container. — A container is a running instance of an image.
  10. 10. We create new images from existing images. Think of it in layers. You have an Ubuntu image, but then you need Node. You make a Node image by basing it on the Ubuntu image. If you have an app, you might base your app's image on the Node image. Images Build Upon Other Images ubuntu:16.04 |___node:6 |__your_custom_app
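The layering described above can be sketched as a Dockerfile; the details below are illustrative, not from the talk, but they show how an app image inherits everything the node:6 image built on top of Ubuntu:

```dockerfile
# The node:6 image is itself built FROM an Ubuntu/Debian base,
# so this single line pulls in the whole layer stack beneath it.
FROM node:6

# Layers added here belong only to your_custom_app's image.
COPY . /app
WORKDIR /app
CMD ["node", "index.js"]
```

Each instruction adds a layer, and layers shared with the base image are stored only once on disk.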
  11. 11. Where do images come from? Other people make them available through Registries. Docker Hub is the main one for open source software. Docker Hub
  12. 12. Docker Hub is the most popular registry. The Docker Store is a more enterprise-friendly place to get "blessed" images. Publish your images for use in deployments Docker Registries — Docker Hub (free public registry, paid private registry) — The Docker Store — Create your own using the registry image
  13. 13. So why should you care about all of these things? Well, here are a few reasons. First, you'll be able to set up your development environments quickly. We'll look into how that works. You'll also be able to deploy your infrastructure faster, and you'll be able to version it. Finally, you'll be able to scale things out a lot more easily, using fewer resources. Why should I care??? — Quick development environment setups — Deploy apps and their dependencies in production — Create versions of your infrastructure — Scale out to meet demands
  14. 14. Docker Engine is a lightweight and powerful open source containerization technology, combined with a workflow for building and containerizing your applications.
  15. 15. Docker for Windows and Docker for Mac provide a tiny virtualization layer. The containers need a Linux operating system kernel; these installers provide that while hooking into the native host OS components. Installing Docker — Docker for Windows — Docker for Mac — Official repos for Linux — Your cloud provider probably has support
  16. 16. That takes care of the stuff you already know about. We're going to run through some basics that will show you what's possible, but also look at what not to do! Basics for Developers
  17. 17. One of the first things you can do with Docker is to use it to run a Linux command in an isolated environment. This command runs a new container using the ubuntu image, then executes the uname -a command in the container. In this example, the ubuntu image wasn't found locally, so Docker downloads it first, then creates the container and runs the program. Docker containers stop running when the foreground process exits. Run a command in a new container $ docker run ubuntu uname -a Unable to find image 'ubuntu:latest' locally latest: Pulling from library/ubuntu c62795f78da9: Downloading [========> ] 7.348 MB/45.56 MB d4fceeeb758e: Download complete 5c9125a401ae: Download complete 0062f774e994: Download complete 6b33fd031fac: Download complete Linux a941752d46a9 4.9.13-moby #1 SMP Sat Mar 25 02:48:44 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  18. 18. This launches a bash shell in the container and gives us a prompt we can interact with. The container will stay running until we type exit. The container exits when the foreground task ends. Run an interactive command in a new container $ docker run -i -t ubuntu /bin/bash — -i keeps STDIN running — -t attaches local terminal to the session — ubuntu is the image name — /bin/bash is the command we're running
  19. 19. To see running containers, we use the docker ps command. Right now there are no running containers. List Running Containers $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
  20. 20. We use docker ps --all to show all containers we've created. We have two containers showing up here, both stopped. Containers hang around even after they've stopped. Unfortunately, if you don't know that, you'll find yourself with lots of extra containers hanging around. List all containers $ docker ps --all CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b503470265c2 ubuntu "/bin/bash" 2 hours ago Exited (127) 34 seconds ago reverent_franklin 23867bd9c77a ubuntu "uname -a" 2 hours ago Exited (0) 53 seconds ago peaceful_goodall
  21. 21. Clean up containers you don't need with the docker rm command. You can remove containers by their ID or their name. The names are generated by Docker if you don't specify a name. Remove containers Remove by name or by container ID CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b503470265c2 ubuntu "/bin/bash" 2 hours ago Exited (127) 34 seconds ago reverent_franklin 23867bd9c77a ubuntu "uname -a" 2 hours ago Exited (0) 53 seconds ago peaceful_goodall $ docker rm b503470265c2 $ docker rm peaceful_goodall
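Removing leftover containers one at a time gets tedious; a minimal cleanup sketch, assuming a running Docker Engine (Docker 1.13 or later for the prune command):

```shell
# Remove all stopped containers in one shot (prompts for confirmation)
docker container prune

# Or remove only exited containers by ID, without a prompt
docker rm $(docker ps -aq -f status=exited)
```

The -q flag prints just the container IDs, which makes the output safe to feed back into docker rm.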
  22. 22. By using the --rm switch, we can have the container removed once the running process stops. Self-removing Containers $ docker run --rm -i -t ubuntu bin/bash --rm removes the container when the process stops.
  23. 23. The containers we created previously had names assigned by Docker Engine. But we can, and should, name them when we create them. Name Containers $ docker run -i -t --name bash ubuntu bin/bash --name lets you specify the name.
  24. 24. You can execute commands in a running container with the docker exec command. This way you aren't spinning up a new container just to run one command. Issue Commands to Containers Run a container in the background (-d) $ docker run -d -i --name bash ubuntu bin/bash Execute commands in this container $ docker exec bash ls $ docker exec bash uname -a $ docker exec -i -t bash top
  25. 25. Need to get into the console rather than running stuff? You can do this a couple of ways. First, you can just use docker attach. Use the --sig-proxy=false option when you attach, so that you can safely detach with Ctrl-C. Otherwise you might accidentally kill off whatever service is running in the container. Attach to Running Container Create another shell docker exec -i -t bash bin/bash Or attach: $ docker attach --sig-proxy=false bash
  26. 26. Use the docker stop and docker start commands to shut down the container and start it up again. Stop and Start Containers $ docker run -d -i -t --name bash ubuntu bin/bash $ docker stop bash ... $ docker start bash $ docker exec ...
  27. 27. One popular way you can use Docker is to run scripts in a clean environment for testing. Docker lets you mount a volume, which maps a folder to a folder inside of the container. Run local stuff in a Container — Create your files — Mount the working directory as a volume — Launch a container — Run your stuff
  28. 28. So let's create a simple Bash script that just prints out a string identifying that we're running from Docker, but we'll also print out the OS info using uname. Create your files $ touch File contents: echo "hello from Docker" uname -a
  29. 29. So we'll create a container that runs in the background and maps the current working directory to a folder called /myfiles inside the container. Launch a Container $ docker run --rm -d -i -t -v $PWD:/myfiles --name hello ubuntu bin/bash — -v maps local_folder to destination_folder
  30. 30. Once the container's running, we can use docker exec command to execute the script on the box. The neat thing is that we can modify this file locally and then keep running it in the container. Run your stuff $ docker exec hello bash /myfiles/ hello from Docker Linux dc90f485eb6f 4.9.36-moby #1 SMP Wed Jul 12 15:29:07 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  31. 31. To run the script in a new container each time, skip the -d (detached) option and specify the script directly. One-liner with no state docker run --rm -i -t -v $PWD:/myfiles --name hello ubuntu bin/bash /myfiles/
  32. 32. We use a Dockerfile to create a custom image. We can install software using the package manager, copy local files into the image, and define the environment variables, ports, and volumes we want to use. Dockerfiles Create custom images — Install software — Copy local files to the image — Set up ports, volumes, and environment variables
  33. 33. Here's an example of a Dockerfile that sets up vim, curl, git, and some build tools. A Dockerfile for Development # base image FROM ubuntu:16.04 # update and install software RUN apt-get -yqq update RUN apt-get -yqq install vim curl git build-essential # copy over a local file (ADD doesn't expand ~, so use the full path) ADD vimrc /root/.vimrc # startup command CMD ["bin/bash"]
  34. 34. We use the build command to create an image from the Dockerfile. We also namespace our images. Create Image from Dockerfile $ docker build -t napcs/devenv . Sending build context to Docker daemon 7.68 kB Step 1/5 : FROM ubuntu:16.04 ---> 6a2f32de169d Step 2/5 : RUN apt-get -yqq update ... ---> 1a8f00b704b1 Step 3/5 : RUN apt-get -yqq install vim curl git build-essential ... ---> d4e10d49f0c1 Step 4/5 : ADD vimrc /root/.vimrc ... ---> b662a9c211ed Step 5/5 : CMD bin/bash ... ---> 77970ae0dd60 Successfully built 77970ae0dd60
  35. 35. The docker images command shows you the available images. The new image we created from the Dockerfile is there. List Available Images $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE ubuntu 16.04 6a2f32de169d 3 months ago 117MB ... napcs/devenv latest 77970ae0dd60 22 seconds ago 436 MB
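Once an image exists locally, publishing it to a registry is a tag-and-push. A sketch, reusing the napcs/devenv image built above; the 1.0 tag and the Docker Hub account are illustrative:

```shell
# Give the local image a version tag
docker tag napcs/devenv napcs/devenv:1.0

# Authenticate against Docker Hub, then push the tagged image
docker login
docker push napcs/devenv:1.0
```

Pushing only uploads layers the registry doesn't already have, so images that share a base upload quickly.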
  36. 36. Want something more practical? Let's do something similar with Apache. Make a testing web server — Create a Dockerfile — Launch the container — Map the current folder as the webroot — Map the port
  37. 37. Let's create our own Dockerfile for this. We could just grab an existing image, but this gives us more experience with Dockerfiles. Once again we base the image off an Ubuntu base image. Then we install the packages. We expose port 80 from the container. Create a Dockerfile FROM ubuntu:16.04 # packages RUN apt-get -yqq update; RUN apt-get -yqq install apache2 # port EXPOSE 80 # startup command for Apache so it doesn't background CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
  38. 38. Build the image $ docker build -t napcs/apache . Sending build context to Docker daemon 3.072 kB Step 1/5 : FROM ubuntu:16.04 ---> 6a2f32de169d Step 2/5 : RUN apt-get -yqq update; ... ---> 6827312d57a3 Step 3/5 : RUN apt-get -yqq install apache2 ... ---> 27ff55c5eddb Step 4/5 : EXPOSE 80 ... ---> df2cebb281dc Step 5/5 : CMD /usr/sbin/apache2ctl -D FOREGROUND ... ---> 2793e761e2cd
  39. 39. Alright. The image is created. We'll create a simple web page, then fire up the container. We use the -p option to fire up the container and map the local port 8000 to port 80 in the container. Serve the Current Dir $ echo "<h1>hello world</h1>" > index.html $ docker run --rm -d -p 8000:80 -v $PWD:/var/www/html --name web napcs/apache -p binds host port to destination port.
  40. 40. We can now access localhost:8000 and see the web page. cURL can show us the headers, so you can clearly see we're using Apache. Test with cURL $ curl -i localhost:8000 HTTP/1.1 200 OK Date: Sat, 22 Apr 2017 04:16:27 GMT Server: Apache/2.4.18 (Ubuntu) Last-Modified: Wed, 22 Mar 2017 14:12:32 GMT ETag: "15-54b5259f23400" Accept-Ranges: bytes Content-Length: 21 Content-Type: text/html <h1>hello world</h1>
  41. 41. Storing data in a container is risky because if you lose the container, you lose the data. Data Persistence
  42. 42. We'll create a Redis container and map the ports to our host. Then we'll start up the redis CLI. Example: Redis Data Start Redis in a container: $ docker run -d --name redis -p 6379:6379 redis redis-server Connect with local client: $ redis-cli
  43. 43. So we save a value into Redis and exit the CLI. Then we stop and remove the container. Test Persistence Save a value: > set name "TCCC" > exit Stop and remove container $ docker stop redis $ docker rm redis
  44. 44. So, what happened to the data we just stuck in the container? What would we be able to do about this? Where's the data?
  45. 45. We can use an external volume. The redis container stores Redis data in a data folder in the root folder of the container. Like we did with Apache, we just use a volume to map that data over. Use External Volumes for Data $ mkdir data $ docker run --rm -d --name redis -v $PWD/data:/data -p 6379:6379 redis redis-server If we persist data in Redis now, it's saved outside of the container.
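Bind-mounting a host folder, as above, is one approach; Docker also supports named volumes that the engine manages itself. A sketch of the alternative, assuming a running Docker Engine (the volume name is illustrative):

```shell
# Create a named volume managed by Docker
docker volume create redis-data

# Mount it at /data, where the redis image keeps its dump file
docker run --rm -d --name redis -v redis-data:/data -p 6379:6379 redis redis-server

# See where Docker actually stores the volume on the host
docker volume inspect redis-data
```

Named volumes survive container removal just like bind mounts, but you don't have to care where they live on the host filesystem.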
  46. 46. Review — What does the --rm option do? — Why would we use -i -t when running a container? — What does -p 3000:80 do? — What's the difference between a container and an image?
  47. 47. Docker Compose makes it easy to run a multi-container Docker app, such as a web app and its database. You could run the web app and database in the same container, but you wouldn't, because you'll want to scale out. Docker Compose Tool for running multi-container Docker apps.
  48. 48. Process — Define a Dockerfile for your app — Define the services that make up your app in a docker-compose file — Launch the services.
  49. 49. Our App — Node.js and Redis app. — Every visit to the site increments a counter in a Redis database. — Run with forever so it stays running in the container and restarts if there's a crash.
  50. 50. Code const express = require('express'), http = require('http'), redis = require('redis').createClient('6379', 'redis'), app = express(), port = 3000; app.get('/', function(req, resp, next) { redis.incr('counter', function(err, counter) { if (err) return next(err); resp.send('<h1>This page has been viewed ' + counter + ' times</h1>'); }); }); http.createServer(app).listen(port, function() { console.log('Demo listening on port ' + port); });
  51. 51. Dockerize an app — Define the environment — Specify the files to copy into the image — Expose ports — Define startup commands
  52. 52. Build a Dockerfile using the node:6 image as our base. We'll then install the forever npm package, which we can use to run Node apps and restart them if something fails. We then copy the package.json file to a temp folder and install the packages; this way the packages will be cached as a layer. Then we move them into the actual app folder, copy our app in, expose the port, and define the startup command. Dockerfile FROM node:6 RUN npm install -g forever # Provides cached layer for node_modules ADD package.json /tmp/package.json RUN cd /tmp && npm install RUN mkdir -p /app RUN cp -a /tmp/node_modules /app/ # Define working directory WORKDIR /app ADD index.js /app # Port EXPOSE 3000 # Startup command CMD ["forever", "/app/index.js"]
  53. 53. A Compose file defines the services that make up your stack. We define the app and the database, and we can define how the volumes work and what ports get exposed. For the Node app, we tell Docker Compose to just build the local folder using the Dockerfile. For Redis, we tell it to use the Redis image. We tell Redis to use a data volume similar to what we did before. The port entry for Redis is the short syntax: listing only the container port, like 6379 here, exposes that port and lets Docker pick an ephemeral host port. Compose file version: "3" services: app: build: . ports: - "3000:3000" redis: image: redis volumes: - ./data:/data ports: - "6379"
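Compose can also express startup ordering between services. A sketch, adding depends_on to the file above so the app container starts after Redis; note this controls start order only, not readiness, so the app should still retry its Redis connection:

```yaml
version: "3"
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - redis        # start the redis service before the app
  redis:
    image: redis
    volumes:
      - ./data:/data
    ports:
      - "6379"
```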
  54. 54. Starting the services is easy: we use docker-compose up. Stopping is just as easy. Starting and Stopping the Services Start $ docker-compose up Stop $ docker-compose down
  55. 55. Docker Machine — Create and manage Docker environments on virtual machines — Create and manage Docker environments on remote machines — Treat a remote Docker Engine as if it were local
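docker-machine also has day-to-day housekeeping commands beyond create. A sketch, assuming the tccc machine created in the next slides:

```shell
# List all machines, their drivers, state, and which one is active
docker-machine ls

# Open an SSH session on a machine
docker-machine ssh tccc

# Stop the machine, and remove it entirely when you're done
docker-machine stop tccc
docker-machine rm tccc
```

Removing a machine created with a cloud driver also destroys the underlying server, so it stops billing as well.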
  56. 56. Example: Create machine with VirtualBox Create machine $ docker-machine create --driver virtualbox tccc Creating CA: /Users/brianhogan/.docker/machine/certs/ca.pem Creating client certificate: /Users/brianhogan/.docker/machine/certs/cert.pem Copying certs to the local machine directory... Copying certs to the remote machine... Setting Docker configuration on the remote daemon... Checking connection to Docker... Docker is up and running! To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env tccc
  57. 57. Use Docker Client with New Machine $ docker-machine env tccc export DOCKER_TLS_VERIFY="1" export DOCKER_HOST="tcp://" export DOCKER_CERT_PATH="/Users/brianhogan/.docker/machine/machines/tccc" export DOCKER_MACHINE_NAME="tccc" # Run this command to configure your shell: # eval $(docker-machine env tccc) $ eval $(docker-machine env tccc)
  58. 58. This is an example of using docker-machine to create a machine on DigitalOcean. There are other drivers you can use to create machines on other cloud providers. There's also a generic driver you can use if you need to set up machines using your own infrastructure. Example: Create a remote server on DigitalOcean $ docker-machine create --driver digitalocean --digitalocean-image ubuntu-16-04-x64 --digitalocean-access-token $DOTOKEN tccc — --driver supports cloud providers. Drivers are available for Azure and EC2, as well as others. — --digitalocean-image specifies the machine image to use. — $DOTOKEN is an env variable holding an API token — tccc is our machine name.
  59. 59. Using the remote machine locally $ docker-machine env tccc export DOCKER_TLS_VERIFY="1" export DOCKER_HOST="tcp://" export DOCKER_CERT_PATH="/Users/brianhogan/.docker/machine/machines/tccc" export DOCKER_MACHINE_NAME="tccc" # Run this command to configure your shell: # eval $(docker-machine env tccc) $ eval $(docker-machine env tccc) You can tell your local Docker client to use your remote machine's docker host as if it were running locally.
  60. 60. Deploy App with Compose Switch to the remote machine $ eval $(docker-machine env tccc) Bring up app: $ docker-compose up -d Get the IP $ docker-machine ip tccc
  61. 61. When you switch your environment with docker-machine, all docker commands are sent to the remote docker engine. If you want to put things back to point to your local docker engine, you use the -u option. Return back to Local Docker Engine $ docker-machine env -u unset DOCKER_TLS_VERIFY unset DOCKER_HOST unset DOCKER_CERT_PATH unset DOCKER_MACHINE_NAME # Run this command to configure your shell: # eval $(docker-machine env -u) $ eval $(docker-machine env -u)
  62. 62. Review — What's the difference between docker-machine and docker-compose? — What does docker-machine env do? — How do you get the IP address of the remote machine?
  63. 63. Swarm Mode — Cluster management built in to Docker Engine — Decentralized design with service discovery and multi-host networking — Load balancing and scaling — Replaces Docker Swarm
  64. 64. Deploying to a Swarm Mode Cluster — Create the machines with docker-machine — Create at least one swarm master — Create at least one worker — Push your app image to a Docker Registry — Deploy with docker stack deploy, using your Compose file
  65. 65. Let's create three machines for our swarm. One of these will be the swarm "master", although I like to use the term "swarm boss" instead. The other two will be our workers. We'll create these on DigitalOcean's cloud. Create a Swarm $ docker-machine create --driver digitalocean --digitalocean-image ubuntu-16-04-x64 --digitalocean-access-token $DOTOKEN tccc-boss $ docker-machine create --driver digitalocean --digitalocean-image ubuntu-16-04-x64 --digitalocean-access-token $DOTOKEN tccc-worker1 $ docker-machine create --driver digitalocean --digitalocean-image ubuntu-16-04-x64 --digitalocean-access-token $DOTOKEN tccc-worker2
  66. 66. First, we create the swarm manager. We have to specify its IP address but we can use docker-machine ip to do that. When we run this command, it'll give us the command to run for each worker. Create a Swarm Manager $ eval "$(docker-machine env tccc-boss)" $ docker swarm init --advertise-addr $(docker-machine ip tccc-boss) Swarm initialized: current node (s4p3vlului2hu8d2beixpu0wa) is now a manager. To add a worker to this swarm, run the following command: docker swarm join --token SWMTKN-1-1e60k6k5jmzirfx4bkb3j1pzdsay9ocq5qix0bl91vnuv3ryp3-7o0g6sly1etsxtj33c1hxnoav To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
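If you lose the join command printed by docker swarm init, the manager can reprint it at any time. A sketch, run with the environment pointed at the manager:

```shell
# Print the worker join command (with its token) again
docker swarm join-token worker

# Print the manager join command, for adding more managers
docker swarm join-token manager
```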
  67. 67. To connect each machine, switch your environment to the worker and paste in the command you were given. The token is unique to your swarm, and also identifies the machine as a worker. Swarm masters have a different token. Connect the Workers $ eval "$(docker-machine env tccc-worker1)" $ docker swarm join --token SWMTKN-1-0opi6iq7zh5brglm01m2beeueql9xt0k1tbfx5u6s519inghxr-8mu5o85bxupzp4ykf4shsiifm $ eval "$(docker-machine env tccc-worker2)" $ docker swarm join --token SWMTKN-1-0opi6iq7zh5brglm01m2beeueql9xt0k1tbfx5u6s519inghxr-8mu5o85bxupzp4ykf4shsiifm
  68. 68. To see the machines in the swarm, switch back to the swarm manager. Then use docker node ls. You'll see which one is the swarm manager. See The Swarm $ eval "$(docker-machine env tccc-boss)" $ docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ofqi7n5u52efec1mjrjxikgxd tccc-worker2 Ready Active s4p3vlului2hu8d2beixpu0wa * tccc-boss Ready Active Leader sc0exb43oafmr307z71bway1s tccc-worker1 Ready Active
  69. 69. Swarm mode doesn't allow the build option. We have to specify an image. We don't want to push our image up to a public registry, so we'll just create a registry on our swarm and we'll push it there. Docker Hub is great for public things, but you may not want your app pushed up there because of the cost. So you can create a private registry and push your image there. Change Compose File version: '3' services: app: ++ image: build: . ports: - "3000:3000" redis: image: redis volumes: - ./data:/data ports: - "6379"
  70. 70. Let's test out the app. This will also build the image. You'll see a warning that says it can't deploy to the entire swarm, but that's ok. We're just making sure things build and start. Build with Compose to Test Image $ docker-compose up WARNING: The Docker Engine you're using is running in swarm mode. Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node. To deploy your application across the swarm, use `docker stack deploy`. Creating network "06swarm_default" with the default driver Building app Step 1/10 : FROM node:6 6: Pulling from library/node ... Creating 06swarm_app_1 Creating 06swarm_redis_1
  71. 71. There's an official registry image we can use, so we'll use this command to fire up a registry service on our swarm. Then we'll use Compose to push the image up. Create Private Registry $ docker service create --name registry --publish 5000:5000 registry:2 $ docker-compose push
  72. 72. Finally, we deploy the stack to the swarm using the compose file. Deploy $ docker stack deploy --compose-file docker-compose.yml noderedis Ignoring unsupported options: build Creating network noderedis_default Creating service noderedis_redis Creating service noderedis_app
  73. 73. We can see all of the services running on our swarm with the docker service list command. Our redis server and node app are listed, along with our registry. Review the Nodes $ docker service list ID NAME MODE REPLICAS IMAGE mhnl5pnm5spd noderedis_redis replicated 1/1 redis:latest v9iu32g5eqj1 noderedis_app replicated 1/1 w4x9fep7m5c8 registry replicated 1/1 registry:2
  74. 74. To see just the services in the stack we deployed, use the docker stack services command. Explore the stack $ docker stack services noderedis ID NAME MODE REPLICAS IMAGE mhnl5pnm5spd noderedis_redis replicated 1/1 redis:latest v9iu32g5eqj1 noderedis_app replicated 1/1
  75. 75. You can access the swarm using the ip address of the manager or any worker. The mesh network ensures it all works. Access the app $ curl http://<ip_address_of_manager> $ curl http://<ip_address_of_any_worker>
  76. 76. Using the docker service ps command you can see what node a service runs on. See where service runs $ docker service ps noderedis_app ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS 7rttqwob67qm noderedis_app.1 tccc-worker1 Running Running 10 minutes ago
  77. 77. You may want to scale out your app to meet demand. You can do that with the docker service scale command, which scales up the number of containers we use for our app. I have three machines, and I can scale this app up to three, but that isn't a limitation: I can scale the number of containers to a much higher number than machines. Scale out the app $ docker service scale noderedis_app=3 noderedis_app scaled to 3
  78. 78. With the app scaled out, look again at the processes. You can see what nodes each service is running on. Review the Nodes $ docker service ps noderedis_app ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS 7rttqwob67qm noderedis_app.1 tccc-worker1 Running Running 12 minutes ago q2pjqf0z8o2w noderedis_app.2 tccc-boss Running Running 6 seconds ago 5oi656f6a0w5 noderedis_app.3 tccc-worker2 Running Running 7 seconds ago
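Scaling isn't the only live change Swarm supports; docker service update can roll a new image out across the replicas one at a time. A sketch, assuming a newer tag of the app image has been pushed to the registry (the image reference here is illustrative):

```shell
# Roll the service to a new image; Swarm replaces replicas incrementally
docker service update --image localhost:5000/noderedis_app:2 noderedis_app

# Scale back down when traffic drops
docker service scale noderedis_app=1
```

Because replicas are replaced one at a time, the routing mesh keeps serving requests from the remaining replicas during the rollout.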
  79. 79. All done? Tear it all down by removing the stack. Tear it Down $ docker stack rm noderedis
  80. 80. Review — How do you add workers to a swarm? — Can you use docker-compose to push an app to a swarm? — What must you do with your app image in order to deploy with docker-compose?
  81. 81. Container Orchestration — Scheduling (placement, replication/scaling, resurrection, rescheduling, upgrades, downgrades) — Resource management (memory, CPU, volumes, ports, IPs, images) — Service management (i.e. orchestrating several containers together using labels, groups, namespaces, load balancing, and readiness checking)
  82. 82. Some options — Rancher — Kubernetes
  83. 83. Rancher Easily deploy and run containers in production on any infrastructure with the most complete container management platform.
  84. 84. Kubernetes Open-source system for automating deployment, scaling, and management of containerized applications.
  85. 85. Next Steps — These slides ( docker2017) — DigitalOcean Docker Tutorials (https:// type=tutorials) Questions?