A hands-on workshop that covers 18 best practices in 4 categories, or in other words, ✅ Dos & 🚫 Don'ts.
After a general introduction, we will look at the essential practices (a.k.a. the must-dos), then move to the image practices, go through the security practices, and finally cover some general practices.
Please note, this workshop assumes basic knowledge of Docker.
Hands-on repo:
https://github.com/aabouzaid/docker-best-practices-workshop
Docker Best Practices Workshop
1. Docker Best Practices Workshop
How to work effectively with Docker
Ahmed AbouZaid, DevOps Engineer, Camunda
21.09.2021
2. 2
Ahmed AbouZaid
A passionate DevOps engineer, Cloud/Kubernetes
specialist, Free/Open source geek, and an author.
• I believe in self CI/CD (Continuous Improvements/Development)
and that “The whole is greater than the sum of its parts”
• DevOps transformation, automation, data, and metrics
are my preferred areas
• And I like to help both businesses and people to grow
Find me at:
tech.aabouzaid.com | linkedin.com/in/aabouzaid
About
September 2021, Kayaking in the Spree
✅ Do Kayaking 🚫 Don’t sit like that!
3. 3
Content
Quick Introduction
Essential Practices
• Use Dockerfile linter
• Check Docker language-specific best practices
• Create a single application per Docker image
• Create configurable ephemeral containers
Image Practices
• Understanding Docker image
• Use optimal base image
• Pin versions everywhere
• Create image with the optimal size
• Use multi-stage whenever possible
• Avoid any unnecessary files
Security Practices
• Always use trusted images
• Never use untrusted resources
• Never store sensitive data in the image
• Use a non-root user
• Scan image vulnerabilities
Misc Practices
• Leverage Docker build cache
• Avoid system cache
• Create a unified image across envs
• Use ENTRYPOINT with CMD
Next steps
5. 5
Overview
In this workshop, in a hands-on approach,
we will cover 18 best practices in 4 categories
or in other words ✅ Dos & 🚫 Don'ts.
After a general introduction, we will have a
look at the essential practices (aka must do),
then move to the image practices, then
we will go through the security practices,
and finally, some general practices.
Please note, this workshop assumes that
you have a basic knowledge of Docker.
Timeline
• 30 min: Review the best practices
• 10 min: Questions
• 10 min: Break
• 20 min: Apply the best practices
• 20 min: Discussion
7. 7
Containers, Docker, and Kubernetes
Containers
Technology for packaging an application
along with its runtime dependencies
Docker
Docker is the de facto standard to build
and share containerized apps
Kubernetes
A cloud-native platform to manage
and orchestrate container workloads
Image: o_m/Shutterstock
8. 8
Dockerfile, Docker Image, and Docker Container
Dockerfile
A text file that contains a set of instructions used
to build a Docker image
Docker Image
A combination of layered filesystems stacked
on top of each other to form a usable, customizable image
Docker Container
A runtime instance of a Docker image
10. 10
• First things first, use a Dockerfile linter!
Use hadolint!
• It will help you apply best practices
by default
• By using hadolint, you will avoid
many of the most common Docker issues
• Use it via CLI or integrate it with your IDE,
e.g. the VS Code hadolint extension (see the example below)
1.1 Use Dockerfile linter
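For illustration, two common ways to run hadolint, per its standard CLI and its official Docker image:
$ hadolint Dockerfile
# Or, without installing it locally:
$ docker run --rm -i hadolint/hadolint < Dockerfile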
11. 11
• There are general Docker best practices that apply
to all languages
• Usually each language group (e.g., interpreted,
native, JVM) has common best practices
• Some languages have their own best practices
• Check if the language that you use has language
specific best practices
1.2 Check Docker language-specific best practices
12. 12
• A Docker image with a single application
is more:
• Maintainable
• Scalable
• Secure
• Reusable
• Portable
• Multiple processes within a container
are usually a nightmare in development
as well as in operations (see the sketch below)
1.3 Create a single application per Docker image
Image: Docker.com - What is a Container?
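A rough sketch of the idea (the image and network names are hypothetical): instead of baking a reverse proxy and the app into one image, run one process per container on a shared network:
$ docker network create mynet
$ docker run -d --network mynet --name app myapp:v1
$ docker run -d --network mynet --name proxy -p 80:80 nginx:1.21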
13. 13
• “An ephemeral container can be stopped and destroyed, then rebuilt and
replaced with an absolute minimum set up and configuration”
• Avoid dynamic configuration at runtime whenever possible
• Set configuration defaults, but don’t bake env-specific configuration
into the image (see the example below)
• Follow “The Twelve-Factor App” methodology as much as possible
1.4 Create configurable ephemeral containers
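A minimal sketch of this, assuming a hypothetical myapp.py that reads LOG_LEVEL from its environment; the image carries only a default, and each env injects its own value at runtime:
FROM python
COPY myapp.py /opt
# A sane default, not an env-specific value
ENV LOG_LEVEL=info
ENTRYPOINT ["python", "/opt/myapp.py"]

# Override per env at runtime
$ docker run -e LOG_LEVEL=debug myapp:v1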
15. 15
• A Docker image is made of layers
• Docker image layers are immutable (read-only)
• Each instruction in the Dockerfile creates a layer in the Docker image
• Previous layers cannot be changed by later instructions
• Removing files from a previous layer only hides them; they are still there (see the example below)
Understanding Docker image
ℹ Note
Only “ADD”, “COPY”, “RUN”
can create filesystem layers
(which increase the image size)
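A quick way to see this behaviour (the file name and size are just for illustration): deleting a file in a later layer does not shrink the image; only doing both steps in a single RUN keeps it out of the layers:
# 🚫 The 50MB file stays in the image, hidden in the first RUN layer
RUN dd if=/dev/zero of=/tmp/big.file bs=1M count=50
RUN rm /tmp/big.file
# ✅ Created and removed within one layer, so it is never stored in the image
RUN dd if=/dev/zero of=/tmp/big.file bs=1M count=50 && rm /tmp/big.file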
16. 16
• Use official images or from well-known identities
• Use the smallest base image that fits your use case
• Avoid using generic images when good language specific images are available
2.1 Use optimal base image
✅ Do
FROM python:3.8.10-alpine3.14

🚫 Don’t
FROM alpine:3.14
RUN apk add 'python3=3.8.10-r0'
17. 17
• Never use a base image without a tag or with the ‘latest’ tag
• Avoid pinning to a major version only
• In most cases, pinning the minor version should be fine
• Pin up to the patch version for critical components
• Also pin the versions of the dependencies
2.2 Pin versions everywhere
✅ Do
FROM python:3.8
RUN pip install Flask==2.0.0

🚫 Don’t
FROM python
RUN pip install Flask
18. 18
• As a rule of thumb, smaller Docker images are better
• However, be aware that:
• A base image that’s too small can increase the build time (CI)
• A base image that’s too big can increase the deploy time (CD)
• Try to find the sweet spot that balances build and deploy time
according to your needs and use cases
2.3 Create image with the optimal size
✅ Do (or not)
FROM node:14.17.6-alpine3.14
RUN apk add --no-cache curl
Build time: 2s (3 builds avg, no layers cache)
Image size: 120MB

🚫 Don’t
FROM alpine:3.14
RUN apk add --no-cache 'nodejs=14.17.6-r0' curl
Build time: 6s (3 builds avg, no layers cache)
Image size: 46.3MB
19. 19
• The multi-stage build feature lets you build
smaller and cleaner images by splitting
the build image from the runtime image
• It’s super useful for languages that
produce artifacts, like Golang, Java, etc.
• It’s also helpful for running various tests
during development
• Additionally, it’s better for security
because it reduces the attack surface
2.4 Use multi-stage whenever possible
✅ Do
# Build stage.
FROM maven:3.6-openjdk-17 AS builder
[...]
RUN mvn clean package
# Runtime stage.
FROM openjdk:17-jdk-alpine3.14
COPY --from=builder /myapp.jar /opt/
ENTRYPOINT ["java", "-jar", "/opt/myapp.jar"]
20. 20
• Every extra file could increase build time, image size, or even both!
• Specify only the files and paths that need to be part of the image
• Use “.dockerignore” to filter out any unnecessary files (see the example below)
• If necessary, restructure your repo/code so that only the needed files
live in separate folders
2.5 Avoid any unnecessary files
✅ Do
FROM python
# Only needed files are added to the image
COPY myapp.py /opt
ENTRYPOINT ["python", "/opt/myapp.py"]

🚫 Don’t
FROM python
# The whole repo/context is added to the image
COPY . /opt
ENTRYPOINT ["python", "/opt/myapp.py"]
22. 22
• Use images from trusted repositories
• Use official images whenever possible
• If there is no official image, use only images from well-known identities
• For critical components, don’t use public Docker repositories
• Sign your images with Docker Content Trust (DCT)
3.1 Always use trusted images
✅ Do
FROM openjdk:12

🚫 Don’t
FROM coolestGuyInTheTown/openjdk:12
23. 23
• Using a trusted image doesn’t help if untrusted resources are used in the image itself
• Always use resources from trusted sources
• When a Git resource is used, always pin to a commit hash, because Git tags are mutable
• In general, try to minimize the number of external resources used in the image
(see the checksum example below)
✅ Do
FROM alpine
# You know exactly what you get
ARG HELPER_SCRIPT_URL=https://raw.githubusercontent.com/trusted-user/awesome-scripts/5330224/some-helper-script.sh
# Or better:
COPY scripts/some-helper-script.sh /tmp

🚫 Don’t
FROM alpine
# The resource could change at any time!
ARG HELPER_SCRIPT_URL=https://raw.githubusercontent.com/random-user/awesome-scripts/master/some-helper-script.sh
3.2 Never use untrusted resources
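If an external download is truly unavoidable, one option is to pin the exact version and verify a checksum (the URL variable comes from the slide above; the hash is a placeholder):
RUN wget -O /tmp/some-helper-script.sh "$HELPER_SCRIPT_URL" \
    && echo "<expected-sha256>  /tmp/some-helper-script.sh" | sha256sum -c -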
24. 24
• Any data saved in one of the layers cannot be removed in a later layer!
It will only be hidden and can easily be retrieved
• For runtime secrets, use env vars to access the sensitive data
• For build-time secrets, use Docker BuildKit, which allows accessing sensitive data
securely during the build (never use ARG for build-time secrets)
3.3 Never store sensitive data in the image
✅ Do
# Build with BuildKit, passing the secret from the environment
$ export GITHUB_NPM_TOKEN=top_secret
$ export DOCKER_BUILDKIT=1
$ docker build --secret id=GITHUB_NPM_TOKEN .

# In the Dockerfile, the secret is mounted only during this RUN
RUN --mount=type=secret,id=GITHUB_NPM_TOKEN \
    npm set //npm.pkg.github.com/:_authToken \
    "$(cat /run/secrets/GITHUB_NPM_TOKEN)" && npm install

🚫 Don’t
# This file will be stored in the image
COPY .npmrc .
RUN npm install && rm .npmrc

# Also build args will be stored in the image
ARG GITHUB_NPM_TOKEN
RUN npm set //npm.pkg.github.com/:_authToken \
    $GITHUB_NPM_TOKEN && npm install
25. 25
• By default, Docker uses “root” to execute the container commands
• Using the root user is a bad practice and considered a security risk
• Always (or whenever possible) set the “USER” instruction to a non-root user
• Remember that the user must already exist in the Docker image
to be used with the “USER” instruction (see the example below)
3.4 Use a non-root user
✅ Do
FROM alpine
USER nobody
CMD ["whoami"]
Output: nobody

🚫 Don’t
FROM alpine
# The root user will be used to execute commands
CMD ["whoami"]
Output: root
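If the base image has no suitable user, a common pattern is to create a dedicated one first, since the “USER” instruction does not create users by itself (the names below are illustrative):
FROM alpine
# Create an unprivileged group and user first
RUN addgroup -S myapp && adduser -S myapp -G myapp
USER myapp
CMD ["whoami"]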
26. 26
• Docker image vulnerability scanning tools mainly aim to detect known vulnerabilities
in the image’s libraries
• There are many solutions and tools, like Trivy, Snyk, and scanners integrated
with cloud registries like GCR (Google Container Registry)
• Scan your images during development as well as in production
• Depending on your use case, scan your images with every build or at least daily
(see the example below)
3.5 Scan image vulnerabilities
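For illustration, a minimal scan with Trivy (the image name is hypothetical; the flags follow Trivy’s standard CLI):
$ trivy image myapp:v1
# In CI, fail the build only on serious findings
$ trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:v1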
28. 28
• As mentioned before, a Docker image
consists of a stack of immutable layers
• Each instruction in the Dockerfile is an
independent layer
• When a layer is generated, it’s cached
locally to be reused again
• However, if there is a change
in one layer, its cache is invalidated
together with all subsequent layers
4.1 Leverage Docker build cache
29. 29
• In a Dockerfile, put less frequently changing instructions at the top of the file
and more frequently changing instructions at the end (see the dependency example below)
• The Docker build cache is super helpful in local development as well as in CI/CD
(when the build is done on a single machine or with a distributed caching layer)
4.1 Leverage Docker build cache (continued)
✅ Do
FROM alpine
# The ENV and RUN layers will be reused
# even when the source code changes
ENV LOG_LEVEL=info
RUN apk add python3
COPY myapp.py /opt

🚫 Don’t
FROM alpine
# Any change in the source code will invalidate
# the cache of all subsequent layers
COPY myapp.py /opt
RUN apk add python3
ENV LOG_LEVEL=info
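The same ordering applies to dependencies; a common sketch for a Python app (file names assumed) is to copy the dependency manifest before the source code, so the install layer stays cached while the code changes:
FROM python:3.8
# Changes rarely: cached across most builds
COPY requirements.txt /opt/
RUN pip install -r /opt/requirements.txt
# Changes often: only this layer and later ones rebuild
COPY myapp.py /opt/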
30. 30
4.2 Avoid system cache
• Systems use caching to speed up operations that are used frequently
• Each system caches different things, for example package manager metadata
• When building Docker images, system caches usually don’t add any value,
since containers are immutable and each command runs in its own layer
• As a rule of thumb, avoid system caches because they increase the image size
• Remember that each system has different options to disable its cache
(see the examples below)
✅ Do
FROM alpine
RUN apk add --no-cache curl

🚫 Don’t
FROM alpine
RUN apk add curl
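Each package manager has its own switch; a couple of common examples (the exact flags depend on the tool’s version):
# Debian/Ubuntu: skip recommended packages and clean the apt metadata
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
# Python: skip pip’s download cache
RUN pip install --no-cache-dir Flask==2.0.0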
31. 31
• In general, try to build your image
the same way for all envs (e.g., dev,
stage, and prod)
• Try to make your image env-agnostic
so it works seamlessly across envs
• Utilize multi-stage builds whenever possible
and use “prod” as the base for other envs
• For advanced/complex use cases,
use Docker BuildKit, which gives you
more control over builds
✅ Do
FROM alpine AS base
RUN apk add curl

FROM base AS prod
RUN apk add python3

FROM prod AS dev
RUN apk add python3-dev

# Build dev image (build the whole file)
$ docker build -t myapp:dev .
# Build prod image (stop at the prod stage)
$ docker build --target prod -t myapp:v1 .
4.3 Create a unified image across envs
32. 32
• Both “ENTRYPOINT” and “CMD” are Dockerfile instructions
that control the default command of the Docker image
• Either “ENTRYPOINT” or “CMD” can be used independently
• However, using both at the same time makes it easier
to customize container behaviour, especially in Kubernetes
• As a rule of thumb, if your application is customizable via arguments,
use “ENTRYPOINT” for the main command and “CMD” for the default arguments
(see the runtime example below)
4.4 Use ENTRYPOINT with CMD
✅ Do
FROM alpine
ENTRYPOINT ["echo"]
CMD ["-e", "Hello\nWorld"]
34. 34
• Find the last Docker image you created and refactor it according to
the best practices in this workshop
• Integrate hadolint (Dockerfile linter) with your local IDE and your team’s CI pipeline
• Find some interesting Docker scenarios on Katacoda and get hands-on
• Advanced topics:
• Sign your Docker images with Docker Content Trust (DCT)
• Take a look at BuildKit, which is a Dockerfile-agnostic builder toolkit
More details: Faster Builds and Smaller Images Using BuildKit
• Did you know that Docker is not the only container management system?
Read more about Docker Alternative Container Tools
Next steps
36. 36
References
• Intro Guide to Dockerfile Best Practices - Docker Blog
• Best practices for writing Dockerfiles - Docker Documentation
• Image-building best practices - Docker Documentation
• Best practices for building containers - Google Cloud Architecture Center
• Top 20 Dockerfile best practices for security - Sysdig
• On Docker Articles - vsupalov.com