This document provides best practices for using Docker containers, including:
- Using "dumb-init" or "supervisord" to run multiple services in a container.
- Preferring named volumes over host volumes whenever possible, since named volumes can be managed directly and backed up easily.
- Writing useful entrypoint scripts to address startup issues when linking containers.
- Avoiding using the root user when possible for security.
- Techniques for reducing Docker image sizes such as using smaller base images, removing cache files and temporary packages, and combining Dockerfile commands.
The document also discusses Docker security topics like authenticating images, dropping unnecessary privileges, limiting resource consumption, and reducing large attack surfaces.
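The image-size and non-root recommendations above can be sketched in a minimal Dockerfile (the base image, package, and user names here are illustrative assumptions, not taken from the document):

```dockerfile
# A small base image keeps the attack surface and download size down
FROM alpine:3.19

# Combine commands into a single RUN layer; --no-cache avoids leaving
# package-index files behind in an image layer
RUN apk add --no-cache python3 \
    && addgroup -S app && adduser -S app -G app

# Avoid running as root: switch to an unprivileged user
USER app

WORKDIR /home/app
COPY --chown=app:app app.py .
ENTRYPOINT ["python3", "app.py"]
```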
Rebuild provides a solution for reproducible build environments using containers. It leverages containers to create isolated environments that can be easily shared and deployed. Users can create environments from base images or file systems, modify them, commit changes, and publish environments to internal or public registries. This allows developers to have consistent environments that work independently of their local machine configuration. Rebuild provides a simple CLI and supports multiple registries, environments on any OS, and debugging tools.
How Secure Is Your Container? ContainerCon Berlin 2016 - Phil Estes
A conference talk at ContainerCon Europe in Berlin, Germany, given on October 5th, 2016. This is a slightly modified version of my talk first used at Docker London in July 2016.
Old school presentation (2010) about Continuous Integration using Hudson, Maven, Mercurial to build a Java project with unit tests and other quality checks.
This presentation gives a brief overview of Docker architecture, explains what Docker is not, describes basic commands, and presents CI/CD as an application of Docker.
The document provides an overview of containers and Docker. It discusses why containers are important for organizing software, improving portability, and protecting infrastructure. It describes key Docker concepts like images, containers, Dockerfile for building images, and tools like Docker Compose and Docker Swarm for defining and running multi-container apps. The document recommends reading "The Art of War" and scanning systems without being detected before potentially more intrusive activities. It also briefly introduces network security pillars and buffer overflows as an attack technique.
Tokyo OpenStack Summit 2015: Unraveling Docker Security - Phil Estes
A Docker security talk that Salman Baset and Phil Estes presented at the Tokyo OpenStack Summit on October 29th, 2015. In this talk we provided an overview of the security constraints available to Docker cloud operators and users and then walked through a "lessons learned" from experiences operating IBM's public Bluemix container cloud based on Docker container technology.
This document discusses container security and provides information on various related topics. It begins with an overview of container security risks such as escapes and application vulnerabilities. It then covers security controls for containers like namespaces, control groups, and capabilities. Next, it discusses access control models and Linux security modules like SELinux and AppArmor that can provide container isolation. The document concludes with some third-party security offerings and emerging technologies that aim to enhance container security.
Linux containers provide isolation between applications using namespaces and cgroups. While containers appear similar to VMs, they do not fully isolate applications and some security risks remain. To improve container security, Docker recommends: 1) not running containers as root, 2) dropping capabilities like CAP_SYS_ADMIN, 3) enabling user namespaces, and 4) using security modules like SELinux. However, containers cannot fully isolate applications that need full hardware or kernel access, so virtual machines may be needed in some cases.
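The recommendations summarized above map roughly onto `docker run` flags and daemon settings like the following (a hedged sketch, not commands from the talk itself; `myimage` and the SELinux type are placeholders):

```shell
# 1) Run as an unprivileged user instead of root
docker run --user 1000:1000 myimage

# 2) Drop all capabilities, then add back only what the app needs
docker run --cap-drop ALL --cap-add NET_BIND_SERVICE myimage

# 3) Enable user namespace remapping daemon-wide
#    (in /etc/docker/daemon.json): { "userns-remap": "default" }

# 4) Apply a security-module label on SELinux-enabled hosts
docker run --security-opt label=type:svirt_lxc_net_t myimage
```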
Docker containers provide isolation and security by default through mechanisms like namespaces, cgroups, capabilities. Auditing tools check for vulnerabilities and configuration best practices to harden Docker hosts and images. Images should be signed, dependencies pinned, and a least privilege model used to minimize attack surface.
Docker, Linux Containers, and Security: Does It Add Up? - Jérôme Petazzoni
Containers are becoming increasingly popular. They have many advantages over virtual machines: they boot faster, have less performance overhead, and use fewer resources. However, those advantages also stem from the fact that containers share the kernel of their host instead of abstracting a new, independent environment. This sharing has significant security implications, as kernel exploits can now lead to host-wide escalations.
In this presentation, we will:
- Review the actual security risks, in particular for multi-tenant environments running arbitrary applications and code
- Discuss how to mitigate those risks
- Focus on containers as implemented by Docker and the libcontainer project, but the discussion also stands for plain containers as implemented by LXC
A brief introduction to Docker Container technology done at Gurgaon Docker Container Meetup on 30-Jan-2016.
Includes a command to launch a simple two-container linked application that hosts an Etherlite web application.
This document summarizes a presentation on testing Docker security. It discusses security mechanisms like namespaces and cgroups that Docker uses. It covers best practices like running containers as non-root users, using read-only containers and volumes, and dropping unnecessary privileges. Tools are presented for auditing the Docker host and images for vulnerabilities, like Docker Bench Security, Lynis, Docker Security Scanning, and Anchore. The document demonstrates using these tools.
Container security involves securing containers at both the host and application level. At the host level, Linux technologies like namespaces, cgroups, SELinux, and seccomp provide isolation between containers. Container images are also scanned for vulnerabilities. The OpenShift platform provides additional security features like role-based access control, network policies, encrypted communications, and controls over privileged containers and storage. Application security best practices within containers include using HTTPS, securing secrets, and API management tools.
The document discusses container security, providing advantages and disadvantages of containers as well as threats. It outlines different approaches to container security including host-based methods using namespaces, control groups, and capabilities as well as container-based scanning and digital signatures. Third-party security tools are also mentioned. The document concludes with examples of using containers for microservices and network policies for protection.
This document discusses tools and best practices for auditing Docker images for security. It begins with an introduction to Docker security concepts like namespaces, cgroups, and capabilities. It then discusses tools like Docker Security Scanning, Clair, Docker Bench Security, and Lynis that can be used to audit images. The document provides checklists for building secure Dockerfiles and consuming images. It concludes with recommendations around signing images, pinning dependencies, and using content trust and least privilege configurations.
This document discusses Docker containers and provides an introduction. It begins with an overview of Docker and how it uses containerization technology like Linux containers and namespaces to provide isolation. It describes how Docker images are composed of layers and how containers run from these images. The document then explains benefits of Docker like portability and ease of scaling. It provides details on Docker architecture and components like images, registries and containers. Finally, it demonstrates how to simply run a Docker container with a command.
Container Security: How We Got Here and Where We're Going - Phil Estes
A talk given on Wednesday, Nov. 16th at DefragCon (DefragX) on a historical perspective on container security with a look to where we're going in the future.
If you're not familiar with Docker yet, here is your chance to catch up: a quick overview of the open source Docker Engine and its associated services delivered through the Docker Hub. Jérôme will also discuss the new features of Docker 1.0 and briefly explain how you can run and maintain Docker on Azure. In addition, an Azure team member will demonstrate how to deploy Docker to Azure. The presentation will be followed by a Q&A session!
The document discusses the relative security of containers versus virtual machines. Several experts provide their views:
- Containers are weaker than VMs from a security perspective, according to one source. However, others argue containers can be just as secure as VMs if implemented properly.
- While VMs may be more secure currently, containers are catching up on security, according to a Docker engineer.
- One source argues that no software, including virtualization layers, will be perfectly secure since developers will always write code with security holes.
- For Google, security is the top priority for VMs since they provide stronger isolation than alternative options like containers or bare metal servers.
- Consumer Windows OS lacks
This presentation by Andrew Aslinger discusses best practices and pitfalls of integrating Docker into Continuous Delivery Pipelines. Learn how Andrew and his team used Docker to replace Chef to simplify their development and migration processes.
Manideep Konakandla is a security researcher who has extensively studied container security. He gives an overview of container security risks across the container pipeline. This includes securing images during building and distribution, hardening the container runtime environment, and other considerations for enterprises deploying containers like implementing security controls on daemons and hosts. He outlines best practices for minimizing risks at different stages and emphasizes the importance of maintaining up-to-date software and implementing custom security measures according to organizational needs.
Breaking and Fixing Your Dockerized Environments, OWASP AppSec USA 2016 - Manideep Konakandla
The document provides an agenda for a presentation on breaking and securing Docker container environments. The presentation covers introducing containers and Docker, risks areas for containers like images and runtimes, how to break and secure images, runtimes, daemons, and hosts. It also discusses securing the entire container pipeline including communication and registries. The presentation concludes with discussing the future of container security and references.
Server virtualization is a fundamental technological innovation that is used extensively in IT enterprises. Server virtualization enables the creation of multiple virtual machines on a single underlying physical machine. It is realized either in the form of hypervisors or containers. A hypervisor is an extra layer of abstraction between the hardware and virtual machines that emulates the underlying hardware. In contrast, the more recent container-based virtualization technology runs on the host kernel without an additional layer of abstraction. Container technology is therefore expected to provide near-native performance compared to hypervisor-based technology. We have conducted a series of experiments to measure and compare the performance of workloads on hypervisor-based virtual machines, Docker containers, and a native bare-metal machine, using a standard benchmark suite that stresses CPU, memory, disk I/O, and the system as a whole. The results show that Docker containers provide better or similar performance compared to traditional hypervisor-based virtual machines in almost all the tests. However, as expected, the native system still provides the best performance compared to either containers or hypervisors.
Rooting Out Root: User Namespaces in Docker - Phil Estes
This talk on the progress of bringing user namespace support into Docker was presented by Phil Estes at LinuxCon/ContainerCon 2015 on Wednesday, Aug. 19th, 2015.
1. Docker allows creating lightweight virtual environments called containers that package code and dependencies together. Containers are more portable than virtual machines.
2. Docker uses images to build containers. Images are immutable templates and containers are instances of images that can be run. The Dockerfile defines how to build images.
3. Common Docker commands include docker pull to download images, docker run to create and start containers, docker exec to run commands in running containers, and docker commit to save container changes as new images.
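The command lifecycle described in points 1-3 might look like this in practice (the image tag and container name are illustrative, and the long-running command is just a stand-in for a real workload):

```shell
docker pull ubuntu:22.04                    # download an image
docker run -d --name web ubuntu:22.04 \
    sleep infinity                          # create and start a container
docker exec web ls /                        # run a command in the running container
docker commit web my-ubuntu:snapshot        # save container changes as a new image
```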
Why everyone is excited about Docker (and you should too...) - Carlo Bonamic... - Codemotion
In less than two years Docker went from first line of code to major Open Source project with contributions from all the big names in IT. Everyone is excited, but what's in it for me as a Dev or an Ops? In short, Docker makes creating Development, Test, and even Production environments an order of magnitude simpler and faster, and completely portable across both local and cloud infrastructure. We will start from Docker's main concepts: how to create a Linux container from base images, run your application in it, and version your runtimes as you would with source code, and finish with a concrete example.
Docker development best practices recommend using Docker Hub or CI/CD pipelines to build and tag Docker images on pull requests. Images should be signed by development, security, and testing teams before production. Different environments should be used for development and testing. Docker should be kept updated to the latest version for security features.
Docker container Best Practices include frequently backing up a single manager node for restoration. Cloud deployment of containers on AWS or Azure uses Kubernetes. Load balancers like NGINX help control Docker containers for availability and scalability.
Dockerfile best practices are to not use Dockerfiles as build scripts, to define environment variables, to commit Dockerfiles to repositories, to be mindful of base image size, and to not expose secrets.
Numerous application packaging and delivery tools are available on the global market, and among them Docker has earned a prominent reputation with countless organizations around the globe.
Docker is a containerization platform that packages applications and dependencies into containers that can run on any infrastructure. Containers are more lightweight than virtual machines and provide operating-system-level virtualization. The key Docker components are the Docker Engine (including the daemon and client), images, containers, registries, and networks. Dockerfiles define how to build images automatically by running commands. Images act as templates for containers, which are lightweight and portable environments for applications.
Top 6 Practices to Harden Docker Images to Enhance Security - 9 series
Docker is often treated as synonymous with containers. A variety of container tools and platforms are used to build containers more productively. There are, however, many principles for protecting container-based applications by integrating them with other secured applications.
This document discusses Docker, containers, and how Docker addresses challenges with complex application deployment. It provides examples of how Docker has helped companies reduce deployment times and improve infrastructure utilization. Key points covered include:
- Docker provides a platform to build, ship and run distributed applications using containers.
- Containers allow for decoupled services, fast iterative development, and scaling applications across multiple environments like development, testing, and production.
- Docker addresses the complexity of deploying applications with different dependencies and targets by using a standardized "container system" analogous to intermodal shipping containers.
- Companies using Docker have seen benefits like reducing deployment times from 9 months to 15 minutes and improving infrastructure utilization.
This document discusses Docker, containers, and containerization. It begins by explaining why containers and Docker have become popular, noting that modern applications are increasingly decoupled services that require fast, iterative development and deployment to multiple environments. It then discusses how deployment has become complex with diverse stacks, frameworks, databases and targets. Docker addresses this problem by providing a standardized way to package applications into containers that are portable and can run anywhere. The document provides examples of results organizations have seen from using Docker, such as significantly reduced deployment times and increased infrastructure efficiency. It also covers Docker concepts like images, containers, the Dockerfile and Docker Compose.
Best Practices for Developing & Deploying Java Applications with Docker - Eric Smalling
This document provides a summary of best practices for developing and deploying Java applications with Docker. It begins with an introduction and overview of Docker terminology. It then demonstrates how to build a simple Java web application as a Docker image and run it as a container. The document also covers deploying applications to clusters as services and stacks, and techniques for application management, configuration, monitoring, troubleshooting and logging in Docker environments.
Docker introduction.
References: The Docker Book: Containerization is the new virtualization
http://www.amazon.in/Docker-Book-Containerization-new-virtualization-ebook/dp/B00LRROTI4/ref=sr_1_1?ie=UTF8&qid=1422003961&sr=8-1&keywords=docker+book
Real-World Docker: 10 Things We've Learned - RightScale
Docker has taken the world of software by storm, offering the promise of a portable way to build and ship software - including software running in the cloud. The RightScale development team has been diving into Docker for several projects, and we'll share our lessons learned on using Docker for our cloud-based applications.
This document provides an overview of Docker for web developers. It defines containers and Docker, discusses the benefits of Docker like faster deployment and portability. It explains key Docker concepts like images, containers, Dockerfile for building images, Docker platform, and commands for managing images and containers. The document also describes what happens behind the scenes when a container is run, and how to install and use Docker on Linux, Windows and Mac.
The document provides an overview of Docker for web developers. It defines containers and Docker, explaining that Docker allows developers to package applications into standardized units for development, shipment and deployment. It covers Docker concepts like images, containers, Dockerfiles and registries. It also discusses how to install Docker, manage images and containers, configure networking, mount volumes, and allow communication between containers. The goal is to explain the key Docker concepts and components to help developers understand and use Docker.
Introduction to Docker and Monitoring with InfluxDataInfluxData
In this webinar, Gary Forgheti, Technical Alliance Engineer at Docker, and Gunnar Aasen, Partner Engineering, provide an introduction to Docker and InfluxData. From there, they will show you how to use the two together to setup and monitor your containers and microservices to properly manage your infrastructure and track key metrics (CPU, RAM, storage, network utilization), as well as the availability of your application endpoints.
DCSF 19 Building Your Development Pipeline Docker, Inc.
Oliver Pomeroy, Docker & Laura Tacho, Cloudbees
Enterprises often want to provide automation and standardisation on top of their container platform, using a pipeline to build and deploy their containerized applications. However this opens up new challenges; Do I have to build a new CI/CD Stack? Can I build my CI/CD pipeline with Kubernetes orchestration? What should my build agents look like? How do I integrate my pipeline into my enterprise container registry? In this session full of examples and how-to's, Olly and Laura will guide you through common situations and decisions related to your pipelines. We'll cover building minimal images, scanning and signing images, and give examples on how to enforce compliance standards and best practices across your teams.
DCEU 18: Building Your Development PipelineDocker, Inc.
This document discusses building a development pipeline using containers. It outlines using containers for building images, automated testing, security scanning, and deploying to production. Containers make environments consistent and reproducible. The pipeline includes building images, testing, security scanning, and promoting images to production. Methods discussed include using multi-stage builds to optimize images, leveraging Buildkit for faster builds, and parallel testing across containers. Automated tools are available to implement rolling updates and rollbacks during deployments.
Docker allows developers to package applications with all of their dependencies into standardized units called containers that can run on any infrastructure regardless of the underlying operating system. It provides isolation and security so that many containers can run simultaneously on a single host. The document discusses how to set up both new and existing Magento projects using Docker, including downloading necessary files, importing databases, and using important Docker commands.
This document provides an overview of Docker containers. It defines containers as lightweight sandboxed processes that share the same kernel as the host operating system. The key benefits of containers are that they have lower overhead than virtual machines and allow for the easy sharing and distribution of applications. The document discusses Docker images, containers, the client-server architecture, and basic Docker commands. It also covers use cases, the layered filesystem model, and security considerations when using containers.
What's Docker and How to use?
This presentation and demo will help you understand the basic concepts of Docker and the use cases.
Reference: https://github.com/snese/docker101-examples
Docker security: Rolling out Trust in your containerRonak Kogta
This document discusses various security aspects of Docker containers. It covers topics like Docker isolation, limiting privileges through capabilities and namespaces, filesystem security using SELinux/AppArmor, image signing with Docker Content Trust and Notary to ensure integrity, and tools like DockerBench for security best practices. The document emphasizes that with Docker, every process should only access necessary resources and taking a least privilege approach is important for security.
This document provides an overview of Docker, explaining that Docker is an engine that sits between the OS and containers to enable rapid application deployment. It describes Docker components like images, containers, and repositories. Images are templates used to deploy containers, with images built from Dockerfiles that define layers. The document highlights that containers are stateless, and various strategies for handling configuration files. It also notes drawbacks like containers being read-only, and tips like using base images and keeping the firewall on.
The document provides an overview and agenda for Docker in Action. It discusses key Docker concepts like images and containers, the Docker architecture involving clients, daemons and registries, and daily Docker operations like building new images, deploying code updates, and viewing logs. Installation instructions are also included for Windows, Linux and macOS.
"Docker best practice", Stanislav Kolenkin (Senior DevOps, DataArt)
1.
2. New York, USA
London, UK
Munich, Germany
Zug, Switzerland
Docker Containers. Best practices.
by Stanislav Kolenkin, Senior DevOps.
3. What is a container? Docker?
• Hardware can't be replicated; software can be.
• A container is a software app packaged with everything required to execute it.
• A packaged container always runs the same, regardless of the environment.
• There are Linux and Windows versions.
• No conflicts.
4. Dockerfile
• One container, one process.
• That process always runs as PID 1.
• On docker stop, the process receives SIGTERM (graceful exit).
• After 10 seconds it receives SIGKILL (hard exit); signal handlers won't fire.
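As a sketch of why this matters (the image and app path here are hypothetical), using the exec form of CMD keeps your application as PID 1, so the SIGTERM from docker stop actually reaches it:

```dockerfile
FROM python:3.12-slim
COPY app.py /app/app.py
# Exec form: python runs as PID 1 and receives SIGTERM from `docker stop`.
CMD ["python", "/app/app.py"]
# The shell form (CMD python /app/app.py) would wrap the app in `/bin/sh -c`,
# and the shell, not your app, would receive the signal.
```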
5. Dockerfile
1 - Use dumb-init
"dumb-init is a simple process supervisor and init system designed to run as PID 1 inside minimal container environments (such as Docker)."
2 - Use supervisord when you need to run several services in a single container.
supervisord is quite useful when it comes to running more than one process in your Docker container.
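A minimal sketch of wiring dumb-init in as PID 1 (base image and app path are illustrative; dumb-init is also available via distribution packages, not only pip):

```dockerfile
FROM python:3.12-slim
# dumb-init runs as PID 1, forwards signals to the child, and reaps zombies.
RUN pip install --no-cache-dir dumb-init
ENTRYPOINT ["dumb-init", "--"]
CMD ["python", "/app/app.py"]
```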
6. Dockerfile
Use named volumes over host volumes, whenever possible.
Docker knows two types of volumes: named volumes and host volumes. Prefer named volumes because:
1. Named volumes can be directly controlled (created, removed) via docker volume.
2. Named volumes are independent of any paths on the host.
3. Named volumes can be backed up and restored easily.
4. Named volumes are created automatically by your docker-compose file.
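A docker-compose sketch of a named volume (service, image, and volume names are hypothetical): "dbdata" is managed by Docker itself rather than bound to a host path.

```yaml
services:
  db:
    image: postgres:16
    volumes:
      # Named volume, not a host bind mount: no host path in sight.
      - dbdata:/var/lib/postgresql/data

volumes:
  dbdata:
```

Running docker compose up creates the volume automatically, and it survives container removal; docker volume ls / docker volume rm manage it directly.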
7. Dockerfile
Write a useful entrypoint script.
Writing entrypoint scripts is a high art and usually requires wizardry in shell scripting. Especially when linking containers (e.g. a web app and a database) in a docker-compose file, a good startup script has a lot of potential problems to solve:
1. Check preconditions before you fire up your main process.
2. Make use of commands and args in your entrypoint script.
3. Use exec in your entrypoint script.
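The three points above can be sketched in a minimal entrypoint (the file path and the APP_CONFIG variable are hypothetical): fail fast on a missing precondition, accept the container's command and args, and exec so the app replaces the shell and runs as PID 1.

```shell
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
set -e
# 1. Precondition: fail fast if required configuration is missing.
: "${APP_CONFIG:?APP_CONFIG must be set}"
# 2 & 3. Hand over to whatever command/args were given, via exec,
# so the app becomes PID 1 and receives signals directly.
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh
APP_CONFIG=/etc/app.conf /tmp/entrypoint.sh echo "starting app"  # prints "starting app"
```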
8. Dockerfile
Don't use the root user, when possible:
● It is more secure.
● When your service faces the internet, it is a requirement.
● It will not grant a malicious script privileged execution.
When using dev containers, use a welcome message.
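A sketch of dropping root in a Dockerfile (the user name and uid are arbitrary choices, not requirements):

```dockerfile
FROM debian:bookworm-slim
# Create an unprivileged user and switch to it; everything after USER
# (including the container's main process) runs without root.
RUN useradd --create-home --uid 10001 app
USER app
WORKDIR /home/app
CMD ["sleep", "infinity"]
```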
9. Layers
Before we can talk about how to trim down the size of your images, we
need to discuss layers. The concept of image layers involves all sorts of
low-level technical details about things like root filesystems, copy-on-write
and union mounts -- luckily those topics have been covered pretty well
elsewhere so I won't rehash those details here. For our purposes, the
important thing to understand is that each instruction in your Dockerfile
results in a new image layer being created.
11. Reduce Docker Image Sizes
Clean your apt/yum cache, and do it the right way.
Use a smaller base image:
- Image size equals the sum of the sizes of the layers that make it up.
- Each additional instruction in the Dockerfile increases the size of the image.
Don't install debug tools like vim/curl.
Use --no-install-recommends on apt-get install.
But how do I debug?
For a more detailed description and actions, refer to Appendix section 2.
12. Reduce Docker Image Sizes
To close this section: in my experience, the optimal number of layers is about 12. Yes, the overlay2 driver supports 128 layers, but it's better to stay well below that. You will feel it especially when you use the rolling update mechanism in Kubernetes: many layers slow down image pulls and container launch.
Use the following utilities when writing your Dockerfiles:
● FromLatest.io
● imagelayers.io
13. Difference between save and export
Docker is based on so-called images. These images are comparable to virtual machine images and contain files, configuration and installed programs. And just like virtual machine images, you can start instances of them. A running instance of an image is called a container. You can make changes to a container (e.g. delete a file), but these changes will not affect the image. However, you can create a new image from a running container (and all its changes) using docker commit <container-id> <image-name>. In short: docker save works on images and preserves their layers and metadata, while docker export dumps a container's filesystem into a flat tarball.
For a more detailed description and actions, refer to Appendix section 3.
14. Multi-stage builds
Multi-stage builds are a feature requiring Docker 17.05 or higher on both the daemon and the client. Multi-stage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain.
With a statically compiled language like Golang, people tended to derive their Dockerfiles from the Golang "SDK" image, add source, do a build, then push the result to the Docker Hub. Unfortunately, the size of the resulting image was quite large - at least 670 MB.
15. Multi-stage builds
A workaround, informally called the builder pattern, involves using two Docker images - one to perform a build and another to ship the results of the first build without the penalty of the build chain and tooling in the first image.
As a result of this approach, we get images without extra packages and, accordingly, of smaller size. We also do not need to copy files from one image to the host system and then into the final image.
For a more detailed description and actions, refer to Appendix section 4.
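The builder pattern collapses into a single multi-stage Dockerfile like this sketch (paths and versions are illustrative; it assumes a Go module in the build context):

```dockerfile
# Stage 1: full Go toolchain, used only to compile the binary.
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: ship only the static binary on a tiny base image.
FROM alpine:3.20
COPY --from=builder /app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

Only the final stage ends up in the pushed image; the toolchain stage is discarded, which is what shrinks the result from hundreds of megabytes to tens.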
16. Docker Security
Below are five common scenarios where deploying Docker images opens up new kinds of security issues you might not have considered, along with tools and advice you can use to make sure you aren't leaving the barn doors open when you deploy:
1 - Image Authenticity
2 - Excess Privileges
3 - System Security
4 - Limit Available Resource Consumption
5 - Large Attack Surfaces
17. Docker Security
Image authenticity includes the following points:
1.1 - Use private or trusted repositories
1.2 - Use Docker Content Trust
1.3 - Docker Bench for Security (not supported on macOS)
18. Docker Security
1.1 Use Private or Trusted Repositories
You can use private and trusted repositories such as Docker Hub's official repositories. Docker Cloud and Docker Hub can scan images in private repositories to verify that they are free from known security vulnerabilities or exposures, and report the results of the scan for each image tag. As a result, by using the official repositories, you can be reasonably confident that your containers are safe to use and don't contain malicious code.
19. Docker Security
1.2 Use Docker Content Trust
Before a publisher pushes an image to a remote registry, Docker Engine
signs the image locally with the publisher’s private key. When you later pull
this image, Docker Engine uses the publisher’s public key to verify that the
image you are about to run is exactly what the publisher created, has not
been tampered with and is up to date.
To summarize, the service protects against image forgery, replay attacks,
and key compromises. I strongly encourage you to check out the article, as
well as the official documentation.
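Content Trust is opt-in per client; setting one environment variable makes subsequent docker pull/push operations require signed images:

```shell
# Enable Docker Content Trust for this shell session; unsigned images
# will then be rejected by pull/push/run.
export DOCKER_CONTENT_TRUST=1
echo "$DOCKER_CONTENT_TRUST"  # prints 1
```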
20. Docker Security
1.3 Docker Bench for Security
Checks for dozens of common best practices around deploying Docker containers in production. The tool is based on the recommendations in the CIS Docker 1.13 Benchmark and runs checks against the following six areas:
1. Host configuration
2. Docker daemon configuration
3. Docker daemon configuration files
4. Container images and build files
5. Container runtime
6. Docker security operations
22. Docker Security
2 Excess Privileges
With respect to Docker, I'm specifically focused on two points:
2.1 - Containers running in privileged mode
2.2 - Excess privileges used by containers
Starting with the first point: you can run a Docker container with the --privileged switch. This gives extended privileges to the container, and that is not good: it gives the container all capabilities, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything the host can do. This flag exists to allow special use cases, like running Docker within Docker.
23. Docker Security
Container breakout to the host: containers might run as the root user, making it possible to use privilege escalation to break the "containment" and access the host's operating system. Common vectors:
● Kernel vulnerabilities.
● Bad configuration.
● Mounted filesystems.
● Mounted Docker socket.
24. Docker Security
Drop Unnecessary Privileges and Capabilities
● Avoid privileged mode and broad capabilities such as CAP_SYS_ADMIN.
● So that root inside the container is the equivalent of a normal user outside it, create an isolated user namespace for your containers. If possible, avoid running containers with uid 0.
● Pull images only from a trusted repository.
● Do not expose sensitive host paths such as /var/run/docker.sock, /proc, /dev, etc.
For a more detailed description and actions, refer to Appendix section 5.
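A docker-compose sketch of the least-privilege idea (service and image names are hypothetical): drop every capability, add back only what the service actually needs, and keep privileged mode off.

```yaml
services:
  web:
    image: nginx:1.27
    privileged: false       # never grant the --privileged superset
    cap_drop:
      - ALL                 # start from zero capabilities...
    cap_add:
      - NET_BIND_SERVICE    # ...and add back only what's required
    read_only: true         # root filesystem mounted read-only
```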
25. Docker Security
3 - System Security
On a compromised host, the isolation and other security mechanisms of containers are unlikely to help. In addition, the system is designed in such a way that containers use the host's kernel. For many reasons you already know, this increases efficiency, but from a security point of view this shared kernel is a threat that must be dealt with.
26. Docker Security
Approaches to host security:
● Make sure the configuration of the host and the Docker engine is secure (access is limited and granted only to authenticated users, the communication channel is encrypted, etc.). To test the configuration against best practices, I recommend the Docker Bench audit tool.
27. Docker Security
● Update the system in a timely manner, and subscribe to the security mailing lists for the operating system and other installed software, especially if it is installed from third-party repositories (for example, container orchestration systems, one of which you've probably already installed).
● Use minimal host systems specifically designed to run containers, such as CoreOS, Red Hat Atomic, RancherOS, etc. This reduces the attack surface and lets you take advantage of convenient features such as running system services in containers.
28. Docker Security
● To prevent undesirable operations on both the host and the containers, use a Mandatory Access Control system. Tools such as Seccomp, AppArmor or SELinux will help you here.
29. Docker Security
4 - Limit Available Resource Consumption
On average, containers are much more numerous than virtual machines. They are lightweight, which allows you to run a lot of containers even on very modest hardware. This is definitely an advantage, but the other side of the coin is serious competition for the host's resources. Software errors, design flaws and hacker attacks can all lead to denial of service. To prevent them, you must configure resource limits properly.
For a more detailed description and actions, refer to Appendix section 6.
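A docker-compose sketch of such limits (the numbers are illustrative, not recommendations): cap memory, CPU, and process count so one misbehaving container cannot starve the host.

```yaml
services:
  worker:
    image: busybox:1.36
    mem_limit: 256m     # hard memory ceiling
    cpus: 0.5           # at most half a CPU core
    pids_limit: 100     # guards against fork bombs
```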
30. Docker Security
5 - Large Attack Surfaces
Containers are isolated black boxes. As long as they perform their functions, it's easy to forget which programs, and which versions of them, are running inside. A container can handle its duties perfectly from an operational point of view while running vulnerable software. These vulnerabilities may have long been fixed upstream, but not in your local image. If you do not take appropriate measures, problems of this kind may remain unnoticed for a long time.
31. Docker Security
● Advanced troubleshooting with sysdig
● More advanced troubleshooting with perf
● slabtop
For a more detailed description and actions, refer to Appendix section 7.
33. Appendix 1: Layers
Let's look at an example Dockerfile to see this in action:
FROM ubuntu:latest
LABEL maintainer="Stanislav Kolenkin stas.kolenkin@gmail.com"
RUN apt-get update
RUN apt-get install -y python python-pip wget
34. Appendix 1: Layers
Let's build this image and check the number of layers:
docker build -t test .
….
done.
---> ce11bc61d46c
Removing intermediate container 9b0787022031
Successfully built ce11bc61d46c
Successfully tagged test:latest
38. Appendix 1: Layers
Let's give it a shot and optimize our image:
cat Dockerfile
FROM ubuntu:latest
LABEL maintainer="Stanislav Kolenkin stas.kolenkin@gmail.com"
RUN apt-get update && apt-get install -y python python-pip wget vim
40. Appendix 1: Layers
Let's look at the output of the docker history command and check the number of layers.
We often talk about layers and images as if they were different things. But in fact, every layer is already an image, and an image is just a collection of layers, each of which is itself an image.
41. Appendix 1: Layers
We can run a container in either of the following ways:
docker run -it sample:latest /bin/bash
or
docker run -it d355ed3537e9 /bin/bash
Both are images from which containers can be launched. The only difference is that the first one is named and the second one is not. This ability to run containers from any layer can be very useful when debugging your Dockerfile.
43. Appendix 2: Reduce Docker Image Sizes
Clean your apt/yum cache, and do it the right way: the cleanup command must go in the same layer (the same RUN instruction) as the package installation commands.
FROM ubuntu:latest
LABEL maintainer="Stanislav Kolenkin stas.kolenkin@gmail.com"
RUN apt-get update && \
    apt-get install -y python python-pip wget vim && \
    apt-get remove -y python-pip && \
    rm -rf /var/lib/apt/lists/*
44. Appendix 2: Reduce Docker Image Sizes
Use a smaller base image:
- Image size equals the sum of the sizes of the layers that make it up.
- Each additional instruction in the Dockerfile increases the size of the image.
The Ubuntu image will set you back 128 MB at the outset. Consider using a smaller base image. Each apt-get install or yum install line you add to your Dockerfile increases the size of the image by the size of that library. Realize that you probably don't need many of the libraries you are installing.
Consider using an alpine base image (only 5 MB in size). Most likely, there are alpine tags for the programming language you are using. For example, Python has 2.7-alpine (~50 MB) and 3.5-alpine (~65 MB).
48. Appendix 2: Reduce Docker Image Sizes
Don't install debug tools like vim/curl:
Many developers install vim and curl in their Dockerfiles for debugging purposes. Do so only if your application depends on them; otherwise this defeats the purpose of using a small base image.
But how do I debug?
One technique is to have a development Dockerfile and a production Dockerfile. During development, have all of the tools you need, and then when deploying to production, remove the development tools.
49. Appendix 2: Reduce Docker Image Sizes
Use --no-install-recommends on apt-get install.
Adding --no-install-recommends to apt-get install -y can dramatically reduce the size by avoiding the installation of packages that aren't technically dependencies but are recommended alongside the packages you asked for.
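A sketch combining this flag with same-layer cache cleanup (base image and package are illustrative):

```dockerfile
FROM debian:bookworm-slim
# --no-install-recommends skips the "Recommends" packages;
# the rm in the same RUN keeps the apt cache out of the layer.
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates && \
    rm -rf /var/lib/apt/lists/*
```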
51. Appendix 2: Reduce Docker Image Sizes
Add rm -rf /var/lib/apt/lists/* to the same layer as the apt-get installs: put it at the end of the apt-get -y install command to clean up after installing packages. For yum, add yum clean all.
Also, if you install wget or curl only in order to download some package, remember to combine everything in one RUN statement; then, at the end of that RUN statement, apt-get remove curl or wget once you no longer need them. This advice goes for any package that you only need temporarily.
52. Appendix 2: Reduce Docker Image Sizes
Flatten Docker images/containers
It is only possible to "flatten" a Docker container, not an image, so we
need to start a container from an image first. Then we can export and import
the container in one line:
docker run -it ubuntu bash -c "exit"
docker ps -a | grep ubuntu
bda68042f324 ubuntu "bash -c exit" 2 seconds ago Exited (0) 8 seconds ago keen_turing
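The export/import one-liner itself might look like this (the container name comes from the docker ps output above; the target image name is illustrative):

```shell
docker export keen_turing | docker import - ubuntu-flat:latest
```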
55. Appendix 2: Reduce Docker Image Sizes
The image size is now 85.8 MB.
● The image has 1 layer.
The drawback: the build cache mechanism becomes complicated to use,
as it is based on layers and RUN instructions.
57. Appendix 2: Reduce Docker Image Sizes
docker-squash is a utility to squash multiple docker layers into one in order
to create an image with fewer and smaller layers.
It retains Dockerfile instructions such as EXPOSE, ENV, etc., so that
squashed images work the same as they were originally built. In addition,
files deleted in later layers are actually purged from the image when squashed.
It's designed to support a workflow where you would squash the image just
before pushing it to a registry. Before squashing the image, you would
remove any build time dependencies, extra files (apt caches, logs, private
keys, etc..) that you would not want to deploy. The defaults also preserve
your base image so that its contents are not repeatedly transferred when
pushing and pulling images.
59. Appendix 2: Reduce Docker Image Sizes
Run docker-squash and check size.
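A typical docker-squash invocation follows its documented save | squash | load pipeline (the image names are illustrative):

```shell
docker save myimage:latest | docker-squash -t myimage:squashed | docker load
```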
60. Appendix 2: Reduce Docker Image Sizes
Note that since version 1.13, docker-engine in experimental mode can
assemble squashed images directly: add the --squash option to the build
command. However, running docker-squash usually yields a smaller image.
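With experimental mode enabled on Docker 1.13+, the built-in variant looks like this (the image name is illustrative):

```shell
docker build --squash -t myimage:squashed .
```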
61. Appendix 2: Reduce Docker Image Sizes
Use the following utilities for writing your Dockerfiles:
● FromLatest.io
● imagelayers.io
66. Appendix 3: Difference between save and export
Now we have two different images (busybox and busybox-1) and we have a
container made from busybox which also contains the change (the new
folder /home/test). Let’s see how we can persist our changes.
Export
Export is used to persist a container (not an image), so we need the
container ID, which we can find like this:
docker ps -a
docker export CONTAINER-ID > export.tar
The result is a TAR file, which should be around 1.2 MB (slightly smaller
than the one from save).
67. Appendix 3: Difference between save and export
Save
Save is used to persist an image (not a container), so we need the image
name, like this:
docker images
docker save busybox-1 > /home/save.tar
The result is a TAR file, which should be around 1.3 MB (slightly bigger
than the one from export).
68. Appendix 3: Difference between save and export
The difference
Now that we have created our TAR files, let's see what we have. First of all,
we clean up a little bit – we remove all busybox containers and images we
have right now:
docker ps -a |grep busybox
docker rm CONTAINER-ID
docker images|grep busybox
docker rmi busybox-1
docker rmi busybox
70. Appendix 3: Difference between save and export
We start with the export of the container that we did earlier.
Import it like this:
cat export.tar | sudo docker import - busybox-1-export:latest
docker images | grep busybox
docker run busybox-1-export [ -d /home/test ] && echo 'Directory found' || echo 'Directory not found'
72. Appendix 3: Difference between save and export
Now for the save we did from the image. We can load it like this:
docker load < save.tar
docker images | grep busybox-1
docker run busybox-1 [ -d /home/test ] && echo 'Directory found' || echo 'Directory not found'
74. Appendix 3: Difference between save and export
So what's the difference between the two?
Well, as we saw, the exported version is slightly smaller. That is because it is
flattened, which means it has lost its history and metadata. We can see this
with the following command:
alias dockviz="docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock nate/dockviz"
dockviz images -t | grep busybox
75. Appendix 3: Difference between save and export
The exported-then-imported image has lost all of its history, whereas the
saved-then-loaded image still has its history and layers. This means that
you cannot roll back to a previous layer if you export-import, while you
still can if you save-load the whole (complete) image.
76. Appendix 4: Multi-stage builds
Dockerfile.multi
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
78. Appendix 4: Multi-stage builds
As you can see, the first of the two stages uses golang to build the
application, while the resulting image is compact and contains no golang.
79. Appendix 4: Multi-stage builds
By default, the stages are unnamed. In this example, we assign a name to the first
stage and use it in the COPY statement.
FROM golang:1.7.3 as builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
80. Appendix 4: Multi-stage builds
As a result of this approach, we get images without extra packages
and, accordingly, of smaller size. We also no longer need to copy files out
to the host system from one image and then into the next image.
81. Appendix 5: Docker Security
You can add or remove privileges with the --cap-drop and --cap-add
flags.
For in-depth coverage of these options, refer to the "Runtime privilege and
Linux capabilities" section of the documentation.
If you create a container without a user namespace, then by default the
processes running inside the container will, from the host's point of view,
run as the superuser.
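For example, a common least-privilege pattern is to drop all capabilities and add back only the one needed to bind to a low port (the image name is illustrative):

```shell
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE -p 80:80 mywebserver
```

If the process inside the container is then compromised, it cannot use any other capability, even though it may still be running as root inside the container.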
82. Appendix 6: Docker Security
Standard limits:
--blkio-weight uint16 Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
--blkio-weight-device list Block IO weight (relative device weight) (default [])
--device-read-bps list Limit read rate (bytes per second) from a device (default [])
--device-read-iops list Limit read rate (IO per second) from a device (default [])
--device-write-bps list Limit write rate (bytes per second) to a device (default [])
--device-write-iops list Limit write rate (IO per second) to a device (default [])
--kernel-memory bytes Kernel memory limit
--label-file list Read in a line delimited file of labels
-m, --memory bytes Memory limit
--memory-reservation bytes Memory soft limit
--memory-swap bytes Swap limit equal to memory plus swap: '-1' to enable unlimited swap
--pids-limit int Tune container pids limit (set -1 for unlimited)
--ulimit ulimit Ulimit options (default [])
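A sketch combining a few of these limits in one docker run invocation (the values and image name are illustrative, not recommendations):

```shell
docker run -d \
  -m 512m \
  --memory-swap 512m \
  --pids-limit 100 \
  --device-read-bps /dev/sda:10mb \
  myapp
```

Here the container is capped at 512 MB of memory with no additional swap, at most 100 processes (which also blunts fork bombs), and 10 MB/s of reads from /dev/sda.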
86. Appendix 7: Debugging
More advanced troubleshooting with perf
At this point, it's worth switching troubleshooting tools once again and going
one level deeper, this time using perf, the performance-analysis tool shipped
with the kernel. Its interface is a bit hostile, but it does a wonderful job of
profiling kernel activity.
To get a clue about where those lstat() system calls are spending their time,
we can just grab the PID of the worker process and pass it to perf top, which
has previously been set up with kernel debugging symbols.
Perf will then instrument the execution of the worker process and show us
in which functions, either in user space or in kernel space (executing on
behalf of the process), the majority of time is spent.
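The commands described above might look like this (the process name and PID are illustrative; perf must match your running kernel and have debugging symbols available):

```shell
# find the worker's PID, then profile it live
pidof my-worker
sudo perf top -p 12345
```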
87. Appendix 7: Debugging
slabtop
The Linux kernel needs to allocate memory for temporary objects such as
task or device structures and inodes. A caching memory allocator manages
caches of these types of objects: the modern Linux kernel implements this
allocator to hold caches called slabs, and the slab allocator maintains
different types of slab caches. This section concentrates on the slabtop
command, which shows real-time kernel slab cache information.
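slabtop can be run interactively, or in one-shot mode with a chosen sort order (-o prints the output once and exits; -s c sorts by cache size):

```shell
sudo slabtop -o -s c
```

The dentry and inode caches near the top of the listing are often the first place to look when a container workload is churning through filesystem metadata.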