Tips for generating compliant Docker
images with Dev(Sec)Ops
Thierry GAYET - 01/2024
GOAL
The purpose of this presentation is to provide
useful information on good practices for
building Docker containers.
THE DOCKER ARCHITECTURE
ARCHITECTURE
SEQUENCE
STATES
STATES
GENERATING A DOCKER’S IMAGE
WORKFLOW
# syntax=docker/dockerfile:1
FROM ubuntu:22.04
COPY . /app
RUN make /app
CMD python /app/app.py
Each instruction creates one layer :
● FROM creates a layer from the ubuntu:22.04 Docker image.
● COPY adds files from your Docker client's current directory.
● RUN builds your application with make.
● CMD specifies what command to run within the container.
[EXAMPLE] DOCKERFILE
REFERENCES
Never use an undefined tag such as:
FROM ubuntu
or
FROM ubuntu:latest
This makes the build unpredictable and non-reproducible because the tag can change!
So always use a specific tag such as:
FROM ubuntu:22.04
Official tags are available on Docker Hub: https://hub.docker.com/_/ubuntu/tags
REFERENCES
https://hub.docker.com/_/ubuntu/tags
DYNAMIC FILE GENERATION
WITHIN A DOCKERFILE
DYNAMIC FILE GENERATION ON A DOCKERFILE
It can be useful to generate some files dynamically:
# syntax=docker/dockerfile:1
FROM golang:1.21
WORKDIR /src
COPY <<EOF ./main.go
package main
import "fmt"
func main() {
fmt.Println("hello, world")
}
EOF
RUN go build -o /bin/hello ./main.go
Understanding docker’s layers
Layers
The order of Dockerfile instructions matters. A Docker build consists of a series of ordered build instructions. Each instruction in a
Dockerfile roughly translates to an image layer. The following diagram illustrates how a Dockerfile translates into a stack of layers
in a container image.
Because of the current order of the Dockerfile instructions, the builder must download the Go modules again, despite none of the
packages having changed since the last time.
Cached layers
When you run a build, the builder attempts to reuse layers from earlier builds. If a layer of an image is unchanged, then the builder
picks it up from the build cache. If a layer has changed since the last build, that layer, and all layers that follow, must be rebuilt.
The Dockerfile from the previous section copies all project files to the container (COPY . .) and then downloads application
dependencies in the following step (RUN go mod download). If you were to change any of the project files, then that would
invalidate the cache for the COPY layer. It also invalidates the cache for all of the layers that follow.
Update the instruction order
You can avoid this redundancy by reordering the instructions in the Dockerfile. Change the order of the instructions so that
downloading and installing dependencies occur before the source code is copied over to the container.
In that way, the builder can reuse the "dependencies" layer from the cache, even when you make changes to your source code.
Go uses two files, called go.mod and go.sum, to track dependencies for a project. These files are to Go, what package.json and
package-lock.json are to JavaScript.
For Go to know which dependencies to download, you need to copy the go.mod and go.sum files to the container. Add another COPY
instruction before RUN go mod download, this time copying only the go.mod and go.sum files.
# syntax=docker/dockerfile:1
FROM golang:1.21-alpine
WORKDIR /src
- COPY . .
+ COPY go.mod go.sum .
RUN go mod download
+ COPY . .
RUN go build -o /bin/client ./cmd/client
RUN go build -o /bin/server ./cmd/server
ENTRYPOINT [ "/bin/server" ]
Ordering your Dockerfile instructions appropriately helps you avoid unnecessary work at build time.
https://kodekloud.com/blog/docker-image-layers/
Now if you edit your source code, building the image won't cause the builder to download the dependencies each time.
The COPY . . instruction appears after the package management instructions, so the builder can reuse the RUN go mod download
layer.
ADD NON ROOT USER
ADD NON ROOT USER
By default, Docker containers run as the root user, which can pose security risks if the container becomes
compromised.
Also, running as root can be an issue when sharing folders between the host and the docker container.
To reduce these risks, we can run a Docker container with a custom non-root user that matches your host
Linux user's user ID (UID) and group ID (GID), ensuring seamless permission handling for mounted folders.
Running a docker build command that uses (mainly) a non-root user might force us to use sudo for some
commands.
The same is valid for running the docker itself using unattended scripts. You may need elevated privileges
for specific tasks.
Granting password-less sudo permissions to a non-root user allows you to perform administrative tasks
without the risk of running the entire container as the root user.
Step 1: Adjust the Dockerfile to Accept UID and GID as Arguments
Modify your Dockerfile to accept the host's UID and GID as arguments. This way, you can create a
user in the container with a matching UID and GID.
Add the following lines to your Dockerfile:
FROM ubuntu:22.04
ARG UID
ARG GID
# Update the package list, install sudo, create a non-root user, and grant password-less sudo permissions
RUN apt update && \
    apt install -y sudo && \
    addgroup --gid $GID nonroot && \
    adduser --uid $UID --gid $GID --disabled-password --gecos "" nonroot && \
    echo 'nonroot ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
# Set the non-root user as the default user
USER nonroot
ADD NON ROOT USER
Step 2: Set the Working Directory
Set the working directory where the non-root user can access it. Add the following line to
your Dockerfile:
# Set the working directory
WORKDIR /home/nonroot/app
This sets the working directory to '/home/nonroot/app', where the non-root user has read
and write permissions.
ADD NON ROOT USER
Step 3: Copy Files and Set Permissions
Ensure the non-root user has the necessary permissions to access the copied files.
Add the following lines to your Dockerfile:
# Copy files into the container and set the appropriate permissions
COPY --chown=nonroot:nonroot . /home/nonroot/app
RUN chmod -R 755 /home/nonroot/app
ADD NON ROOT USER
ADD NON ROOT USER
Step 4: Build and Run the Docker Container with UID and GID Parameters
Now you can build the Docker image and run the container with the custom non-root user.
Pass your host's UID and GID as build arguments to create a user with matching permissions.
Use the following commands to build and run your container:
# Get your host's UID and GID
export HOST_UID=$(id -u)
export HOST_GID=$(id -g)
# Build the Docker image
docker build --build-arg UID=$HOST_UID --build-arg GID=$HOST_GID -t your-image-name .
# Run the Docker container
docker run -it --rm --name your-container-name your-image-name id
The docker output will be :
uid=1000(nonroot) gid=1000(nonroot) groups=1000(nonroot)
ADD NON ROOT USER
Optional - Adding Docker Compose for Running a Custom Non-Root User Container
Docker Compose is a tool for defining and running multi-container applications using a YAML file to configure
the application's services, networks, and volumes.
It simplifies managing containers, especially when working with multiple services.
This section will discuss how to use Docker Compose to run a Docker container with a custom non-root user
that matches your host's UID and GID.
Create a docker-compose.yml file in your project directory with the following content:
version: '3.8'
services:
  your_service_name:
    build:
      context: .
      args:
        UID: ${HOST_UID}
        GID: ${HOST_GID}
    image: your-image-name
    container_name: your-container-name
    volumes:
      - ./app:/home/nonroot/app
ADD NON ROOT USER
This YAML file defines a service, your_service_name, using the Dockerfile in the current directory.
The build section passes the UID and GID build arguments from the host environment variables
HOST_UID and HOST_GID.
The volumes section maps a local directory (./app) to the container's working directory
(/home/nonroot/app), ensuring seamless permission handling for the mounted folder.
First, to run the container using Docker Compose set the HOST_UID and HOST_GID environment variables
in your host system.
The following command will build the docker (if needed), start it, print the user ID, and remove the
container:
HOST_UID=$(id -u) HOST_GID=$(id -g) docker compose run --rm your_service_name id
Running a Docker container with a custom non-root user that matches your host's UID and GID
ensures seamless permission handling for mounted folders while maintaining security.
Optimizing the Dockerfile and combining RUN commands can reduce the image size and
improve performance.
Following these steps will help you create and run a Docker container with a non-root user that
aligns with your host's permissions, reducing the risk of potential security breaches and
permission issues.
Always prioritize security when deploying applications and containers to ensure a safe and
stable environment.
Integrating Docker Compose into your workflow simplifies container management and improves
the overall development experience, allowing you to focus on building your application.
ADD NON ROOT USER
● you should chmod outside the image before you COPY to avoid duplicating all the files in a new layer
(explore them with a tool like Dive to detect such waste; also note that, while not documented, you can use
--chmod during COPY with BuildKit enabled, but this applies to files and directories, and most of the time
you don't want files to be executable)
● apps shouldn't be given permission to modify themselves; while not as important as on a non-containerized
system, a vulnerability in the app could lead to it modifying its own code and configuration files, which
could allow RCEs.
We've seen that for config files with log4j and logback a year ago. Only "data" files should be writeable.
ADD NON ROOT USER
MULTISTAGE BUILD
MULTISTAGE BUILD
Using a multi-stage build serves several goals:
● have several nested build levels callable separately
● be able to have several levels of internal builds to reduce the final size of an image by copying an
intermediate build to the final image
To build containerized applications in a consistent manner, it is common to use multi-stage builds. This has
both operational and security advantages.
In a multi-stage build, you create an intermediate container that contains all the tools you need to compile
or generate the final artifact. At the last stage, only the generated artifacts are copied to the final image,
without any development dependencies or temporary build files.
A well-designed multi-stage build contains only the minimal binary files and dependencies required for the
final image, with no build tools or intermediate files. This significantly reduces the attack surface.
In addition, a multi-stage build gives you more control over the files and artifacts that go into a container
image, making it more difficult for attackers or insiders to add malicious or untested artifacts without
authorization.
MULTISTAGE BUILD
Multi-stage builds require Docker 17.05 or higher on both the daemon and the
client.
MULTISTAGE BUILD
Why do we need multi-stage builds?
One of the most challenging things about building images is keeping the image size down. To do so, we have
to be careful when moving from one environment to another and we need to keep track of artifacts;
traditionally this was achieved with shell scripts.
Apart from that, maintaining two or more Dockerfiles for an application is not ideal. Multi-stage builds
simplify this situation.
MULTISTAGE BUILD
What is a multi-stage build?
Multistage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping
them easy to read and maintain.
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM
instruction can use a different base, and each of them begins a new stage of the build. You can
selectively copy artifacts from one stage to another, leaving behind everything you don’t want in
the final image.
COPY --from=0 /src/app .
MULTISTAGE BUILD
In the above instruction, we are using stage 0 to copy artifacts and leaving everything else behind.
But numbered stages are not easy to read, so we can name our build stages instead:
FROM nginx:latest AS dev
COPY --from=dev /src/app .
MULTISTAGE BUILD
Control over a build — Stop at a specific build stage
When you build your image, you don't necessarily need to build the entire Dockerfile including every
stage. You can specify a target build stage. This is useful when debugging a specific build stage.
$ docker build --target test .
This builds the image up to the specified target stage and then stops.
When using multi-stage builds, you are not limited to copying from stages you created earlier in your
Dockerfile. The Docker client pulls the image from the registry (like docker hub) if necessary and
copies the artifact from there.
MULTISTAGE BUILD
EXAMPLES #1 :
FROM maven:3.5.2-jdk-9 AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package
FROM openjdk:9
COPY --from=build /usr/src/app/target/flighttracker-1.0.0-SNAPSHOT.jar /usr/app/flighttracker-1.0.0-SNAPSHOT.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/app/flighttracker-1.0.0-SNAPSHOT.jar"]
MULTISTAGE BUILD
EXAMPLES #2 :
FROM node:12.13.0-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 3000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/build /usr/share/nginx/html
MULTISTAGE BUILD
EXAMPLES #3 :
FROM mcr.microsoft.com/vscode/devcontainers/typescript-node:12 AS development
# Build steps go here
FROM development as builder
WORKDIR /app
COPY src/ *.json ./
RUN yarn install \
    && yarn compile \
    # Just install prod dependencies
    && yarn install --prod
# Actual production environment setup goes here
FROM node:12-slim AS production
WORKDIR /app
COPY --from=builder /app/out/ ./out/
COPY --from=builder /app/node_modules/ ./node_modules/
COPY --from=builder /app/package.json .
EXPOSE 3000
ENTRYPOINT [ "/bin/bash", "-c" ]
CMD [ "npm start" ]
MULTISTAGE BUILD
MULTISTAGE BUILD
EXAMPLES #4 :
# Stage 1: Build
FROM python:3.10 AS build
# Install
RUN apt update && \
    apt install -y sudo
# Add non-root user
ARG USERNAME=nonroot
RUN groupadd --gid 1000 $USERNAME && \
    useradd --uid 1000 --gid 1000 -m $USERNAME
## Make sure to reflect new user in PATH
ENV PATH="/home/${USERNAME}/.local/bin:${PATH}"
USER $USERNAME
## Pip dependencies
# Upgrade pip
RUN pip install --upgrade pip
# Install production dependencies
COPY --chown=nonroot:1000 requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt && \
    rm /tmp/requirements.txt
MULTISTAGE BUILD
# Stage 2: Development
FROM build AS development
# Install development dependencies
COPY --chown=nonroot:1000 requirements-dev.txt /tmp/requirements-dev.txt
RUN pip install -r /tmp/requirements-dev.txt && \
    rm /tmp/requirements-dev.txt
# Stage 3: Production
FROM build AS production
# No additional steps are needed, as the production dependencies are already installed
docker build --target development builds an image with both production and
development dependencies, while
docker build --target production builds an image with only the production
dependencies.
TOOLTIPS TO WRITE DOCKERFILE
Use multi-stage builds
Multi-stage builds let you reduce the size of your final image, by creating a cleaner separation between the building of your
image and the final output.
Split your Dockerfile instructions into distinct stages to make sure that the resulting output only contains the files that are
needed to run the application.
Using multiple stages can also let you build more efficiently by executing build steps in parallel.
See Multi-stage builds for more information.
Exclude with .dockerignore
To exclude files not relevant to the build, without restructuring your source repository, use a .dockerignore file.
This file supports exclusion patterns similar to .gitignore files.
For information on creating one, see Dockerignore file.
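As an illustration, a minimal .dockerignore might look like the sketch below; the exact entries depend entirely on your project layout, so treat these as examples rather than a recommended list:
# .dockerignore (example entries, adapt to your project)
.git
node_modules
*.log
.env
Dockerfile
docker-compose*.yml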
Create ephemeral containers
The image defined by your Dockerfile should generate containers that are as ephemeral as possible.
Ephemeral means that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up
and configuration.
Refer to Processes under The Twelve-factor App methodology to get a feel for the motivations of running containers in such a
stateless fashion.
Don't install unnecessary packages
Avoid installing extra or unnecessary packages just because they might be nice to have. For example, you don’t need to
include a text editor in a database image.
When you avoid installing extra or unnecessary packages, your images have reduced complexity, reduced dependencies,
reduced file sizes, and reduced build times.
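For Debian/Ubuntu based images, one common way of keeping packages to a minimum is to skip recommended packages and clean the apt cache in the same layer; the package names below are only placeholders:
# Install only what is explicitly listed and drop the apt cache in the same layer
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates \
    curl \
    && rm -rf /var/lib/apt/lists/*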
Decouple applications
Each container should have only one concern. Decoupling applications into multiple containers makes it easier to scale horizontally
and reuse containers.
For instance, a web application stack might consist of three separate containers, each with its own unique image, to manage the
web application, database, and an in-memory cache in a decoupled manner.
Limiting each container to one process is a good rule of thumb, but it's not a hard and fast rule.
For example, not only can containers be spawned with an init process, some programs might spawn additional processes of their
own accord.
For instance, Celery can spawn multiple worker processes, and Apache can create one process per request.
Use your best judgment to keep containers as clean and modular as possible.
If containers depend on each other, you can use Docker container networks to ensure that these containers can communicate.
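As a sketch, such a decoupled stack can be described in a docker-compose.yml with one service per concern; the service names, images, and ports here are illustrative assumptions:
version: '3.8'
services:
  web:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
      - cache
  db:
    image: postgres:16.1
    volumes:
      - db-data:/var/lib/postgresql/data
  cache:
    image: redis:7.2
volumes:
  db-data: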
Sort multi-line arguments
Whenever possible, sort multi-line arguments alphanumerically to make maintenance easier. This helps to avoid duplication of
packages and makes the list much easier to update. It also makes PRs a lot easier to read and review. Adding a space before a
backslash (\) helps as well.
Here’s an example from the buildpack-deps image:
RUN apt-get update && apt-get install -y \
    bzr \
    cvs \
    git \
    mercurial \
    subversion \
    && rm -rf /var/lib/apt/lists/*
Leverage build cache
When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified. For each
instruction, Docker checks whether it can reuse the instruction from the build cache.
The basic rules of build cache invalidation are as follows:
● Starting with a parent image that's already in the cache, the next instruction is compared against all child images derived from
that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
● In most cases, simply comparing the instruction in the Dockerfile with one of the child images is sufficient. However, certain
instructions require more examination and explanation.
● For the ADD and COPY instructions, the modification time and size file metadata is used to determine whether cache is valid.
During cache lookup, cache is invalidated if the file metadata has changed for any of the files involved.
● Aside from the ADD and COPY commands, cache checking doesn't look at the files in the container to determine a cache match.
For example, when processing a RUN apt-get -y update command the files updated in the container aren't examined to
determine if a cache hit exists. In that case just the command string itself is used to find a match.
Once the cache is invalidated, all subsequent Dockerfile commands generate new images and the cache isn't used.
If your build contains several layers and you want to ensure the build cache is reusable, order the instructions from less frequently
changed to more frequently changed where possible.
For more information about the Docker build cache and how to optimize your builds, see cache management.
Pin base image versions
Image tags are mutable, meaning a publisher can update a tag to point to a new image. This is useful because it lets publishers
update tags to point to newer versions of an image. And as an image consumer, it means you automatically get the new version
when you re-build your image.
For example, if you specify FROM alpine:3.19 in your Dockerfile, 3.19 resolves to the latest patch version for 3.19.
# syntax=docker/dockerfile:1
FROM alpine:3.19
At one point in time, the 3.19 tag might point to version 3.19.1 of the image. If you rebuild the image 3 months later, the same tag
might point to a different version, such as 3.19.4. This publishing workflow is best practice, and most publishers use this tagging
strategy, but it isn't enforced.
The downside with this is that you're not guaranteed to get the same image for every build. This could result in breaking changes, and it
means you also don't have an audit trail of the exact image versions that you're using.
To fully secure your supply chain integrity, you can pin the image version to a specific digest. By pinning your images to a digest,
you're guaranteed to always use the same image version, even if a publisher replaces the tag with a new image. For example, the
following Dockerfile pins the Alpine image to the same tag as earlier, 3.19, but this time with a digest reference as well.
FROM alpine:3.19@sha256:13b7e62e8df80264dbb747995705a986aa530415763a6c58f84a3ca8af9a5bcd
With this Dockerfile, even if the publisher updates the 3.19 tag, your builds would still use the pinned image version:
13b7e62e8df80264dbb747995705a986aa530415763a6c58f84a3ca8af9a5bcd.
While this helps you avoid unexpected changes, it's also more tedious to have to look up and include the image digest for base
image versions manually each time you want to update it. And you're opting out of automated security fixes, which is likely
something you want to get.
Docker Scout has a built-in Outdated base images policy that checks for whether the base image version you're using is in fact the
latest version. This policy also checks if pinned digests in your Dockerfile correspond to the correct version. If a publisher updates
an image that you've pinned, the policy evaluation returns a non-compliant status, indicating that you should update your image.
Docker Scout also supports an automated remediation workflow for keeping your base images up-to-date. When a new image
digest is available, Docker Scout can automatically raise a pull request on your repository to update your Dockerfiles to use the
latest version. This is better than using a tag that changes the version automatically, because you're in control and you have an
audit trail of when and how the change occurred.
For more information about automatically updating your base images with Docker Scout, see Remediation
USEFUL TOOLS FOR
CONTINUOUS INTEGRATION (CI) &
CONTINUOUS DEPLOYMENT (CD)
AND MORE
The Docker Bench for Security is a script that checks for dozens of common best-practices around deploying Docker containers
in production.
The tests are all automated, and are based on the CIS Docker Benchmark v1.6.0.
● https://github.com/docker/docker-bench-security
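A typical way to run it, per the project README and shown here only as a sketch, is to clone the repository and execute the script directly on the Docker host:
$ git clone https://github.com/docker/docker-bench-security.git
$ cd docker-bench-security
# Run the CIS benchmark checks against the local Docker installation
$ sudo sh docker-bench-security.sh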
Lynis is a security auditing tool for systems based on UNIX like Linux, macOS, BSD, and others. It performs an in-depth security
scan and runs on the system itself. The primary goal is to test security defenses and provide tips for further system hardening. It will
also scan for general system information, vulnerable software packages, and possible configuration issues. Lynis is commonly
used by system administrators and auditors to assess the security defenses of their systems. Besides the "blue team," nowadays
penetration testers also have Lynis in their toolkit.
We believe software should be simple, updated on a regular basis, and open. You should be able to trust, understand, and have
the option to change the software. Many agree with us, as the software is being used by thousands every day to protect their
systems.
https://github.com/CISOfy/lynis
https://cisofy.com/lynis/
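Once installed, a full audit of the host (including its Docker configuration checks) is usually started with a single command, for example:
# Audit the local system; run as root for the most complete results
$ sudo lynis audit system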
HADOLINT : Haskell Dockerfile Linter
https://github.com/hadolint/hadolint
A smarter Dockerfile linter that helps you build best practice Docker images.
The linter parses the Dockerfile into an AST and performs rules on top of the AST. It stands on the shoulders of ShellCheck to lint
the Bash code inside RUN instructions.
You can run hadolint locally to lint your Dockerfile.
hadolint <Dockerfile>
hadolint --ignore DL3003 --ignore DL3006 <Dockerfile> # exclude specific rules
hadolint --trusted-registry my-company.com:500 <Dockerfile> # Warn when using untrusted FROM images
Docker comes to the rescue, providing an easy way to run hadolint on most platforms. Just pipe your Dockerfile to docker
run:
docker run --rm -i hadolint/hadolint < Dockerfile
# OR
docker run --rm -i ghcr.io/hadolint/hadolint < Dockerfile
Dockle - Container Image Linter for Security, Helping build the Best-Practice Docker Image, Easy to start
https://github.com/goodwithtech/dockle
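Dockle can be pointed at a local image directly, or run through its own container image; something like the sketch below, where the image name and the dockle tag are illustrative and a pinned version is preferable in practice:
$ dockle your-image-name:latest
# or via Docker, mounting the Docker socket so it can see local images
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    goodwithtech/dockle:latest your-image-name:latest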
Trivy is a comprehensive and versatile security scanner. Trivy has scanners that look for security issues, and targets where it can
find those issues.
https://github.com/aquasecurity/trivy
Targets (what Trivy can scan):
● Container Image
● Filesystem
● Git Repository (remote)
● Virtual Machine Image
● Kubernetes
● AWS
Scanners (what Trivy can find there):
● OS packages and software dependencies in use (SBOM)
● Known vulnerabilities (CVEs)
● IaC issues and misconfigurations
● Sensitive information and secrets
● Software licenses
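A basic invocation looks like the following; the target image and directory are just examples:
# Scan a container image for known CVEs
$ trivy image python:3.10-alpine
# Scan a local filesystem / repository
$ trivy fs .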
DOCKER CLEAN
A simple Shell script to clean up the Docker Daemon.
GIT REPO : https://github.com/ZZROTDesign/docker-clean
INSTALL :
$ curl -s https://raw.githubusercontent.com/ZZROTDesign/docker-clean/v2.0.4/docker-clean |
sudo tee /usr/local/bin/docker-clean > /dev/null && sudo chmod +x /usr/local/bin/docker-clean
USAGE :
$ docker-clean --all
DIVE TOOL
A tool for exploring a docker image, layer contents, and discovering ways to shrink the size of your Docker/OCI image.
GIT REPO :
https://github.com/wagoodman/dive
https://github.com/wagoodman/dive/releases
INSTALL :
$ curl -L
https://github.com/wagoodman/dive/releases/download/v0.11.0/dive_0.11.0_darwin_amd64.tar.gz -o
/tmp/dive_0.11.0_darwin_amd64.tar.gz && tar zxvf /tmp/dive_0.11.0_darwin_amd64.tar.gz -C /tmp &&
sudo mv /tmp/dive /usr/bin/dive
$ curl -L https://github.com/wagoodman/dive/releases/download/v0.11.0/dive_0.11.0_linux_arm64.deb -o
/tmp/dive_0.11.0_linux_arm64.deb && sudo dpkg -i /tmp/dive_0.11.0_linux_arm64.deb
DIVE TOOL
USAGE :
To analyze a Docker image simply run dive with an image tag/id/digest:
$ dive <your-image-tag>
or you can dive with docker command directly
$ alias dive="docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock
wagoodman/dive"
$ dive <your-image-tag>
# for example
$ dive nginx:latest
or if you want to build your image then jump straight into analyzing it:
$ dive build -t <some-tag> .
DIVE TOOL
Additionally you can run this in your CI pipeline to ensure you're keeping wasted space to a minimum (this skips the UI):
$ CI=true dive <your-image>
DIVE TOOL
DIVE TOOL
LAZY DOCKER
USAGE : The lazier way to manage everything docker
A simple terminal UI for both docker and docker-compose, written in Go with the gocui library.
URL : https://github.com/jesseduffield/lazydocker
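One way to try it, per the project README (exact commands may vary by version), is via Go or its Docker image:
$ go install github.com/jesseduffield/lazydocker@latest
$ lazydocker
# or without installing anything locally
$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock lazyteam/lazydocker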
STARTER
Starter is an open-source command line tool to generate a Dockerfile and a service.yml file from arbitrary
source code. The service.yml file is a Cloud 66 service definition file which is used to define the service
configuration on a stack.
Starter works in the same way as BuildPacks do, but only generates the above mentioned files; the image
compile step happens on BuildGrid. Starter does not require any additional third party tools or frameworks to
work (it's compiled as a Go executable).
GIT REPO :
https://www.startwithdocker.com/
https://www.youtube.com/watch?v=50-0IQNGd3g
https://github.com/cloud66-oss/starter/releases/
https://github.com/cloud66-oss/starter#quick-start
INSTALL :
$ curl -L https://github.com/cloud66-oss/starter/releases/download/1.4.3/linux_amd64_1.4.3 -o  /tmp/starter
&& sudo mv /tmp/starter /usr/bin/starter
USAGE :
$ cd /my/project
$ starter -g dockerfile,service,docker-compose
This will analyze the project in the current folder and generate the three files: Dockerfile, docker-compose.yml and service.yml in the same folder, prompting for
information when required.
Cloud 66 Starter ~ (c) 2016 Cloud 66
Detecting framework for the project at /Users/awesome/work/boom
Found ruby application
Enter ruby version: [latest]
----> Found config/database.yml
Found mysql, confirm? [Y/n]
Found redis, confirm? [Y/n]
Found elasticsearch, confirm? [Y/n]
Add any other databases? [y/N]
----> Analyzing dependencies
----> Parsing Procfile
----> Found Procfile item web
----> Found Procfile item worker
----> Found unicorn
This command will be run after each build: '/bin/sh -c "RAILS_ENV=_env:RAILS_ENV bundle exec rake db:schema:load"', confirm? [Y/n]
This command will be run after each deployment: '/bin/sh -c "RAILS_ENV=_env:RAILS_ENV bundle exec rake db:migrate"', confirm? [Y/n]
----> Writing Dockerfile…
----> Writing docker-compose.yml…
----> Writing service.yml
Done
STARTER
CADVISOR
cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance
characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports
information about running containers. Specifically, for each container it keeps resource isolation parameters, historical
resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container
and machine-wide.
cAdvisor has native support for Docker containers and should support just about any other container type out of the box. We
strive for support across the board so feel free to open an issue if that is not the case. cAdvisor's container abstraction is
based on lmctfy's so containers are inherently nested hierarchically.
GIT REPO :
https://github.com/google/cadvisor
https://github.com/google/cadvisor/blob/master/docs/web.md
CADVISOR
To quickly try out cAdvisor on your machine with Docker, we have a Docker image that includes everything you need to
get started.
You can run a single cAdvisor to monitor the whole machine. Simply run:
VERSION=v0.36.0 # use the latest release version from https://github.com/google/cadvisor/releases
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  --privileged \
  --device=/dev/kmsg \
  gcr.io/cadvisor/cadvisor:$VERSION
cAdvisor is now running (in the background) on http://localhost:8080. The setup includes directories with Docker
state cAdvisor needs to observe.
VSCODE PLUGINS
https://learn.microsoft.com/fr-fr/visualstudio/docker/tutorials/docker-tutorial
https://medium.com/@geralexgr/visual-studio-extensions-for-devops-engineers-277c14307f7
https://code.visualstudio.com/docs/containers/overview
https://code.visualstudio.com/docs/devcontainers/containers
https://code.visualstudio.com/docs/azure/kubernetes
MONITOR A DOCKER COMPOSITION
Centralized logging in a Dockerized environment, especially when using Docker Compose for container orchestration, offers several
important benefits for monitoring, troubleshooting, and maintaining the health of your applications. Here are some key reasons why
log centralization is crucial in a Docker composition:
Visibility Across Containers :
In a Docker composition, your application may consist of multiple interconnected containers. Centralized logging allows you to
aggregate and view logs from all containers in a single location. This consolidated view simplifies troubleshooting and
debugging by providing a holistic understanding of the application's behavior.
Distributed Environment Monitoring :
Docker Compose often involves deploying applications across multiple hosts or nodes. Centralized logging enables you to
monitor the logs of containers distributed across different machines. This is especially valuable in microservices
architectures where various services run independently.
Troubleshooting and Diagnostics :
Centralized logs serve as a valuable tool for troubleshooting and diagnostics. When an issue arises, having logs centralized
allows you to quickly identify and analyze problems without the need to access individual containers or nodes. It accelerates
the root cause analysis process.
Security and Auditing :
Centralized logging is crucial for security monitoring and auditing purposes. By aggregating logs in a centralized location, security
events and anomalies can be easily identified. This aids in detecting and responding to security incidents, ensuring that any
unauthorized access or suspicious activities are promptly addressed.
Scalability and Performance Monitoring :
As your Dockerized applications scale, monitoring and analyzing logs become more challenging. Centralized logging solutions
can efficiently handle large volumes of logs and provide tools for searching, filtering, and analyzing logs at scale. This is
essential for monitoring performance and identifying potential bottlenecks.
Log Retention and Compliance :
Centralized logging allows for consistent log retention policies. You can configure centralized logging systems to store logs for
specific durations, ensuring compliance with regulatory requirements. This is important for auditing and meeting data
retention standards.
Integration with Monitoring Tools :
Centralized logs can be integrated seamlessly with various monitoring and analytics tools. This integration enhances your ability
to create dashboards, alerts, and notifications based on log data, facilitating proactive monitoring and alerting.
Streamlining DevOps Processes :
In a DevOps environment, where collaboration between development and operations is crucial, centralized logging streamlines
communication. Developers and operations teams can share a common view of application behavior and collaborate effectively during
the development, deployment, and maintenance phases.
Efficient Log Management :
Centralized logging systems often come with features for log aggregation, parsing, and indexing. These capabilities make log management
more efficient, allowing you to search, analyze, and extract valuable insights from logs easily.
Cost and Resource Optimization :
Centralized logging can help optimize resource utilization by offloading log storage and analysis to dedicated systems. This ensures that
containers focus on their primary tasks without incurring unnecessary overhead related to local log management.
Popular centralized logging solutions include the ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Graylog, and others. The choice of a
specific solution depends on your requirements and the scale of your Dockerized environment.
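As a concrete starting point, Docker Compose lets you attach a logging driver per service. The sketch below ships container logs to a GELF endpoint such as Logstash or Graylog; the service name, host, and port are assumptions to adapt to your own stack:
version: '3.8'
services:
  web:
    image: your-image-name
    logging:
      driver: gelf
      options:
        gelf-address: "udp://logstash.example.local:12201"
        tag: "web"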
See the following NN6 summary :
https://blog.eleven-labs.com/fr/monitorer-ses-containers-docker/
https://docs.google.com/presentation/d/1qQQznJBX9hHfZyEkcNRfDM3ZhI4MbOAWl0mY2NKFFfE/edit?usp=drive_
link
https://www.youtube.com/watch?v=44A_2oWnEII
DEVSECOPS & CYBERSECURITY
DEVSECOPS
DEVSECOPS
"Security as Code" (SaC) in DevSecOps refers to the practice of integrating security controls, policies, and processes directly into the
software development and deployment pipelines. The concept is analogous to the broader "Infrastructure as Code" (IaC) approach,
where infrastructure provisioning and management are automated through code.
Key aspects of Security as Code in DevSecOps include:
Automation :
Security as Code involves automating security processes and controls throughout the development and deployment lifecycle.
This includes automating security testing, vulnerability scanning, compliance checks, and other security activities.
Integration into CI/CD Pipelines :
Security controls are integrated directly into continuous integration/continuous deployment (CI/CD) pipelines. This ensures that
security assessments and validations are performed automatically at each stage of the software development lifecycle.
Code Review and Analysis :
Security as Code emphasizes the integration of security reviews and analyses directly into the code review process. Security
checks are conducted alongside regular code reviews to identify and address security issues early in development.
Policy as Code :
Security policies and compliance requirements are codified, meaning that the rules and requirements for secure development and
deployment are expressed as code. This allows for versioning, tracking changes, and maintaining a clear audit trail.
Infrastructure as Code (IaC) Security :
In addition to application security, Security as Code extends to the security of the underlying infrastructure. Security controls are
applied to infrastructure components using IaC principles, ensuring that the entire technology stack is secure.
Automated Security Testing :
Automated security testing tools are utilized to assess code for vulnerabilities, misconfigurations, and compliance with security
policies. These tools can include static application security testing (SAST), dynamic application security testing (DAST), and
other specialized security scanners.
Continuous Monitoring :
Continuous security monitoring is part of the Security as Code approach. This involves the use of monitoring tools and
automated processes to detect and respond to security incidents in real-time.
Scalability and Consistency :
By treating security controls as code, organizations can achieve scalability and consistency. Security policies are applied
uniformly across different projects, environments, and teams, reducing the risk of human error and ensuring a consistent
security posture.
Collaboration between Security and Development :
Security as Code promotes collaboration between security teams and development teams. Security requirements are clearly
defined in code, making it easier for developers to understand and implement security controls without hindering the
development process.
Shift-Left Security :
Security as Code embraces the "shift-left" approach, meaning that security considerations are moved earlier in the development
lifecycle. This shift-left strategy helps catch and address security issues as early as possible, reducing the cost and effort of
remediation.
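To make these aspects concrete, the security checks can be codified as ordinary pipeline steps. The following GitLab CI sketch lints the Dockerfile with hadolint and scans the built image with Trivy; the stage names, image tags, and your-image-name are illustrative assumptions, not a prescribed setup:
stages:
  - lint
  - scan

lint-dockerfile:
  stage: lint
  image:
    name: hadolint/hadolint:latest-debian
    entrypoint: [""]
  script:
    - hadolint Dockerfile

scan-image:
  stage: scan
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    # Fail the pipeline on HIGH or CRITICAL findings
    - trivy image --exit-code 1 --severity HIGH,CRITICAL your-image-name:latest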
SAST, or Static Application Security Testing, is a key component in DevSecOps practices aimed at enhancing the security of software
development processes. It is a type of security testing that is performed without executing the code. Instead, it analyzes the application's source
code, bytecode, or binary code to identify potential security vulnerabilities, weaknesses, or coding errors.
Here are some key points about SAST in the context of DevSecOps:
Early Detection of Vulnerabilities :
SAST is typically conducted early in the development lifecycle, during the coding and build phases. This allows security issues to be
identified and addressed at an early stage, reducing the cost and effort required for fixing vulnerabilities later in the development
process.
Automation and Integration :
In the DevSecOps methodology, automation is crucial for continuous integration and delivery. SAST tools are integrated into the
development pipeline to automatically analyze code as it is committed, providing rapid feedback to developers about potential security
issues.
Identification of Code-level Security Flaws :
SAST tools analyze the codebase for common security issues, such as SQL injection, cross-site scripting (XSS), buffer overflows, and other
vulnerabilities. By scanning the source code, SAST tools can identify patterns and indicators that may pose security risks.
Code Review Assistance :
SAST tools can assist developers during code reviews by highlighting security-related issues. This helps developers understand and address
security concerns while reviewing and refining their code.
Language and Platform Support :
SAST tools support various programming languages and frameworks. They are designed to identify vulnerabilities specific to the languages
and platforms used in the application development, making them versatile across different technology stacks.
False Positives and Tuning :
SAST tools may generate false positives, where they flag code as insecure even though it is not. Tuning and customization of SAST tools are
often necessary to reduce false positives and improve the accuracy of results.
Complementing Dynamic Testing :
While SAST focuses on analyzing the source code, dynamic application security testing (DAST) complements SAST by assessing the
application in runtime. Both SAST and DAST contribute to a comprehensive security testing strategy in DevSecOps.
DAST, or Dynamic Application Security Testing, is an essential component of DevSecOps practices focused on enhancing the
security of software applications.
DAST involves testing an application in its running state to identify vulnerabilities, weaknesses, and security issues from the
perspective of an attacker.
Here are key points about DAST in the context of DevSecOps:
Runtime Testing :
DAST tests the application while it is running or deployed in an environment. Instead of analyzing the source code like SAST
(Static Application Security Testing), DAST interacts with the application dynamically to identify vulnerabilities that may be
exploited during actual usage.
Simulation of Real-World Attacks :
DAST simulates real-world attack scenarios by sending malicious requests to the application, probing for vulnerabilities in the
network, web services, APIs, and other entry points. It helps identify issues that may not be evident from static analysis alone.
Automation and Continuous Testing :
In DevSecOps, DAST is often automated and integrated into the continuous integration/continuous deployment (CI/CD) pipeline.
This enables ongoing testing throughout the development lifecycle, providing quick feedback to developers about potential
security weaknesses.
Scanning Web Applications and APIs :
DAST tools specialize in scanning web applications, APIs, and other web services. They analyze the application's responses to different
inputs, identify security vulnerabilities like injection attacks, cross-site scripting (XSS), and other issues that might arise during real-world
usage.
Identification of Configuration Issues :
DAST also helps identify configuration issues in the deployed environment that might expose security vulnerabilities. This includes issues
related to server configurations, network settings, and authentication mechanisms.
False Positives and Reporting :
Similar to SAST, DAST tools may produce false positives. Adjustments and tuning are often required to reduce false positives and enhance
the accuracy of results. DAST tools provide reports with identified vulnerabilities, severity levels, and recommendations for remediation.
Complementing SAST :
While SAST (Static Application Security Testing) focuses on identifying vulnerabilities in the source code, DAST complements this by
detecting vulnerabilities that might only be apparent during runtime. Together, they provide a more comprehensive approach to
application security.
Integration with Security Orchestration :
DAST tools are often integrated with security orchestration platforms to coordinate and automate security testing activities. This integration
facilitates better collaboration between security teams and development teams.
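As an example of automating DAST in a pipeline, OWASP ZAP provides a baseline scan that can be run as a container against a deployed test environment; the target URL below is an assumption:
# Passive baseline scan of a running test instance
$ docker run --rm -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t https://staging.example.com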
A penetration test (pentest) in the context of DevSecOps refers to the process of systematically assessing the security of a system,
application, or infrastructure by simulating a real-world attack.
The objective is to identify and exploit vulnerabilities to determine the system's resilience to security threats. Integrating penetration
testing into the DevSecOps pipeline is essential for identifying and addressing security issues early in the development lifecycle.
Here are key aspects of penetration testing in DevSecOps:
Automated and Continuous Testing :
In the DevSecOps model, penetration testing is often automated and integrated into the continuous integration/continuous
deployment (CI/CD) pipeline. This enables regular and systematic testing of applications and infrastructure throughout the
development lifecycle.
Early Detection of Vulnerabilities :
Penetration testing helps identify vulnerabilities early in the development process. By detecting and addressing security issues
during the development phase, organizations can reduce the likelihood of security flaws making it into production.
Continuous Improvement :
DevSecOps emphasizes continuous improvement, and penetration testing contributes to this by providing ongoing insights into
the evolving security posture of the applications and systems. Regular testing helps organizations stay ahead of emerging
threats.
Real-World Simulation :
Penetration tests simulate real-world cyberattacks, often involving attempts to exploit vulnerabilities, bypass security controls,
and gain unauthorized access. This realistic approach helps organizations understand their security strengths and
weaknesses in a dynamic environment.
White Box and Black Box Testing :
Penetration testing can take different forms, including white box testing (with knowledge of the internal structure and code) and
black box testing (without prior knowledge). Both approaches provide valuable perspectives on security vulnerabilities.
Comprehensive Security Assessment :
Pentests assess various aspects of security, including network security, application security, infrastructure security, and
potentially social engineering aspects. The goal is to provide a comprehensive view of the security landscape.
Adherence to Compliance Requirements :
Penetration testing is often required to meet regulatory compliance standards. By incorporating it into the DevSecOps process,
organizations can demonstrate ongoing compliance and reduce the risk of security breaches.
Collaboration with Development and Operations Teams :
Collaboration is key in DevSecOps, and penetration testing involves close coordination with development and operations teams.
This collaboration ensures that security findings are communicated effectively, and remediation efforts are understood and
addressed promptly.
Reporting and Remediation :
Penetration testing results in detailed reports outlining vulnerabilities and recommended remediation steps. DevSecOps teams
use these reports to prioritize and implement security fixes efficiently.
Continuous Monitoring :
While penetration testing provides a snapshot of the security posture, continuous monitoring tools and practices are also
important to detect and respond to security incidents in real-time.
● Scan git repositories to find potential
credential leakage.
● SAST (Static Application Security Test)
● SCA (Software Composition Analysis)
● IAST (Interactive Application Security Testing)
● DAST (Dynamic Application Security Test)
● IaC Scanning (Scanning Terraform, HelmChart
code to find misconfiguration)
● Infrastructure scanning
● Compliance check
DEVSECOPS
DEVSECOPS
https://blog.aquasec.com/docker-security-best-practices
RULE #0 - Keep Host and Docker up to date
To protect against known container escape vulnerabilities, which typically end in escalating to root/administrator
privileges, patching Docker Engine and Docker Machine is crucial.
In addition, containers (unlike in virtual machines) share the kernel with the host, therefore kernel exploits executed
inside the container will directly hit the host kernel. For example, a kernel privilege escalation exploit (like Dirty COW) executed
inside a well-insulated container will still result in root access on the host.
RULE #1 - Do not expose the Docker daemon socket (even to the containers)
Docker socket /var/run/docker.sock is the UNIX socket that Docker is listening to. This is the primary entry point for the
Docker API. The owner of this socket is root. Giving someone access to it is equivalent to giving unrestricted root access
to your host.
Do not enable tcp Docker daemon socket. If you are running docker daemon with -H tcp://0.0.0.0:XXX or similar you
are exposing un-encrypted and unauthenticated direct access to the Docker daemon, if the host is internet connected this
means the docker daemon on your computer can be used by anyone from the public internet. If you really, really have to
do this, you should secure it. Check how to do this following Docker official documentation.
Do not expose /var/run/docker.sock to other containers. If you are running your docker image with -v
/var/run/docker.sock:/var/run/docker.sock or similar, you should change it. Remember that mounting the socket
read-only is not a solution but only makes it harder to exploit. The equivalent in a docker-compose file is something like this:
volumes:
  - "/var/run/docker.sock:/var/run/docker.sock"
RULE #2 - Set a user
Configuring the container to use an unprivileged user is the best way to prevent privilege escalation attacks. This can be
accomplished in three different ways as follows:
1. During runtime, using the -u option of the docker run command, e.g.:
docker run -u 4000 alpine
2. During build time. Simply add a user in the Dockerfile and use it. For example:
FROM alpine
RUN groupadd -r myuser && useradd -r -g myuser myuser
<HERE DO WHAT YOU HAVE TO DO AS A ROOT USER LIKE INSTALLING PACKAGES ETC.>
USER myuser
3. Enable user namespace support (--userns-remap=default) in the Docker daemon
More information about this topic can be found at Docker official documentation
In kubernetes, this can be configured in Security Context using runAsNonRoot field e.g.:
kind: ...
apiVersion: ...
metadata:
  name: ...
spec:
  ...
  containers:
  - name: ...
    image: ....
    securityContext:
      ...
      runAsNonRoot: true
      ...
As a Kubernetes cluster administrator, you can configure it using Pod Security Policies.
RULE #3 - Limit capabilities (Grant only specific capabilities, needed by a container)
Linux kernel capabilities are a set of privileges that can be used by privileged processes. Docker, by default, runs with only a subset of
capabilities. You can change it and drop some capabilities (using --cap-drop) to harden your docker containers, or add
some capabilities (using --cap-add) if needed. Remember not to run containers with the --privileged flag - this will add
ALL Linux kernel capabilities to the container.
The most secure setup is to drop all capabilities --cap-drop all and then add only required ones. For example:
docker run --cap-drop all --cap-add CHOWN alpine
And remember: Do not run containers with the --privileged flag!!!
In kubernetes this can be configured in Security Context using capabilities field e.g.:
kind: ...
apiVersion: ...
metadata:
  name: ...
spec:
  ...
  containers:
  - name: ...
    image: ....
    securityContext:
      ...
      capabilities:
        drop:
          - all
        add:
          - CHOWN
      ...
As a Kubernetes cluster administrator, you can configure it using Pod Security Policies.
RULE #4 - Add the --no-new-privileges flag
Always run your docker images with --security-opt=no-new-privileges in order to prevent privilege escalation through
setuid or setgid binaries.
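On the Docker command line this looks like the following (the image is just an example):
docker run --security-opt=no-new-privileges -it alpine sh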
In kubernetes, this can be configured in Security Context using allowPrivilegeEscalation field e.g.:
kind: ...
apiVersion: ...
metadata:
  name: ...
spec:
  ...
  containers:
  - name: ...
    image: ....
    securityContext:
      ...
      allowPrivilegeEscalation: false
      ...
As a Kubernetes cluster administrator, you can refer to the Kubernetes documentation to configure it using Pod Security
Policies.
RULE #5 - Disable inter-container communication (--icc=false)
By default inter-container communication (icc) is enabled - it means that all containers can talk with each other (using
docker0 bridged network). This can be disabled by running docker daemon with --icc=false flag. If icc is disabled
(icc=false) it is required to tell which containers can communicate using --link=CONTAINER_NAME_or_ID:ALIAS option.
See more in Docker documentation - container communication
In Kubernetes Network Policies can be used for it.
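This can be set either as a daemon flag or persistently in /etc/docker/daemon.json, for example:
{
  "icc": false
}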
RULE #6 - Use Linux Security Module (seccomp, AppArmor, or SELinux, …. )
First of all, do not disable default security profile!
Consider using security profile like seccomp or AppArmor.
Instructions how to do this inside Kubernetes can be found at Security Context documentation and in Kubernetes API
documentation
RULE #7 - Limit resources (memory, CPU, file descriptors, processes, restarts)
The best way to avoid DoS attacks is by limiting resources. You can limit memory, CPU, maximum number of restarts
(--restart=on-failure:<number_of_restarts>), maximum number of file descriptors (--ulimit nofile=<number>) and
maximum number of processes (--ulimit nproc=<number>).
Check documentation for more details about ulimits
You can also do this inside Kubernetes: Assign Memory Resources to Containers and Pods, Assign CPU Resources to
Containers and Pods and Assign Extended Resources to a Container
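A combined command-line example, with purely illustrative values and image name:
docker run -d --name limited-app \
    --memory 512m --cpus 1.0 \
    --restart=on-failure:3 \
    --ulimit nofile=1024 --ulimit nproc=64 \
    your-image-name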
RULE #8 - Set filesystem and volumes to read-only
Run containers with a read-only filesystem using --read-only flag. For example:
docker run --read-only alpine sh -c 'echo "whatever" > /tmp'
If an application inside a container has to save something temporarily, combine --read-only flag with --tmpfs like this:
docker run --read-only --tmpfs /tmp alpine sh -c 'echo "whatever" > /tmp/file'
Equivalent in the docker-compose file will be:
version: "3"
services:
alpine:
image: alpine
read_only: true
Equivalent in kubernetes in Security Context will be:
kind: ...
apiVersion: ...
metadata:
  name: ...
spec:
  ...
  containers:
  - name: ...
    image: ....
    securityContext:
      ...
      readOnlyRootFilesystem: true
      ...
In addition, if a volume is mounted only for reading, mount it as read-only. This can be done by appending :ro to the -v
option like this:
docker run -v volume-name:/path/in/container:ro alpine
Or by using --mount option:
docker run --mount source=volume-name,destination=/path/in/container,readonly alpine
RULE #9 - Use static analysis tools
To detect containers with known vulnerabilities - scan images using static analysis tools.
● Free
   ● Clair
   ● ThreatMapper
   ● Trivy
● Commercial
   ● Snyk (open source and free option available)
   ● anchore (open source and free option available)
   ● Docker Scout (open source and free option available)
   ● JFrog XRay
   ● Qualys
To detect secrets in images:
● ggshield (open source and free option available)
● SecretScanner (open source)
To detect misconfigurations in Kubernetes:
● kubeaudit
● kubesec.io
● kube-bench
To detect misconfigurations in Docker:
● inspec.io
● dev-sec.io
● Docker Bench for Security
RULE #10 - Set the logging level to at least INFO
By default, the Docker daemon is configured to have a base logging level of 'info', and if this is not the case: set the
Docker daemon log level to 'info'. Rationale: Setting up an appropriate log level, configures the Docker daemon to log
events that you would want to review later. A base log level of 'info' and above would capture all logs except the debug
logs. Until and unless required, you should not run docker daemon at the 'debug' log level.
To configure the log level in docker-compose:
docker-compose --log-level info up
Rule #11 - Lint the Dockerfile at build time
Many issues can be prevented by following some best practices when writing the Dockerfile. Adding a security linter as a
step in the build pipeline can go a long way in avoiding further headaches. Some issues that are worth checking are:
● Ensure a USER directive is specified
● Ensure the base image version is pinned
● Ensure the OS packages versions are pinned
● Avoid the use of ADD in favor of COPY
● Avoid curl bashing in RUN directives
References:
● Docker Baselines on DevSec
● Use the Docker command line
● Overview of docker-compose CLI
● Configuring Logging Drivers
● View logs for a container or service
● Dockerfile Security Best Practices
Rule #12 - Run Docker in root-less mode
Rootless mode ensures that the Docker daemon and containers run as an unprivileged user, which means that
even if an attacker breaks out of the container, they will not have root privileges on the host, which in turn substantially
limits the attack surface.
Rootless mode allows running the Docker daemon and containers as a non-root user to mitigate potential vulnerabilities
in the daemon and the container runtime. It does not require root privileges even during the installation of the Docker
daemon, as long as the prerequisites are met. Rootless mode was introduced in Docker Engine v19.03 as an experimental
feature and graduated from experimental in Docker Engine v20.10; it should be considered for added security, provided
the known limitations are not an impediment.
Read more about rootless mode, its limitations, and installation and usage instructions on the Docker documentation page.
Open Worldwide Application Security Project (OWASP) & DOCKER
● https://qwiet.ai/an-introduction-to-the-owasp-docker-top-10/
● https://github.com/OWASP/Docker-Security
What are the threats to Docker containers?
The OWASP team breaks down the eight main threats into two primary categories, based on the attack target:
● Host via network services, protocol flaw, or kernel exploit
● Orchestration via network management backplane
The first five threats all start with the same initial attack vector, where attackers escape the application and container.
However, from there, they engage in different behaviors:
● Container escape: Kernel exploit to control all containers running on the host
● Other containers via network: Using shell access to attack another container through the network.
● Attacking orchestration tool via network: Using shell access then attacking the management interfaces or other
orchestration tools’ attack surfaces
● Attacking the host via network: Using shell access and attacking an open port from the host
● Attacking other resources via network: Using shell access and finding a network-based vulnerability to exploit
The last three threats cover attacks with different initial vectors:
● Resource starvation: Exploiting a security condition from another container running on the same host
● Host compromise: Compromising the host either through another container or the network
OWASP Docker Top 10
To protect Docker containers – or really any container if you can abstract the Docker-specific language OWASP uses
– you can implement the security controls outlined below.
D01 – Secure User Mapping
Applications should never run as root because when attackers escape the application, the privileges will follow them.
You should run all microservices with the least privilege possible. To ensure this, you should:
● Never use the --privileged flag
● Configure the appropriate parameters for all user IDs or use Linux user namespaces
D02 – Patch Management Strategy
The host, containment technology, orchestration solution, and minimal operating system images may have security
vulnerabilities that attackers can exploit.
You should patch often and automate the process. If you are establishing a patch management strategy, you should:
● Specify a time span for “regular basis”
● Create policies or processes for each patch domain
● Execute patches and monitor for success or failure
● Define a policy for critical patches that can’t wait until the next scheduled patch
D03 – Network Segmentation and Firewalling
You should implement a multilayer network defense that denies all access by default and provides access on a case-
by-case basis.
When planning your network segmentation and firewall strategy, you should:
● Ensure each tenant is on a different network
● Define necessary communication
● Prevent management frontends/APIs from being exposed to the internet
● Use strict allow-list rules for your management backplane
● Protect host services the same as your management frontends/APIs
For an orchestrated environment, you should have:
● An inbound network and routing policy
● An outbound network and routing policy that restricts downloads from the internet as much as possible
● A definition of the necessary container inter-communication
D04 – Secure Defaults and Hardening
You should identify and disable all unnecessary network services across interfaces from the following:
● Orchestration tool, like dashboard, etcd, API
● Host, like RPC services, OpenSSHD, avahi, network-based systemd-services
● Container, from the microservice (e.g. spring-boot) or distribution
At the orchestration and host levels, you should identify all services and then review the following:
● Does disabling/stopping it affect the operation?
● Can it be started only on the localhost interface or any other network interface?
● Is authentication configured according to the principle of least privilege?
● Are there configuration options that narrow down the access to this service?
● Are there any known design flaws?
● Are there any known vulnerabilities?
At the container level, you should:
● Uninstall any unnecessary packages
● Review for defective syscalls that can affect the host kernel’s security
● Disable SUID/SGID bits
D05 – Maintain Security Contexts
Your different environments require different levels of security. You should separate development and testing
environments from the production environment. To do this, you should:
● Place production containers on a separate host system and restrict access
● Identify sensitive data types that require additional protection and separate containers accordingly
● Ensure that databases, middleware, authentication services, frontend, and master components are on different
hosts
● Use Virtual Machines (VMs) to separate different security contexts
D06 – Protect Secrets
To protect access to a microservice, you should secure passwords, tokens, private keys, and certificates. Tools that can help include:
● HashiCorp Vault
● Red Hat Ansible Vault
● Passbolt
● etc.
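For build-time secrets, BuildKit can mount a secret without persisting it in any image layer. A minimal sketch (the secret id api_token and the source file path are illustrative assumptions):
# syntax=docker/dockerfile:1
FROM alpine:3.19
RUN --mount=type=secret,id=api_token \
    cat /run/secrets/api_token > /dev/null   # the secret is only available during this RUN step
Build with: docker build --secret id=api_token,src=./api_token.txt .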
D07 – Resource Protection
Since containers share physical CPU, disks, memory, and network, you need to secure these physical resources to
prevent one container from impacting other containers’ resources.
To protect resources, you should:
● Limit the amount of memory a container can use
● Limit the amount of CPU a container can use
D08 – Container Image Integrity and Origin
For the container that runs your code, you should choose a minimal operating system from a trustworthy resource.
Additionally, you should scan and monitor all transfers and images at rest.
D09 – Follow Immutable Paradigm
Since deployed container images rarely need to write into their filesystem or a mounted filesystem, you can
implement additional security by starting them in read-only mode.
D10 – Logging
To trace all activity, you should log all relevant security events for container images, orchestration tools, and hosts
at the system and API levels.
Additionally, your application should provide remote logging.
Qwiet AI: Integrating Docker Container Security into Development Processes
With preZero, you can scan all the containers that your applications use and correlate these results with the rest of
your application scan. You can integrate the preZero platform into your current CI/CD pipelines, ticketing systems, and
development tools.
By building security directly into your current processes, our platform enables you to incorporate container security
into your secure software development life cycle (SSDLC) processes while still ensuring that you get the speed you
need to deliver software on time.
The Qwiet AI platform gives you visibility into the context around vulnerabilities so that you can effectively prioritize
remediation actions based on whether attackers can exploit a weakness in your application and account for whether
attackers are currently exploiting that vulnerability in the wild.
1. Keep Host and Docker Up to Date
It is essential to patch both Docker Engine and the underlying host operating system running Docker, to prevent a range of known
vulnerabilities, many of which can result in container escapes.
Since the kernel is shared by the container and the host, kernel exploits when an attacker manages to run on a container can
directly affect the host. For example, a successful kernel exploit can enable attackers to break out of a non-privileged container and
gain root access to the host.
2. Do Not Expose the Docker Daemon Socket
The Docker daemon socket is a Unix network socket that facilitates communication with the Docker API. By default, this socket is
owned by the root user. If anyone else obtains access to the socket, they will have permissions equivalent to root access to the
host.
Take note that it is possible to bind the daemon socket to a network interface, making the Docker container available remotely. This
option should be enabled with care, especially in production containers.
To avoid this issue, follow these best practices:
● Never make the daemon socket available for remote connections, unless you are using Docker's encrypted HTTPS socket,
which supports authentication.
● Do not run Docker images with an option like -v /var/run/docker.sock:/var/run/docker.sock, which exposes the socket in the
resulting container.
3. Run Docker in Rootless Mode
Docker provides “rootless mode”, which lets you run Docker daemons and containers as non-root users. This is extremely important
to mitigate vulnerabilities in daemons and container runtimes, which can grant root access of entire nodes and clusters to an
attacker.
Rootless mode runs Docker daemons and containers within a user namespace. This is similar to the userns-remap mode, but unlike
it, rootless mode runs daemons and containers without root privileges by default.
To run Docker in rootless mode:
1. Install Docker in rootless mode - see the instructions in the Docker documentation.
2. Use the following command to launch the Daemon when the host starts:
systemctl --user enable docker
sudo loginctl enable-linger $(whoami)
3. Here is how to run a container as rootless using Docker context:
docker context use rootless
docker run -d -p 8080:80 nginx
4. Avoid Privileged Containers
Docker provides a privileged mode, which lets a container run as root on the local machine. Running a container in privileged
mode provides the capabilities of that host—including:
● Root access to all devices
● Ability to tamper with Linux security modules like AppArmor and SELinux
● Ability to install a new instance of the Docker platform, using the host's kernel capabilities, and run Docker within Docker.
Privileged containers create a major security risk—enabling attackers to easily escalate privileges if the container is compromised.
Therefore, it is not recommended to use privileged containers in a production environment. Best of all, never use them in any
environment.
To check if a container is running in privileged mode, use the following command (it prints true if the container is privileged,
false otherwise):
docker inspect --format='{{.HostConfig.Privileged}}' [container_id]
5. Limit Container Resources
When a container is compromised, attackers may try to make use of the underlying host resources to perform malicious activity. Set
Docker memory and CPU usage limits to minimize the impact of breaches for resource-intensive containers.
In Docker, the default setting is to allow the container to access all RAM and CPU resources on the host. It is important to set
resource quotas, to limit the resources your container can use—for security reasons, and to ensure each container has the
appropriate resources and does not disrupt other services running on the host.
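For example, memory and CPU limits can be set directly on docker run (the values below are purely illustrative):
docker run -d --memory=512m --memory-swap=512m --cpus=1.0 --pids-limit=100 nginx
--memory caps RAM usage, --memory-swap set to the same value disables additional swap, --cpus limits CPU time, and --pids-limit protects against fork bombs.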
6. Segregate Container Networks
Docker containers require a network layer to communicate with the outside world through the network interfaces on the host. The
default bridge network exists on all Docker hosts—if you do not specify a different network, new containers automatically connect to
it.
It is strongly recommended not to rely on the default bridge network—use custom bridge networks to control which containers can
communicate between them, and to enable automatic DNS resolution from container name to IP address. You can create as many
networks as you need and decide which networks each container should connect to (if at all).
Ensure that containers can connect to each other only if absolutely necessary, and avoid connecting sensitive containers to public-
facing networks.
Docker provides network drivers that let you create your own bridge network, overlay network, or macvlan network. If you need
more control, you can create a Docker network plugin.
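A minimal sketch of a custom bridge network (the network, container, and image names are illustrative):
docker network create --driver bridge app-net
docker run -d --name api --network app-net my-api-image
docker run -d --name db --network app-net postgres:16
Only containers attached to app-net can reach each other, and they can resolve one another by container name through Docker's embedded DNS.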
7. Improve Container Isolation
Operations teams should create an optimized environment to run containers. Ideally, the operating system on a container host
should protect the host kernel from container escapes, and prevent mutual influence between containers.
Containers are Linux processes with isolation and resource limitations, running on a shared operating system kernel. Protecting a
container is exactly the same as protecting any process running on Linux. You can use one or more of the following Linux security
capabilities:
● Linux namespace
Namespaces make Linux processes appear to have access to their own, separate global resources. Namespaces provide an
abstraction that gives the impression of running in a container on its own operating system. They are the basis of container
isolation.
● SELinux
For Red Hat Linux distributions, SELinux provides an additional layer of security to isolate containers from each other and
from the host. It allows administrators to apply mandatory access controls for users, applications, processes and files. It is a
second line of defense that will stop attackers who manage to breach the namespace abstraction.
● AppArmor
For Debian Linux distributions, AppArmor is a Linux kernel enhancement that can limit programs in terms of the system
resources they can access. It binds access control attributes to specific programs, and is controlled by security profiles
loaded into the kernel at boot time.
● Cgroups
Limits, describes and isolates resource usage of a group of processes, including CPU, memory, disk I/O, and networking.
You can use cgroups to prevent container resources from being used by other containers on the same host, and at the
same time, stop attackers from creating pseudo devices.
● Capabilities
Linux allows you to limit privileges of any process, containers included. Linux provides “capabilities”, which are specific
privileges that can be enabled for each process. When running a container, you can usually deny privileges for numerous
capabilities, without affecting containerized applications.
● Seccomp
The secure computing mode (seccomp) in the Linux kernel lets you transition a process to a secure mode, in which it is only
allowed to perform a small set of safe system calls. Setting a seccomp profile for a container provides one more level of
defense against compromise.
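As an illustration of the capabilities mechanism, drop everything and re-add only what the workload needs (the capability added here is an assumption about the application):
docker run -d --cap-drop=ALL --cap-add=NET_BIND_SERVICE --security-opt no-new-privileges nginx
--cap-drop=ALL removes all default capabilities, --cap-add restores only the ability to bind privileged ports, and no-new-privileges prevents the process from gaining further privileges (for example via setuid binaries).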
8. Set Filesystem and Volumes to Read-only
A simple and effective security trick is to run containers with a read-only filesystem. This can prevent malicious activity such as
deploying malware on the container or modifying configuration.
9. Complete Lifecycle Management
Cloud native security requires security controls and mitigation techniques at every stage of the application lifecycle, from build to
workload and infrastructure. Follow these best practices:
● Implement vulnerability scanning to ensure clean code at all stages of the development lifecycle.
● Use a sandbox environment where you can QA your code before it goes into production, to ensure there is nothing malicious
that will deploy at runtime.
● Implement drift prevention to ensure container immutability.
● Create an incident response process to ensure rapid response in the case of an attack
● Apply automated patching.
● Ensure you have robust auditing and forensics for quick troubleshooting and compliance reporting.
10. Restrict System Calls from Within Containers
In a container, you can choose to allow or deny any system calls. Not all system calls are required to run a container.
With this in mind, you can monitor the container, obtain a list of all system calls made, explicitly allow those calls and no others. It is
important to base your configuration on observation of the container at runtime, because you may not be aware of the specific
system calls used by your container’s components, and how those calls are named in the underlying operating system.
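A hedged sketch of a custom seccomp profile: deny everything by default and allow only an explicit list of syscalls (the list below is a toy example, not a complete profile for a real workload):
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group", "futex", "nanosleep"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
Apply it at runtime with: docker run --security-opt seccomp=/path/to/profile.json my-image
In practice, start from Docker's default profile and tighten it based on the syscalls actually observed at runtime.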
11. Scan and Verify Container Images
Docker container images must be tested for vulnerabilities before use, especially if they were pulled from public repositories.
Remember that a vulnerability in any component of your image will exist in all containers you create from it. If you use a base image
to create new images, any vulnerability in the base image will extend to your new images.
Container image scanning is the process of analyzing the content and composition of images to detect security issues,
misconfigurations or vulnerabilities.
Images containing software with security vulnerabilities are susceptible to attacks during container runtime. If you are building an
image from the CI pipeline, you need to scan it before running it through the build. Images with vulnerabilities that exceed a severity
threshold should fail the build. Unsafe images should not be pushed to a container registry accessible by production systems.
There are many open source and proprietary image scanners available. A comprehensive solution can scan the operating
system (if the container runs a stripped-down Linux distribution), the specific libraries running within the container, and their
dependencies. Ensure the scanner supports the languages used by the components in your image.
Most container scanning tools use multiple Common Vulnerability and Exposure (CVE) databases, and test if those CVEs are
present in a container image. Some tools can also test a container image for security best practices and misconfigurations.
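For example, with an open-source scanner such as Trivy (already mentioned above), a CI step could fail the build on high-severity findings (the image name is illustrative):
trivy image --severity HIGH,CRITICAL --exit-code 1 my-registry.example.com/my-app:1.2.3
--exit-code 1 makes the command return a non-zero status when matching vulnerabilities are found, which most CI systems treat as a failed step.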
12. Use Minimal Base Images
Docker images are commonly built on top of “base images”. While this is convenient, because it avoids having to configure an
image from scratch, it raises security concerns. You may use a base image with components that are not really required for your
purposes. A common example is using a base image with a full Debian Stretch distribution, whereas your specific project does not
really require operating system libraries or utilities.
Remember that any additional component added to your images expands the attack surface. Carefully select base images to
ensure they suit your purposes, and if necessary, build your own minimal base image.
13. Don’t Leak Sensitive Info to Docker Images
Docker images often require sensitive data for their normal operations, such as credentials, tokens, SSH keys, TLS certificates,
database names or connection strings. In other cases, applications running in a container may generate or store sensitive data.
Sensitive information should never be hardcoded into the Dockerfile—it will be copied to Docker containers, and may be cached in
intermediate container layers, even if you attempt to delete them.
Container orchestrators like Kubernetes and Docker Swarm provide a secrets management capability which can solve this problem.
You can use secrets to manage sensitive data a container needs at runtime, without storing it in the image or in source code.
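A minimal Docker Swarm sketch (the secret, service, and image names are illustrative):
printf 'S3cr3t!' | docker secret create db_password -
docker service create --name api --secret db_password my-api-image
Inside the container the secret is exposed as a file at /run/secrets/db_password, so it never appears in the image, in environment variables, or in docker inspect output.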
14. Use Multi Stage Builds
To build containerized applications in a consistent manner, it is common to use multi-stage builds. This has both operational and
security advantages.
In a multi-stage build, you create an intermediate container that contains all the tools you need to compile or generate the final
artifact. At the last stage, only the generated artifacts are copied to the final image, without any development dependencies or
temporary build files.
A well-designed multi-stage build contains only the minimal binary files and dependencies required for the final image, with no build
tools or intermediate files. This significantly reduces the attack surface.
In addition, a multi-stage build gives you more control over the files and artifacts that go into a container image, making it more
difficult for attackers or insiders to add malicious or untested artifacts without permission.
15. Secure Container Registries
Container registries are highly convenient, letting you download container images at the click of a button, or automatically as part of
development and testing workflows.
However, together with this convenience comes a security risk. There is no guarantee that the image you are pulling from the
registry is trusted. It may unintentionally contain security vulnerabilities, or may have intentionally been replaced with an image
compromised by attackers.
The solution is to use a private registry deployed behind your own firewall, to reduce the risk of tampering. To add another layer of
protection, ensure that your registry uses Role Based Access Control (RBAC) to restrict which users can upload and download
images from it.
Avoid giving open access to your entire team. This simplifies operations, but increases the risk that a team member, or an attacker
who has compromised their account, can introduce unwanted artifacts into an image.
16. Use Fixed Tags for Immutability
Tags are commonly used to manage versions of Docker images. For example, a latest tag is used to indicate that this is the latest
version of an image. However, because tags can be changed, it is possible for several images to have a latest tag, causing
confusion and inconsistent behavior in automated builds.
There are three main strategies for ensuring tags are immutable and are not affected by subsequent changes to the image:
● Preferring a more specific tag—if an image has several tags, a build process should select the tag containing the most
information (e.g. both version and operating system).
● Keeping a local copy of images—for example, in a private repository, and confirming that tags are the same as those in
the local copy.
● Signing images—Docker offers a Content Trust mechanism that allows you to cryptographically sign images using a private
key. This guarantees the image, and its tags, have not been modified.
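For example, Docker Content Trust can be enabled per shell session so that only signed images are pulled or pushed:
export DOCKER_CONTENT_TRUST=1
docker pull ubuntu:22.04
With the variable set, docker pull fails if the tag has no valid signature. Alternatively, pinning a base image by digest (FROM image@sha256:<digest>) makes the reference immutable regardless of tag changes.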
17. Add the HEALTHCHECK Instruction to the Container Image
The HEALTHCHECK instruction tells Docker to continuously test a container, to check that it is still working. If Docker finds that a
container is not healthy, it can automatically restart it. This can allow your Docker environment to automatically respond to issues
that affect container availability or security.
Implementing the HEALTHCHECK instruction is straightforward. It involves adding a command to your Dockerfile that Docker can
execute to check the health of your container. This command could be as simple as checking if a particular service is running or as
complex as running a script that tests various aspects of your container.
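A minimal sketch, assuming the container serves HTTP on port 80 and has curl installed:
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
Docker marks the container as unhealthy after three consecutive failed checks; an orchestrator or restart policy can then react to that status.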
18. Use COPY Instead of ADD When Writing Dockerfiles
COPY and ADD are two commands you can use in your Dockerfiles to add elements to your container. The main difference
between them is that ADD has some additional features—for example, it can automatically extract compressed files, and can
download files from a URL.
These additional features in the ADD command can lead to security vulnerabilities. For example, if you use ADD to download a file
from a URL, and that URL is compromised, your Docker container could be infected with malware. Therefore, it is more secure to
use COPY in your Dockerfiles.
19. Monitor Container Activity
Visibility and monitoring are critical to smooth operation and security of Docker containers. Containerized environments are
dynamic, and close monitoring is required to understand what is running in your environment, identify anomalies and respond to
them.
Each container image can have multiple running instances. Due to the speed at which new images and versions are deployed,
issues can quickly propagate across containers and applications. Therefore, it is critical to identify problems early and remediate
them at the source—for example, by identifying a faulty image, fixing it, and rebuilding all containers using that image.
Put tools and practices in place that can help you achieve observability of the following components:
● Docker hosts
● Container engines
● Master nodes (if running an orchestrator like Kubernetes)
● Containerized middleware and networking
● Workloads running in containers
In large-scale environments, this can only be achieved with dedicated cloud-native monitoring tools.
20. Secure Containers at Runtime
At the center of the cloud native stack are workloads, always a prized asset for hackers. The ability to stop an attack in progress is
of utmost importance but few organizations are effectively able to stop an attack or zero-day exploit as it happens, or before it
happens.
Runtime security for Docker containers involves securing your workload, so that once a container is running, drift is not possible,
and any malicious action is blocked immediately. Ideally, this should be done with minimal overhead and rapid response time.
Implement drift prevention measures to stop attacks in progress and prevent zero day exploits. In addition, use automated
vulnerability patching and management to provide another layer of runtime security.
21. Save Troubleshooting Data Separately from Containers
If your team needs to log into your containers using SSH for every maintenance operation, this creates a security risk. You
should design a way to maintain containers without needing to directly access them.
A good way to do this and limit SSH access is to make the logs available outside the container. In this way, administrators can
troubleshoot containers without logging in. They can then tear down existing containers and deploy new ones, without ever
establishing a connection.
22. Use Metadata Labels for Images
Container labeling is a common practice, applied to objects like images, deployments, Docker containers, volumes, and networks.
Use labels to add information to containers, such as licensing information, sources, names of authors, and relation of containers to
projects or components. They can also be used to categorize containers and their contents for compliance purposes, for example
labeling a container as containing protected data.
Labels are commonly used to organize containerized environments and automate workflows. However, when workflows rely on
labels, errors in applying a label can have severe consequences. To address this concern, automate labeling processes as much as
possible, and carefully control which users and roles are allowed to assign or modify labels.
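For example, labels can be added in the Dockerfile with the LABEL instruction (the keys follow the OCI image-spec annotation convention; the values are illustrative):
LABEL org.opencontainers.image.authors="team@example.com" \
      org.opencontainers.image.source="https://git.example.com/my-app" \
      org.opencontainers.image.version="1.2.3" \
      data.classification="internal"
They can then be read with docker inspect, or used to filter objects, e.g. docker ps --filter "label=data.classification=internal".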
Host Configuration
● Create a separate partition for containers
● Harden the container host
● Update your Docker software on a regular basis
● Manage Docker daemon access authorization wisely
● Configure permissions on Docker files and directories, and
● Audit all Docker daemon activity.
Docker Daemon Configuration
● Restrict network traffic between default bridge containers and access to new privileges from containers.
● Enable user namespace support for additional isolation; enable Docker client command authorization and live restore, and keep the default cgroup
usage
● Disable legacy registry operations and Userland Proxy
● Avoid networking misconfiguration by allowing Docker to make changes to iptables, and avoid experimental features during
production.
● Configure TLS authentication for Docker daemon and centralized and remote logging.
● Set the logging level to 'info', and set an appropriate default ulimit
● Don’t use insecure registries or the aufs storage driver
● Apply base device size for containers and a daemon-wide custom SECCOMP profile to limit calls.
Container Images and Build File
● Create a user for the container
● Ensure containers use only trusted images
● Ensure unnecessary packages are not installed in the container
● Include security patches during scans and rebuilding processes
● Enable content trust for Docker
● Add HEALTHCHECK instructions to the container image
● Remove setuid and setgid permissions from the images
● Use COPY instead of ADD in Dockerfiles
● Install only verified packages
● Don’t use update instructions (such as apt-get update) alone or on a single line in the Dockerfile; combine them with the corresponding install command in the same RUN instruction
● Don’t store secrets in Dockerfiles
Container Runtime
● Restrict containers from acquiring additional privileges and restrict Linux Kernel Capabilities.
● Enable AppArmor Profile.
● Avoid use of privileged containers during runtime, running ssh within containers, mapping privileged ports within containers.
● Ensure sensitive host system directories aren’t mounted on containers, the container's root filesystem is mounted as read-
only, the Docker socket is not mounted inside any containers.
● Set an appropriate CPU priority for the container, set the 'on-failure' container restart policy with a maximum retry count of 5, and open only necessary ports on
the container.
● Apply per need SELinux security options, and overwrite the default ulimit at runtime.
● Don’t share the host's network namespace and the host's process namespace, the host's IPC namespace, mount
propagation mode, the host's UTS namespace, the host's user namespaces.
● Limit memory usage for container and bind incoming container traffic to a specific host interface.
● Don’t expose host devices directly to containers, don’t disable the default SECCOMP profile, don’t use docker exec
commands with the privileged or user=root options, and don’t use Docker's default bridge docker0.
● Confirm cgroup usage and use PIDs cgroup limit, check container health at runtime, and always update docker commands
with the latest version of the image.
Docker Security Operations
Avoid image sprawl and container sprawl.
Docker Swarm Configuration
● Enable swarm mode only if needed
● Create a minimum number of manager nodes in a swarm
● Bind swarm services to a specific host interface
● Encrypt container data exchanged between nodes on different overlay networks
● Manage secrets in a Swarm cluster with Docker's secret management commands
● Run swarm manager in auto-lock mode
● Rotate the swarm manager auto-lock key periodically
● Rotate node and CA certificates as needed
● Separate management plane traffic from data plane traffic
Docker Forensics
The following toolkit performs post-mortem analysis of Docker runtime environments based on forensic HDD copies
of the Docker host system.
● dof (Docker Forensics Toolkit) - Extracts and interprets forensic artifacts from disk images of
Docker Host systems
● https://github.com/docker-forensics-toolkit/toolkit
Docker explorer
This project helps a forensics analyst explore offline Docker filesystems.
This is not an officially supported Google product.
● https://github.com/google/docker-explorer?tab=readme-ov-file
Container explorer
Container Explorer (container-explorer) is a tool to explore containers of a disk image.
Container Explorer supports exploring containers managed using containerd and docker container runtimes.
Container Explorer attempts to provide the familiar output generated by tools like ctr and docker.
Container Explorer provides the following functionalities:
● Exploring namespaces
● Exploring containers
● Exploring images
● Exploring snapshots
● Exploring contents
● Mounting containers
● Supporting JSON output
● https://github.com/google/container-explorer
DOCKER vs PODMAN
Podman is an open-source container runtime management tool that has gained popularity
as an alternative to Docker.
It originates from the broader container ecosystem in the Linux world and provides a
lightweight, secure, and efficient environment for managing containers.
Podman addresses several key problems faced by developers and administrators.
Firstly, it allows users to run containers without requiring a daemon (system service),
eliminating the need for a root process, enhancing security, and providing a more
streamlined experience.
Additionally, it offers a familiar command-line interface, allowing users to easily transition
from Docker and leverage existing container management knowledge.
Furthermore, it provides improved compatibility with the Open Container Initiative (OCI)
standards, enabling better interoperability with other container tools and platforms.
Docker Vs Podman Vs Containerd Vs CRI-O
Exploring the key roles of container runtimes in modern software deployment, this comparison navigates the unique features of
four popular technologies:
● Docker : A comprehensive platform that enables developers to build, share, and run containers with an easy-to-use CLI
and a daemon-based architecture.
● Podman : A daemonless container engine for developing, managing, and running OCI Containers on your Linux System,
with a CLI similar to Docker.
● Containerd : An industry-standard core container runtime, focused on simplicity and robustness, providing the minimum
functionalities required to run containers and manage images on a system.
● CRI-O : A lightweight container runtime specifically designed for Kubernetes, providing an implementation of the
Kubernetes Container Runtime Interface (CRI) to allow OCI compatible runtimes to be used in Kubernetes clusters.
USEFUL HELP
Official Resources:
Docker Documentation: https://docs.docker.com/
Docker Get Started: https://docs.docker.com/get-started/
Docker Labs: https://dockerlabs.collabnix.com/
Play with Docker: https://labs.play-with-docker.com/
Docker Hub: https://hub.docker.com/
Katacoda Labs: https://katacoda.com/
Docker Awesome: https://github.com/docker/awesome-compose
Docker Cheat Sheet: https://devhints.io/docker
Docker Blog: https://www.docker.com/blog/
QUESTIONS & DISCUSSION
More Related Content

Similar to Tips pour sécuriser ses conteneurs docker/podman

Docker for Deep Learning (Andrea Panizza)
Docker for Deep Learning (Andrea Panizza)Docker for Deep Learning (Andrea Panizza)
Docker for Deep Learning (Andrea Panizza)MeetupDataScienceRoma
 
Managing Docker containers
Managing Docker containersManaging Docker containers
Managing Docker containerssiuyin
 
Docker Essentials Workshop— Innovation Labs July 2020
Docker Essentials Workshop— Innovation Labs July 2020Docker Essentials Workshop— Innovation Labs July 2020
Docker Essentials Workshop— Innovation Labs July 2020CloudHero
 
Docker in everyday development
Docker in everyday developmentDocker in everyday development
Docker in everyday developmentJustyna Ilczuk
 
Docker and containers - Presentation Slides by Priyadarshini Anand
Docker and containers - Presentation Slides by Priyadarshini AnandDocker and containers - Presentation Slides by Priyadarshini Anand
Docker and containers - Presentation Slides by Priyadarshini AnandPRIYADARSHINI ANAND
 
Docker for developers z java
Docker for developers z javaDocker for developers z java
Docker for developers z javaandrzejsydor
 
Docker: A New Way to Turbocharging Your Apps Development
Docker: A New Way to Turbocharging Your Apps DevelopmentDocker: A New Way to Turbocharging Your Apps Development
Docker: A New Way to Turbocharging Your Apps Developmentmsyukor
 
How to Dockerize Web Application using Docker Compose
How to Dockerize Web Application using Docker ComposeHow to Dockerize Web Application using Docker Compose
How to Dockerize Web Application using Docker ComposeEvoke Technologies
 
Docker Introduction.pdf
Docker Introduction.pdfDocker Introduction.pdf
Docker Introduction.pdfOKLABS
 
How to dockerize rails application compose and rails tutorial
How to dockerize rails application compose and rails tutorialHow to dockerize rails application compose and rails tutorial
How to dockerize rails application compose and rails tutorialKaty Slemon
 
DevAssistant, Docker and You
DevAssistant, Docker and YouDevAssistant, Docker and You
DevAssistant, Docker and YouBalaBit
 
Dockers & kubernetes detailed - Beginners to Geek
Dockers & kubernetes detailed - Beginners to GeekDockers & kubernetes detailed - Beginners to Geek
Dockers & kubernetes detailed - Beginners to GeekwiTTyMinds1
 

Similar to Tips pour sécuriser ses conteneurs docker/podman (20)

Docker for Developers
Docker for DevelopersDocker for Developers
Docker for Developers
 
Docker for Deep Learning (Andrea Panizza)
Docker for Deep Learning (Andrea Panizza)Docker for Deep Learning (Andrea Panizza)
Docker for Deep Learning (Andrea Panizza)
 
Managing Docker containers
Managing Docker containersManaging Docker containers
Managing Docker containers
 
Docker
DockerDocker
Docker
 
docker.pdf
docker.pdfdocker.pdf
docker.pdf
 
Docker Essentials Workshop— Innovation Labs July 2020
Docker Essentials Workshop— Innovation Labs July 2020Docker Essentials Workshop— Innovation Labs July 2020
Docker Essentials Workshop— Innovation Labs July 2020
 
Docker in everyday development
Docker in everyday developmentDocker in everyday development
Docker in everyday development
 
Docker and containers - Presentation Slides by Priyadarshini Anand
Docker and containers - Presentation Slides by Priyadarshini AnandDocker and containers - Presentation Slides by Priyadarshini Anand
Docker and containers - Presentation Slides by Priyadarshini Anand
 
Docker for developers z java
Docker for developers z javaDocker for developers z java
Docker for developers z java
 
Docker.pdf
Docker.pdfDocker.pdf
Docker.pdf
 
Docker: A New Way to Turbocharging Your Apps Development
Docker: A New Way to Turbocharging Your Apps DevelopmentDocker: A New Way to Turbocharging Your Apps Development
Docker: A New Way to Turbocharging Your Apps Development
 
How to Dockerize Web Application using Docker Compose
How to Dockerize Web Application using Docker ComposeHow to Dockerize Web Application using Docker Compose
How to Dockerize Web Application using Docker Compose
 
Docker for dev
Docker for devDocker for dev
Docker for dev
 
Docker, LinuX Container
Docker, LinuX ContainerDocker, LinuX Container
Docker, LinuX Container
 
Docker Introduction.pdf
Docker Introduction.pdfDocker Introduction.pdf
Docker Introduction.pdf
 
How to dockerize rails application compose and rails tutorial
How to dockerize rails application compose and rails tutorialHow to dockerize rails application compose and rails tutorial
How to dockerize rails application compose and rails tutorial
 
Introduction To Docker
Introduction To  DockerIntroduction To  Docker
Introduction To Docker
 
DevAssistant, Docker and You
DevAssistant, Docker and YouDevAssistant, Docker and You
DevAssistant, Docker and You
 
How to _docker
How to _dockerHow to _docker
How to _docker
 
Dockers & kubernetes detailed - Beginners to Geek
Dockers & kubernetes detailed - Beginners to GeekDockers & kubernetes detailed - Beginners to Geek
Dockers & kubernetes detailed - Beginners to Geek
 

Recently uploaded

Folding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesFolding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesPhilip Schwarz
 
Intelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmIntelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmSujith Sukumaran
 
Cloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackCloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackVICTOR MAESTRE RAMIREZ
 
React Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief UtamaReact Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief UtamaHanief Utama
 
Unveiling Design Patterns: A Visual Guide with UML Diagrams
Unveiling Design Patterns: A Visual Guide with UML DiagramsUnveiling Design Patterns: A Visual Guide with UML Diagrams
Unveiling Design Patterns: A Visual Guide with UML DiagramsAhmed Mohamed
 
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...gurkirankumar98700
 
Asset Management Software - Infographic
Asset Management Software - InfographicAsset Management Software - Infographic
Asset Management Software - InfographicHr365.us smith
 
Implementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureImplementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureDinusha Kumarasiri
 
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed DataAlluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed DataAlluxio, Inc.
 
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideBuilding Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideChristina Lin
 
Cloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEECloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEEVICTOR MAESTRE RAMIREZ
 
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...soniya singh
 
Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Andreas Granig
 
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024StefanoLambiase
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityNeo4j
 
software engineering Chapter 5 System modeling.pptx
software engineering Chapter 5 System modeling.pptxsoftware engineering Chapter 5 System modeling.pptx
software engineering Chapter 5 System modeling.pptxnada99848
 
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...MyIntelliSource, Inc.
 
chapter--4-software-project-planning.ppt
chapter--4-software-project-planning.pptchapter--4-software-project-planning.ppt
chapter--4-software-project-planning.pptkotipi9215
 

Recently uploaded (20)

Folding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesFolding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a series
 
Intelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmIntelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalm
 
Cloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackCloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStack
 
React Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief UtamaReact Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief Utama
 
Unveiling Design Patterns: A Visual Guide with UML Diagrams
Unveiling Design Patterns: A Visual Guide with UML DiagramsUnveiling Design Patterns: A Visual Guide with UML Diagrams
Unveiling Design Patterns: A Visual Guide with UML Diagrams
 
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
 
Asset Management Software - Infographic
Asset Management Software - InfographicAsset Management Software - Infographic
Asset Management Software - Infographic
 
Implementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureImplementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with Azure
 
Call Girls In Mukherjee Nagar 📱 9999965857 🤩 Delhi 🫦 HOT AND SEXY VVIP 🍎 SE...
Call Girls In Mukherjee Nagar 📱  9999965857  🤩 Delhi 🫦 HOT AND SEXY VVIP 🍎 SE...Call Girls In Mukherjee Nagar 📱  9999965857  🤩 Delhi 🫦 HOT AND SEXY VVIP 🍎 SE...
Call Girls In Mukherjee Nagar 📱 9999965857 🤩 Delhi 🫦 HOT AND SEXY VVIP 🍎 SE...
 
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed DataAlluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
 
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort ServiceHot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
 
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideBuilding Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
 
Cloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEECloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEE
 
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
 
Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024
 
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered Sustainability
 
software engineering Chapter 5 System modeling.pptx
software engineering Chapter 5 System modeling.pptxsoftware engineering Chapter 5 System modeling.pptx
software engineering Chapter 5 System modeling.pptx
 
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
 
chapter--4-software-project-planning.ppt
chapter--4-software-project-planning.pptchapter--4-software-project-planning.ppt
chapter--4-software-project-planning.ppt
 

Tips pour sécuriser ses conteneurs docker/podman

  • 1. Tips for generating docker complaints with the dev(sec)ops Thierry GAYET - 01/2024
  • 2. GOAL The purpose of this presentation will to provide several useful information on the good way to make docker containers.
  • 5.
  • 6.
  • 12. # syntax=docker/dockerfile:1 FROM ubuntu:22.04 COPY . /app RUN make /app CMD python /app/app.py Each instruction creates one layer : ● FROM creates a layer from the ubuntu:22.04 Docker image. ● COPY adds files from your Docker client's current directory. ● RUN builds your application with make. ● CMD specifies what command to run within the container. [EXAMPLE] DOCKERFILE
  • 13. REFERENCES Never use an undefined tag such as : FROM ubuntu or FROM ubuntu:latest This makes generation inaccurate and non-reproducible because the tag can change ! So always use a specific tag such as : FROM ubuntu:22.04 Officials tags are available on docker hub : https://hub.docker.com/_/ubuntu/tags
  • 16. DYNAMIC FILE GENERATION ON A DOCKERFILE It can ve useful to generate dynamically some files : # syntax=docker/dockerfile:1 FROM golang:1.21 WORKDIR /src COPY << EOF ./main.go package main import "fmt" func main() { fmt.Println("hello, world") } EOF RUN go build -o /bin/hello ./main.go
  • 18.
  • 19. Layers The order of Dockerfile instructions matters. A Docker build consists of a series of ordered build instructions. Each instruction in a Dockerfile roughly translates to an image layer. The following diagram illustrates how a Dockerfile translates into a stack of layers in a container image. Because of the current order of the Dockerfile instructions, the builder must download the Go modules again, despite none of the packages having changed since the last time.
  • 20. Cached layers When you run a build, the builder attempts to reuse layers from earlier builds. If a layer of an image is unchanged, then the builder picks it up from the build cache. If a layer has changed since the last build, that layer, and all layers that follow, must be rebuilt. The Dockerfile from the previous section copies all project files to the container (COPY . .) and then downloads application dependencies in the following step (RUN go mod download). If you were to change any of the project files, then that would invalidate the cache for the COPY layer. It also invalidates the cache for all of the layers that follow.
  • 21. Update the instruction order You can avoid this redundancy by reordering the instructions in the Dockerfile. Change the order of the instructions so that downloading and installing dependencies occur before the source code is copied over to the container. In that way, the builder can reuse the "dependencies" layer from the cache, even when you make changes to your source code. Go uses two files, called go.mod and go.sum, to track dependencies for a project. These files are to Go, what package.json and package-lock.json are to JavaScript. For Go to know which dependencies to download, you need to copy the go.mod and go.sum files to the container. Add another COPY instruction before RUN go mod download, this time copying only the go.mod and go.sum files. # syntax=docker/dockerfile:1 FROM golang:1.21-alpine WORKDIR /src - COPY . . + COPY go.mod go.sum . RUN go mod download + COPY . . RUN go build -o /bin/client ./cmd/client RUN go build -o /bin/server ./cmd/server ENTRYPOINT [ "/bin/server" ]
  • 22. Ordering your Dockerfile instructions appropriately helps you avoid unnecessary work at build time. https://kodekloud.com/blog/docker-image-layers/ Now if you edit your source code, building the image won't cause the builder to download the dependencies each time. The COPY . . instruction appears after the package management instructions, so the builder can reuse the RUN go mod download layer.
  • 23.
  • 24.
  • 25.
  • 26. ADD NON ROOT USER
  • 27. ADD NON ROOT USER By default, Docker containers run as the root user, which can pose security risks if the container becomes compromised. Also, running as root can be an issue when sharing folders between the host and the docker container. To reduce these risks, we can run a Docker container with a custom non-root user that matches your host Linux user's user ID (UID) and group ID (GID), ensuring seamless permission handling for mounted folders. Running a docker build command that uses (mainly) a non-root user might force us to use sudo for some commands. The same is valid for running the docker itself using unattended scripts. You may need elevated privileges for specific tasks. Granting password-less sudo permissions to a non-root user allows you to perform administrative tasks without the risk of running the entire container as the root user.
  • 28. Step 1: Adjust the Dockerfile to Accept UID and GID as Arguments Modify your Dockerfile to accept the host's UID and GID as arguments. This way, you can create a user in the container with a matching UID and GID. Add the following lines to your Dockerfile: FROM ubuntu ARG UID ARG GID # Update the package list, install sudo, create a non-root user, and grant password-less sudo permissions RUN apt update && apt install -y sudo && addgroup --gid $GID nonroot && adduser --uid $UID --gid $GID --disabled-password --gecos "" nonroot && echo 'nonroot ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers # Set the non-root user as the default user USER nonroot ADD NON ROOT USER
  • 29. Step 2: Set the Working Directory Set the working directory where the non-root user can access it. Add the following line to your Dockerfile: # Set the working directory WORKDIR /home/nonroot/app This sets the working directory to '/home/nonroot/app', where the non-root user has read and write permissions. ADD NON ROOT USER
  • 30. Step 3: Copy Files and Set Permissions Ensure the non-root user has the necessary permissions to access the copied files. Add the following lines to your Dockerfile: # Copy files into the container and set the appropriate permissions COPY --chown=nonroot:nonroot . /home/nonroot/app RUN chmod -R 755 /home/nonroot/app ADD NON ROOT USER
  • 31. ADD NON ROOT USER Step 4: Build and Run the Docker Container with UID and GID Parameters Now you can build the Docker image and run the container with the custom non-root user. Pass your host's UID and GID as build arguments to create a user with matching permissions. Use the following commands to build and run your container: # Get your host's UID and GID export HOST_UID=$(id -u) export HOST_GID=$(id -g) # Build the Docker image docker build --build-arg UID=$HOST_UID --build-arg GID=$HOST_GID -t your-image-name . # Run the Docker container docker run -it --rm --name your-container-name your-image-name id The docker output will be : uid=1000(nonroot) gid=1000(nonroot) groups=1000(nonroot)
  • 32. ADD NON ROOT USER Optional - Adding Docker Compose for Running a Custom Non-Root User Container Docker Compose is a tool for defining and running multi-container applications using a YAML file to configure the application's services, networks, and volumes. It simplifies managing containers, especially when working with multiple services. This section will discuss how to use Docker Compose to run a Docker container with a custom non-root user that matches your host's UID and GID. Create a docker-compose.yml file in your project directory with the following content: version: '3.8' services: your_service_name: build: context: . args: UID: ${HOST_UID} GID: ${HOST_GID} image: your-image-name container_name: your-container-name volumes: - ./app:/home/nonroot/app
  • 33. ADD NON ROOT USER This YAML file defines a service, your_service_name, using the Dockerfile in the current directory. The build section passes the UID and GID build arguments from the host environment variables HOST_UID and HOST_GID. The volumes section maps a local directory (./app) to the container's working directory (/home/nonroot/app), ensuring seamless permission handling for the mounted folder. First, to run the container using Docker Compose set the HOST_UID and HOST_GID environment variables in your host system. The following command will build the docker (if needed), start it, print the user ID, and remove the container: HOST_UID=$(id -u) HOST_GID=$(id -g) docker compose run --rm your_service_name id
  • 34. Running a Docker container with a custom non-root user that matches your host's UID and GID ensures seamless permission handling for mounted folders while maintaining security. Optimizing the Dockerfile and combining RUN commands can reduce the image size and improve performance. Following these steps will help you create and run a Docker container with a non-root user that aligns with your host's permissions, reducing the risk of potential security breaches and permission issues. Always prioritize security when deploying applications and containers to ensure a safe and stable environment. Integrating Docker Compose into your workflow simplifies container management and improves the overall development experience, allowing you to focus on building your application. ADD NON ROOT USER
  • 35. ● you should chmod outside the image before you COPY to avoid duplicating all the files in a new layer (explore them with a tool like Dive to detect such waste; also note that while not documented you can -- chmod during COPY with BuildKit enabled, but this applies to files and directories, and most if the time you don't want files to be executables) ● apps shouldn't be given permission to modify themselves; while not as important as on a non-containerized system, a vulnerability in the app could lead to it modifying its own code and configuration files, which could allow RCEs. We've seen that for config files with log4j and logback a year ago. Only "data" files should be writeable. ADD NON ROOT USER
• 37. MULTISTAGE BUILD
Multi-stage builds serve several purposes:
● have several nested build stages that can be targeted separately
● reduce the final size of an image by copying only the output of an intermediate build stage into the final image
To build containerized applications in a consistent manner, it is common to use multi-stage builds. This has both operational and security advantages. In a multi-stage build, you create an intermediate container that contains all the tools you need to compile or generate the final artifact. At the last stage, only the generated artifacts are copied to the final image, without any development dependencies or temporary build files. A well-designed multi-stage build contains only the minimal binary files and dependencies required for the final image, with no build tools or intermediate files. This significantly reduces the attack surface. In addition, a multi-stage build gives you more control over the files and artifacts that go into a container image, making it more difficult for attackers or insiders to add malicious or untested artifacts without being noticed.
• 38. MULTISTAGE BUILD
Multi-stage builds require Docker 17.05 or higher on both the daemon and the client.
• 39. MULTISTAGE BUILD
Why do we need multi-stage builds?
One of the most challenging things about building images is keeping the image size down. To do that, we have to be careful when moving from one environment to another and keep track of build artifacts; traditionally this was achieved with shell scripts. On top of that, maintaining two or more Dockerfiles for one application is not ideal. Multi-stage builds simplify this situation.
• 40. MULTISTAGE BUILD
What is a multi-stage build?
Multi-stage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain.
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don't want in the final image.

COPY --from=0 /src/app .
• 41. MULTISTAGE BUILD
In the above instruction, we are copying artifacts from stage 0 and leaving everything else behind. But referring to stages by number is not easy to read. We can name a build stage instead:

FROM nginx:latest AS dev
COPY --from=dev /src/app .
  • 42. MULTISTAGE BUILD Command Guide — Visual Studio Code Intelligence :
• 43. MULTISTAGE BUILD
Control over a build: stop at a specific build stage
When you build your image, you don't necessarily need to build the entire Dockerfile, including every stage. You can specify a target build stage. This is useful when debugging a specific build stage.

$ docker build --target test .

This builds the image up to the specified target stage and then stops.
When using multi-stage builds, you are not limited to copying from stages you created earlier in your Dockerfile; you can also copy from another image. The Docker client pulls the image from a registry (such as Docker Hub) if necessary and copies the artifact from there.
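For illustration, a minimal sketch of copying directly from an external image instead of a named stage (the nginx image and the paths are just examples):

# Copy a file straight out of another image, not a stage defined in this Dockerfile
COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf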
• 44. MULTISTAGE BUILD
EXAMPLES #1 :

FROM maven:3.5.2-jdk-9 AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package

FROM openjdk:9
COPY --from=build /usr/src/app/target/flighttracker-1.0.0-SNAPSHOT.jar /usr/app/flighttracker-1.0.0-SNAPSHOT.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/app/flighttracker-1.0.0-SNAPSHOT.jar"]
• 45. MULTISTAGE BUILD
EXAMPLES #2 :

FROM node:12.13.0-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx
EXPOSE 3000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/build /usr/share/nginx/html
• 46. MULTISTAGE BUILD
EXAMPLES #3 :

FROM mcr.microsoft.com/vscode/devcontainers/typescript-node:12 AS development
# Build steps go here

FROM development as builder
WORKDIR /app
COPY src/ *.json ./
RUN yarn install && yarn compile && \
    # Just install prod dependencies
    yarn install --prod

# Actual production environment setup goes here
FROM node:12-slim AS production
WORKDIR /app
COPY --from=builder /app/out/ ./out/
COPY --from=builder /app/node_modules/ ./node_modules/
COPY --from=builder /app/package.json .
EXPOSE 3000
ENTRYPOINT [ "/bin/bash", "-c" ]
CMD [ "npm start" ]
• 47. MULTISTAGE BUILD
EXAMPLES #4 :

# Stage 1: Build
FROM python:3.10 AS build

# Install
RUN apt update && apt install -y sudo

# Add non-root user
ARG USERNAME=nonroot
RUN groupadd --gid 1000 $USERNAME && useradd --uid 1000 --gid 1000 -m $USERNAME

## Make sure to reflect new user in PATH
ENV PATH="/home/${USERNAME}/.local/bin:${PATH}"
USER $USERNAME

## Pip dependencies
# Upgrade pip
RUN pip install --upgrade pip
# Install production dependencies
COPY --chown=nonroot:1000 requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt && rm /tmp/requirements.txt
• 48. MULTISTAGE BUILD

# Stage 2: Development
FROM build AS development
# Install development dependencies
COPY --chown=nonroot:1000 requirements-dev.txt /tmp/requirements-dev.txt
RUN pip install -r /tmp/requirements-dev.txt && rm /tmp/requirements-dev.txt

# Stage 3: Production
FROM build AS production
# No additional steps are needed, as the production dependencies are already installed

docker build --target development : builds an image with both production and development dependencies, while
docker build --target production : builds an image with only the production dependencies.
• 49. TIPS TO WRITE A DOCKERFILE
• 50. Use multi-stage builds
Multi-stage builds let you reduce the size of your final image by creating a cleaner separation between the building of your image and the final output. Split your Dockerfile instructions into distinct stages to make sure that the resulting output only contains the files that are needed to run the application. Using multiple stages can also let you build more efficiently by executing build steps in parallel. See Multi-stage builds for more information.
  • 51. Exclude with .dockerignore To exclude files not relevant to the build, without restructuring your source repository, use a .dockerignore file. This file supports exclusion patterns similar to .gitignore files. For information on creating one, see Dockerignore file.
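As an illustration, a minimal .dockerignore sketch; the entries below are typical examples to adapt to your project, not mandatory ones:

# .dockerignore
.git
node_modules
*.log
.env
docker-compose.yml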
  • 52. Create ephemeral containers The image defined by your Dockerfile should generate containers that are as ephemeral as possible. Ephemeral means that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration. Refer to Processes under The Twelve-factor App methodology to get a feel for the motivations of running containers in such a stateless fashion.
  • 53. Don't install unnecessary packages Avoid installing extra or unnecessary packages just because they might be nice to have. For example, you don’t need to include a text editor in a database image. When you avoid installing extra or unnecessary packages, your images have reduced complexity, reduced dependencies, reduced file sizes, and reduced build times.
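On Debian/Ubuntu based images, one common way to keep the footprint small is to skip recommended packages and clean the apt cache in the same layer; a minimal sketch (curl and ca-certificates are placeholder packages):

RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
    && rm -rf /var/lib/apt/lists/*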
  • 54. Decouple applications Each container should have only one concern. Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers. For instance, a web application stack might consist of three separate containers, each with its own unique image, to manage the web application, database, and an in-memory cache in a decoupled manner. Limiting each container to one process is a good rule of thumb, but it's not a hard and fast rule. For example, not only can containers be spawned with an init process, some programs might spawn additional processes of their own accord. For instance, Celery can spawn multiple worker processes, and Apache can create one process per request. Use your best judgment to keep containers as clean and modular as possible. If containers depend on each other, you can use Docker container networks to ensure that these containers can communicate.
• 55. Sort multi-line arguments
Whenever possible, sort multi-line arguments alphanumerically to make maintenance easier. This helps to avoid duplication of packages and makes the list much easier to update. This also makes PRs a lot easier to read and review. Adding a space before a backslash (\) helps as well. Here's an example from the buildpack-deps image:

RUN apt-get update && apt-get install -y \
    bzr \
    cvs \
    git \
    mercurial \
    subversion \
    && rm -rf /var/lib/apt/lists/*
  • 56. Leverage build cache When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified. For each instruction, Docker checks whether it can reuse the instruction from the build cache. The basic rules of build cache invalidation are as follows: ● Starting with a parent image that's already in the cache, the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated. ● In most cases, simply comparing the instruction in the Dockerfile with one of the child images is sufficient. However, certain instructions require more examination and explanation. ● For the ADD and COPY instructions, the modification time and size file metadata is used to determine whether cache is valid. During cache lookup, cache is invalidated if the file metadata has changed for any of the files involved. ● Aside from the ADD and COPY commands, cache checking doesn't look at the files in the container to determine a cache match. For example, when processing a RUN apt-get -y update command the files updated in the container aren't examined to determine if a cache hit exists. In that case just the command string itself is used to find a match. Once the cache is invalidated, all subsequent Dockerfile commands generate new images and the cache isn't used. If your build contains several layers and you want to ensure the build cache is reusable, order the instructions from less frequently changed to more frequently changed where possible. For more information about the Docker build cache and how to optimize your builds, see cache management.
• 57. Pin base image versions
Image tags are mutable, meaning a publisher can update a tag to point to a new image. This is useful because it lets publishers update tags to point to newer versions of an image. And as an image consumer, it means you automatically get the new version when you re-build your image.
For example, if you specify FROM alpine:3.19 in your Dockerfile, 3.19 resolves to the latest patch version for 3.19.

# syntax=docker/dockerfile:1
FROM alpine:3.19

At one point in time, the 3.19 tag might point to version 3.19.1 of the image. If you rebuild the image 3 months later, the same tag might point to a different version, such as 3.19.4. This publishing workflow is best practice, and most publishers use this tagging strategy, but it isn't enforced.
The downside is that you're not guaranteed to get the same image for every build. This could result in breaking changes, and it means you also don't have an audit trail of the exact image versions that you're using.
To fully secure your supply chain integrity, you can pin the image version to a specific digest. By pinning your images to a digest, you're guaranteed to always use the same image version, even if a publisher replaces the tag with a new image. For example, the following Dockerfile pins the Alpine image to the same tag as earlier, 3.19, but this time with a digest reference as well.
• 58.
FROM alpine:3.19@sha256:13b7e62e8df80264dbb747995705a986aa530415763a6c58f84a3ca8af9a5bcd

With this Dockerfile, even if the publisher updates the 3.19 tag, your builds would still use the pinned image version: 13b7e62e8df80264dbb747995705a986aa530415763a6c58f84a3ca8af9a5bcd.
While this helps you avoid unexpected changes, it's also more tedious to have to look up and include the image digest for base image versions manually each time you want to update it. And you're opting out of automated security fixes, which is likely something you want to get.
Docker Scout has a built-in Outdated base images policy that checks whether the base image version you're using is in fact the latest version. This policy also checks if pinned digests in your Dockerfile correspond to the correct version. If a publisher updates an image that you've pinned, the policy evaluation returns a non-compliant status, indicating that you should update your image.
Docker Scout also supports an automated remediation workflow for keeping your base images up-to-date. When a new image digest is available, Docker Scout can automatically raise a pull request on your repository to update your Dockerfiles to use the latest version. This is better than using a tag that changes the version automatically, because you're in control and you have an audit trail of when and how the change occurred. For more information about automatically updating your base images with Docker Scout, see Remediation.
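As a side note, if you do pin digests manually, one way to look them up is with the Docker CLI; a small sketch (alpine:3.19 is just an example tag):

# Print the manifest digest for a tag without pulling the image
docker buildx imagetools inspect alpine:3.19

# Or pull the image and list the local digests
docker pull alpine:3.19
docker images --digests alpine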
• 59. USEFUL TOOLS FOR CONTINUOUS INTEGRATION (CI) & CONTINUOUS DEPLOYMENT (CD) AND MORE
  • 60. The Docker Bench for Security is a script that checks for dozens of common best-practices around deploying Docker containers in production. The tests are all automated, and are based on the CIS Docker Benchmark v1.6.0. ● https://github.com/docker/docker-bench-security
  • 61. Lynis is a security auditing tool for systems based on UNIX like Linux, macOS, BSD, and others. It performs an in-depth security scan and runs on the system itself. The primary goal is to test security defenses and provide tips for further system hardening. It will also scan for general system information, vulnerable software packages, and possible configuration issues. Lynis was commonly used by system administrators and auditors to assess the security defenses of their systems. Besides the "blue team," nowadays penetration testers also have Lynis in their toolkit. We believe software should be simple, updated on a regular basis, and open. You should be able to trust, understand, and have the option to change the software. Many agree with us, as the software is being used by thousands every day to protect their systems. https://github.com/CISOfy/lynis https://cisofy.com/lynis/
• 62. HADOLINT : Haskell Dockerfile Linter
https://github.com/hadolint/hadolint
A smarter Dockerfile linter that helps you build best practice Docker images. The linter parses the Dockerfile into an AST and performs rules on top of the AST. It stands on the shoulders of ShellCheck to lint the Bash code inside RUN instructions.
You can run hadolint locally to lint your Dockerfile:

hadolint <Dockerfile>
hadolint --ignore DL3003 --ignore DL3006 <Dockerfile>      # exclude specific rules
hadolint --trusted-registry my-company.com:500 <Dockerfile> # warn when using untrusted FROM images

Docker comes to the rescue, providing an easy way to run hadolint on most platforms. Just pipe your Dockerfile to docker run:

docker run --rm -i hadolint/hadolint < Dockerfile
# OR
docker run --rm -i ghcr.io/hadolint/hadolint < Dockerfile
  • 63. Dockle - Container Image Linter for Security, Helping build the Best-Practice Docker Image, Easy to start https://github.com/goodwithtech/dockle
  • 64. Trivy is a comprehensive and versatile security scanner. Trivy has scanners that look for security issues, and targets where it can find those issues. https://github.com/aquasecurity/trivy Targets (what Trivy can scan): ● Container Image ● Filesystem ● Git Repository (remote) ● Virtual Machine Image ● Kubernetes ● AWS Scanners (what Trivy can find there): ● OS packages and software dependencies in use (SBOM) ● Known vulnerabilities (CVEs) ● IaC issues and misconfigurations ● Sensitive information and secrets ● Software licenses
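A few typical invocations, as a sketch; the image and directory arguments are placeholders:

# Scan a container image for vulnerabilities, secrets and licenses
trivy image python:3.10-slim

# Scan a project directory / filesystem
trivy fs .

# Scan IaC files (Dockerfile, Kubernetes manifests, Terraform) for misconfigurations
trivy config .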
• 65. DOCKER CLEAN
A simple shell script to clean up the Docker daemon.
GIT REPO : https://github.com/ZZROTDesign/docker-clean
INSTALL :
$ curl -s https://raw.githubusercontent.com/ZZROTDesign/docker-clean/v2.0.4/docker-clean | sudo tee /usr/local/bin/docker-clean > /dev/null && sudo chmod +x /usr/local/bin/docker-clean
USAGE :
$ docker-clean --all
• 66. DIVE TOOL
A tool for exploring a docker image, layer contents, and discovering ways to shrink the size of your Docker/OCI image.
GIT REPO :
https://github.com/wagoodman/dive
https://github.com/wagoodman/dive/releases
INSTALL :
$ curl -L https://github.com/wagoodman/dive/releases/download/v0.11.0/dive_0.11.0_darwin_amd64.tar.gz -o /tmp/dive_0.11.0_darwin_amd64.tar.gz && tar zxvf /tmp/dive_0.11.0_darwin_amd64.tar.gz -C /tmp && sudo mv /tmp/dive /usr/bin/dive
$ curl -L https://github.com/wagoodman/dive/releases/download/v0.11.0/dive_0.11.0_linux_arm64.deb -o /tmp/dive_0.11.0_linux_arm64.deb && sudo dpkg -i /tmp/dive_0.11.0_linux_arm64.deb
• 67. DIVE TOOL
USAGE :
To analyze a Docker image, simply run dive with an image tag/id/digest:

$ dive <your-image-tag>

or you can run dive through the docker command directly:

$ alias dive="docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock wagoodman/dive"
$ dive <your-image-tag>
# for example
$ dive nginx:latest

or, if you want to build your image and then jump straight into analyzing it:

$ dive build -t <some-tag> .
  • 69. Additionally you can run this in your CI pipeline to ensure you're keeping wasted space to a minimum (this skips the UI): $ CI=true dive <your-image> DIVE TOOL
• 71. LAZYDOCKER
The lazier way to manage everything Docker: a simple terminal UI for both docker and docker-compose, written in Go with the gocui library.
URL : https://github.com/jesseduffield/lazydocker
  • 73. STARTER Starter is an open-source command line tool to generate a Dockerfile and a service.yml file from arbitrary source code. The service.yml file is a Cloud 66 service definition file which is used to define the service configuration on a stack. Starter works in the same way as BuildPacks do, but only generates the above mentioned files; the image compile step happens on BuildGrid. Starter does not require any additional third party tools or frameworks to work (it's compiled as a Go executable). GIT REPO : https://www.startwithdocker.com/ https://www.youtube.com/watch?v=50-0IQNGd3g https://github.com/cloud66-oss/starter/releases/ https://github.com/cloud66-oss/starter#quick-start INSTALL : $ curl -L https://github.com/cloud66-oss/starter/releases/download/1.4.3/linux_amd64_1.4.3 -o /tmp/starter && sudo mv /tmp/starter /usr/bin/starter
• 74. STARTER
USAGE :
$ cd /my/project
$ starter -g dockerfile,service,docker-compose

This will analyze the project in the current folder and generate the three files: Dockerfile, docker-compose.yml and service.yml in the same folder, prompting for information when required.

Cloud 66 Starter ~ (c) 2016 Cloud 66
Detecting framework for the project at /Users/awesome/work/boom
Found ruby application
Enter ruby version: [latest]
----> Found config/database.yml
Found mysql, confirm? [Y/n]
Found redis, confirm? [Y/n]
Found elasticsearch, confirm? [Y/n]
Add any other databases? [y/N]
----> Analyzing dependencies
----> Parsing Procfile
----> Found Procfile item web
----> Found Procfile item worker
----> Found unicorn
This command will be run after each build: '/bin/sh -c "RAILS_ENV=_env:RAILS_ENV bundle exec rake db:schema:load"', confirm? [Y/n]
This command will be run after each deployment: '/bin/sh -c "RAILS_ENV=_env:RAILS_ENV bundle exec rake db:migrate"', confirm? [Y/n]
----> Writing Dockerfile…
----> Writing docker-compose.yml…
----> Writing service.yml
Done
  • 75. CADVISOR cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide. cAdvisor has native support for Docker containers and should support just about any other container type out of the box. We strive for support across the board so feel free to open an issue if that is not the case. cAdvisor's container abstraction is based on lmctfy's so containers are inherently nested hierarchically. GIT REPO : https://github.com/google/cadvisor https://github.com/google/cadvisor/blob/master/docs/web.md
• 76. CADVISOR
To quickly try out cAdvisor on your machine with Docker, we have a Docker image that includes everything you need to get started. You can run a single cAdvisor to monitor the whole machine. Simply run:

VERSION=v0.36.0   # use the latest release version from https://github.com/google/cadvisor/releases
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  --privileged \
  --device=/dev/kmsg \
  gcr.io/cadvisor/cadvisor:$VERSION

cAdvisor is now running (in the background) on http://localhost:8080. The setup includes directories with Docker state cAdvisor needs to observe.
  • 86. MONITOR A DOCKER COMPOSITION
  • 87. Centralized logging in a Dockerized environment, especially when using Docker Compose for container orchestration, offers several important benefits for monitoring, troubleshooting, and maintaining the health of your applications. Here are some key reasons why log centralization is crucial in a Docker composition: Visibility Across Containers : In a Docker composition, your application may consist of multiple interconnected containers. Centralized logging allows you to aggregate and view logs from all containers in a single location. This consolidated view simplifies troubleshooting and debugging by providing a holistic understanding of the application's behavior. Distributed Environment Monitoring : Docker Compose often involves deploying applications across multiple hosts or nodes. Centralized logging enables you to monitor the logs of containers distributed across different machines. This is especially valuable in microservices architectures where various services run independently. Troubleshooting and Diagnostics : Centralized logs serve as a valuable tool for troubleshooting and diagnostics. When an issue arises, having logs centralized allows you to quickly identify and analyze problems without the need to access individual containers or nodes. It accelerates the root cause analysis process.
  • 88. Security and Auditing : Centralized logging is crucial for security monitoring and auditing purposes. By aggregating logs in a centralized location, security events and anomalies can be easily identified. This aids in detecting and responding to security incidents, ensuring that any unauthorized access or suspicious activities are promptly addressed. Scalability and Performance Monitoring : As your Dockerized applications scale, monitoring and analyzing logs become more challenging. Centralized logging solutions can efficiently handle large volumes of logs and provide tools for searching, filtering, and analyzing logs at scale. This is essential for monitoring performance and identifying potential bottlenecks. Log Retention and Compliance : Centralized logging allows for consistent log retention policies. You can configure centralized logging systems to store logs for specific durations, ensuring compliance with regulatory requirements. This is important for auditing and meeting data retention standards. Integration with Monitoring Tools : Centralized logs can be integrated seamlessly with various monitoring and analytics tools. This integration enhances your ability to create dashboards, alerts, and notifications based on log data, facilitating proactive monitoring and alerting.
  • 89. Streamlining DevOps Processes : In a DevOps environment, where collaboration between development and operations is crucial, centralized logging streamlines communication. Developers and operations teams can share a common view of application behavior and collaborate effectively during the development, deployment, and maintenance phases. Efficient Log Management : Centralized logging systems often come with features for log aggregation, parsing, and indexing. These capabilities make log management more efficient, allowing you to search, analyze, and extract valuable insights from logs easily. Cost and Resource Optimization : Centralized logging can help optimize resource utilization by offloading log storage and analysis to dedicated systems. This ensures that containers focus on their primary tasks without incurring unnecessary overhead related to local log management. Popular centralized logging solutions include the ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Graylog, and others. The choice of a specific solution depends on your requirements and the scale of your Dockerized environment. See the following NN6 summary : https://blog.eleven-labs.com/fr/monitorer-ses-containers-docker/ https://docs.google.com/presentation/d/1qQQznJBX9hHfZyEkcNRfDM3ZhI4MbOAWl0mY2NKFFfE/edit?usp=drive_ link https://www.youtube.com/watch?v=44A_2oWnEII
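To make the idea concrete, a minimal docker-compose sketch that ships a service's logs to a central collector via the fluentd logging driver; the service name, image, and collector address are placeholder assumptions to adapt to your stack:

version: "3.8"
services:
  web:
    image: nginx:1.25
    logging:
      driver: fluentd
      options:
        fluentd-address: "localhost:24224"   # address of your log collector
        tag: "web"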
  • 93. "Security as Code" (SaC) in DevSecOps refers to the practice of integrating security controls, policies, and processes directly into the software development and deployment pipelines. The concept is analogous to the broader "Infrastructure as Code" (IaC) approach, where infrastructure provisioning and management are automated through code. Key aspects of Security as Code in DevSecOps include: Automation : Security as Code involves automating security processes and controls throughout the development and deployment lifecycle. This includes automating security testing, vulnerability scanning, compliance checks, and other security activities. Integration into CI/CD Pipelines : Security controls are integrated directly into continuous integration/continuous deployment (CI/CD) pipelines. This ensures that security assessments and validations are performed automatically at each stage of the software development lifecycle. Code Review and Analysis : Security as Code emphasizes the integration of security reviews and analyses directly into the code review process. Security checks are conducted alongside regular code reviews to identify and address security issues early in development. Policy as Code : Security policies and compliance requirements are codified, meaning that the rules and requirements for secure development and deployment are expressed as code. This allows for versioning, tracking changes, and maintaining a clear audit trail.
  • 94. Infrastructure as Code (IaC) Security : In addition to application security, Security as Code extends to the security of the underlying infrastructure. Security controls are applied to infrastructure components using IaC principles, ensuring that the entire technology stack is secure. Automated Security Testing : Automated security testing tools are utilized to assess code for vulnerabilities, misconfigurations, and compliance with security policies. These tools can include static application security testing (SAST), dynamic application security testing (DAST), and other specialized security scanners. Continuous Monitoring : Continuous security monitoring is part of the Security as Code approach. This involves the use of monitoring tools and automated processes to detect and respond to security incidents in real-time. Scalability and Consistency : By treating security controls as code, organizations can achieve scalability and consistency. Security policies are applied uniformly across different projects, environments, and teams, reducing the risk of human error and ensuring a consistent security posture.
  • 95. Collaboration between Security and Development : Security as Code promotes collaboration between security teams and development teams. Security requirements are clearly defined in code, making it easier for developers to understand and implement security controls without hindering the development process. Shift-Left Security : Security as Code embraces the "shift-left" approach, meaning that security considerations are moved earlier in the development lifecycle. This shift-left strategy helps catch and address security issues as early as possible, reducing the cost and effort of remediation.
  • 96. SAST, or Static Application Security Testing, is a key component in DevSecOps practices aimed at enhancing the security of software development processes. It is a type of security testing that is performed without executing the code. Instead, it analyzes the application's source code, bytecode, or binary code to identify potential security vulnerabilities, weaknesses, or coding errors. Here are some key points about SAST in the context of DevSecOps: Early Detection of Vulnerabilities : SAST is typically conducted early in the development lifecycle, during the coding and build phases. This allows security issues to be identified and addressed at an early stage, reducing the cost and effort required for fixing vulnerabilities later in the development process. Automation and Integration : In the DevSecOps methodology, automation is crucial for continuous integration and delivery. SAST tools are integrated into the development pipeline to automatically analyze code as it is committed, providing rapid feedback to developers about potential security issues. Identification of Code-level Security Flaws : SAST tools analyze the codebase for common security issues, such as SQL injection, cross-site scripting (XSS), buffer overflows, and other vulnerabilities. By scanning the source code, SAST tools can identify patterns and indicators that may pose security risks. Code Review Assistance : SAST tools can assist developers during code reviews by highlighting security-related issues. This helps developers understand and address security concerns while reviewing and refining their code.
  • 97. Language and Platform Support : SAST tools support various programming languages and frameworks. They are designed to identify vulnerabilities specific to the languages and platforms used in the application development, making them versatile across different technology stacks. False Positives and Tuning : SAST tools may generate false positives, where they flag code as insecure even though it is not. Tuning and customization of SAST tools are often necessary to reduce false positives and improve the accuracy of results. Complementing Dynamic Testing : While SAST focuses on analyzing the source code, dynamic application security testing (DAST) complements SAST by assessing the application in runtime. Both SAST and DAST contribute to a comprehensive security testing strategy in DevSecOps.
  • 98. DAST, or Dynamic Application Security Testing, is an essential component of DevSecOps practices focused on enhancing the security of software applications. DAST involves testing an application in its running state to identify vulnerabilities, weaknesses, and security issues from the perspective of an attacker. Here are key points about DAST in the context of DevSecOps: Runtime Testing : DAST tests the application while it is running or deployed in an environment. Instead of analyzing the source code like SAST (Static Application Security Testing), DAST interacts with the application dynamically to identify vulnerabilities that may be exploited during actual usage. Simulation of Real-World Attacks : DAST simulates real-world attack scenarios by sending malicious requests to the application, probing for vulnerabilities in the network, web services, APIs, and other entry points. It helps identify issues that may not be evident from static analysis alone. Automation and Continuous Testing : In DevSecOps, DAST is often automated and integrated into the continuous integration/continuous deployment (CI/CD) pipeline. This enables ongoing testing throughout the development lifecycle, providing quick feedback to developers about potential security weaknesses.
  • 99. Scanning Web Applications and APIs : DAST tools specialize in scanning web applications, APIs, and other web services. They analyze the application's responses to different inputs, identify security vulnerabilities like injection attacks, cross-site scripting (XSS), and other issues that might arise during real-world usage. Identification of Configuration Issues : DAST also helps identify configuration issues in the deployed environment that might expose security vulnerabilities. This includes issues related to server configurations, network settings, and authentication mechanisms. False Positives and Reporting : Similar to SAST, DAST tools may produce false positives. Adjustments and tuning are often required to reduce false positives and enhance the accuracy of results. DAST tools provide reports with identified vulnerabilities, severity levels, and recommendations for remediation. Complementing SAST : While SAST (Static Application Security Testing) focuses on identifying vulnerabilities in the source code, DAST complements this by detecting vulnerabilities that might only be apparent during runtime. Together, they provide a more comprehensive approach to application security. Integration with Security Orchestration : DAST tools are often integrated with security orchestration platforms to coordinate and automate security testing activities. This integration facilitates better collaboration between security teams and development teams.
• 100. A penetration test (pentest) in the context of DevSecOps refers to the process of systematically assessing the security of a system, application, or infrastructure by simulating a real-world attack. The objective is to identify and exploit vulnerabilities to determine the system's resilience to security threats. Integrating penetration testing into the DevSecOps pipeline is essential for identifying and addressing security issues early in the development lifecycle. Here are key aspects of penetration testing in DevSecOps: Automated and Continuous Testing : In the DevSecOps model, penetration testing is often automated and integrated into the continuous integration/continuous deployment (CI/CD) pipeline. This enables regular and systematic testing of applications and infrastructure throughout the development lifecycle. Early Detection of Vulnerabilities : Penetration testing helps identify vulnerabilities early in the development process. By detecting and addressing security issues during the development phase, organizations can reduce the likelihood of security flaws making it into production. Continuous Improvement : DevSecOps emphasizes continuous improvement, and penetration testing contributes to this by providing ongoing insights into the evolving security posture of the applications and systems. Regular testing helps organizations stay ahead of emerging threats.
  • 101. Real-World Simulation : Penetration tests simulate real-world cyberattacks, often involving attempts to exploit vulnerabilities, bypass security controls, and gain unauthorized access. This realistic approach helps organizations understand their security strengths and weaknesses in a dynamic environment. White Box and Black Box Testing : Penetration testing can take different forms, including white box testing (with knowledge of the internal structure and code) and black box testing (without prior knowledge). Both approaches provide valuable perspectives on security vulnerabilities. Comprehensive Security Assessment : Pentests assess various aspects of security, including network security, application security, infrastructure security, and potentially social engineering aspects. The goal is to provide a comprehensive view of the security landscape. Adherence to Compliance Requirements : Penetration testing is often required to meet regulatory compliance standards. By incorporating it into the DevSecOps process, organizations can demonstrate ongoing compliance and reduce the risk of security breaches.
  • 102. Collaboration with Development and Operations Teams : Collaboration is key in DevSecOps, and penetration testing involves close coordination with development and operations teams. This collaboration ensures that security findings are communicated effectively, and remediation efforts are understood and addressed promptly. Reporting and Remediation : Penetration testing results in detailed reports outlining vulnerabilities and recommended remediation steps. DevSecOps teams use these reports to prioritize and implement security fixes efficiently. Continuous Monitoring : While penetration testing provides a snapshot of the security posture, continuous monitoring tools and practices are also important to detect and respond to security incidents in real-time.
  • 103. ● Scan git repositories for finding potential credentials leakage. ● SAST (Static Application Security Test) ● SCA (Software Composition Analysis) ● IAST (Interactive Application Security Testing) ● DAST (Dynamic Application Security Test) ● IaC Scanning (Scanning Terraform, HelmChart code to find misconfiguration) ● Infrastructure scanning ● Compliance check DEVSECOPS
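To make this concrete in a pipeline, here is a rough sketch of wiring a Dockerfile linter and an image scanner into CI, assuming GitLab CI; the runner images, stage names, and the $CI_REGISTRY_IMAGE target are assumptions to adapt to your setup:

stages:
  - lint
  - scan

# Lint the Dockerfile with hadolint
hadolint:
  stage: lint
  image:
    name: hadolint/hadolint:latest-debian
  script:
    - hadolint Dockerfile

# Scan the built image for HIGH/CRITICAL vulnerabilities with Trivy
trivy:
  stage: scan
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CI_REGISTRY_IMAGE:latest"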
• 106. RULE #0 - Keep Host and Docker up to date
To prevent known container escape vulnerabilities, which typically end in escalation to root/administrator privileges, patching Docker Engine and Docker Machine is crucial.
In addition, containers (unlike virtual machines) share the kernel with the host, therefore kernel exploits executed inside a container will directly hit the host kernel. For example, a kernel privilege escalation exploit (like Dirty COW) executed inside a well-insulated container will still result in root access on the host.
• 107. RULE #1 - Do not expose the Docker daemon socket (even to the containers)
Docker socket /var/run/docker.sock is the UNIX socket that Docker is listening to. This is the primary entry point for the Docker API. The owner of this socket is root. Giving someone access to it is equivalent to giving unrestricted root access to your host.
Do not enable the tcp Docker daemon socket. If you are running the docker daemon with -H tcp://0.0.0.0:XXX or similar, you are exposing un-encrypted and unauthenticated direct access to the Docker daemon; if the host is internet connected, this means the docker daemon on your computer can be used by anyone from the public internet. If you really, really have to do this, you should secure it. Check how to do this in the Docker official documentation.
Do not expose /var/run/docker.sock to other containers. If you are running your docker image with -v /var/run/docker.sock:/var/run/docker.sock or similar, you should change it. Remember that mounting the socket read-only is not a solution; it only makes it harder to exploit. The equivalent in a docker-compose file is something like this:

volumes:
  - "/var/run/docker.sock:/var/run/docker.sock"
• 108. RULE #2 - Set a user
Configuring the container to use an unprivileged user is the best way to prevent privilege escalation attacks. This can be accomplished in three different ways:
1. During runtime, using the -u option of the docker run command, e.g.:
docker run -u 4000 alpine
2. During build time. Simply add a user in the Dockerfile and use it. For example:
FROM alpine
RUN groupadd -r myuser && useradd -r -g myuser myuser
<HERE DO WHAT YOU HAVE TO DO AS A ROOT USER LIKE INSTALLING PACKAGES ETC.>
USER myuser
3. Enable user namespace support (--userns-remap=default) in the Docker daemon.
More information about this topic can be found in the Docker official documentation.
• 109.
In kubernetes, this can be configured in Security Context using runAsNonRoot field e.g.:

kind: ...
apiVersion: ...
metadata:
  name: ...
spec:
  ...
  containers:
  - name: ...
    image: ...
    securityContext:
      ...
      runAsNonRoot: true
      ...

As a Kubernetes cluster administrator, you can configure it using Pod Security Policies.
• 110. RULE #3 - Limit capabilities (Grant only specific capabilities, needed by a container)
Linux kernel capabilities are a set of privileges that can be used by privileged processes. Docker, by default, runs with only a subset of capabilities. You can change this and drop some capabilities (using --cap-drop) to harden your docker containers, or add some capabilities (using --cap-add) if needed. Remember not to run containers with the --privileged flag - this will add ALL Linux kernel capabilities to the container.
The most secure setup is to drop all capabilities (--cap-drop all) and then add only the required ones. For example:

docker run --cap-drop all --cap-add CHOWN alpine

And remember: Do not run containers with the --privileged flag!!!
• 111.
In kubernetes this can be configured in Security Context using capabilities field e.g.:

kind: ...
apiVersion: ...
metadata:
  name: ...
spec:
  ...
  containers:
  - name: ...
    image: ...
    securityContext:
      ...
      capabilities:
        drop:
          - all
        add:
          - CHOWN
      ...

As a Kubernetes cluster administrator, you can configure it using Pod Security Policies.
• 112. RULE #4 - Add the --no-new-privileges flag
Always run your docker images with --security-opt=no-new-privileges in order to prevent privilege escalation through setuid or setgid binaries.
In kubernetes, this can be configured in Security Context using the allowPrivilegeEscalation field, e.g.:

kind: ...
apiVersion: ...
metadata:
  name: ...
spec:
  ...
  containers:
  - name: ...
    image: ...
    securityContext:
      ...
      allowPrivilegeEscalation: false
      ...

As a Kubernetes cluster administrator, you can refer to the Kubernetes documentation to configure it using Pod Security Policies.
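On the Docker side, a minimal sketch (alpine is only a placeholder image); the docker-compose equivalent uses the security_opt key:

docker run --security-opt=no-new-privileges:true alpine

# docker-compose equivalent
services:
  app:
    image: alpine
    security_opt:
      - no-new-privileges:true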
• 113. RULE #5 - Disable inter-container communication (--icc=false)
By default, inter-container communication (icc) is enabled, which means that all containers can talk to each other (using the docker0 bridged network). This can be disabled by running the docker daemon with the --icc=false flag. If icc is disabled (icc=false), it is required to explicitly tell which containers can communicate, using the --link=CONTAINER_NAME_or_ID:ALIAS option. See more in Docker documentation - container communication.
In Kubernetes, Network Policies can be used for this.
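As a sketch, the same setting can be made persistent in the daemon configuration rather than on the dockerd command line (the path below is the usual default on Linux; adjust if yours differs):

# /etc/docker/daemon.json
{
  "icc": false
}

# then restart the daemon
sudo systemctl restart docker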
• 114. RULE #6 - Use a Linux Security Module (seccomp, AppArmor, SELinux, ...)
First of all, do not disable the default security profile!
Consider using a security profile like seccomp or AppArmor.
Instructions on how to do this inside Kubernetes can be found in the Security Context documentation and in the Kubernetes API documentation.
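With plain Docker, these profiles are attached through --security-opt; a minimal sketch (the profile path and the nginx image are placeholders):

# Run with a custom seccomp profile
docker run --security-opt seccomp=/path/to/seccomp-profile.json nginx

# Explicitly run with the default AppArmor profile (Debian/Ubuntu hosts)
docker run --security-opt apparmor=docker-default nginx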
• 115. RULE #7 - Limit resources (memory, CPU, file descriptors, processes, restarts)
The best way to avoid DoS attacks is by limiting resources. You can limit memory, CPU, the maximum number of restarts (--restart=on-failure:<number_of_restarts>), the maximum number of file descriptors (--ulimit nofile=<number>) and the maximum number of processes (--ulimit nproc=<number>). Check the documentation for more details about ulimits.
You can also do this inside Kubernetes: Assign Memory Resources to Containers and Pods, Assign CPU Resources to Containers and Pods, and Assign Extended Resources to a Container.
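Put together, a sketch of a resource-limited run using the flags mentioned above (the values are arbitrary examples and my-image is a placeholder):

docker run -d \
  --memory=256m \
  --cpus=0.5 \
  --ulimit nofile=1024:1024 \
  --ulimit nproc=64 \
  --restart=on-failure:5 \
  my-image:latest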
• 116. RULE #8 - Set filesystem and volumes to read-only
Run containers with a read-only filesystem using the --read-only flag. For example:

docker run --read-only alpine sh -c 'echo "whatever" > /tmp'

If an application inside a container has to save something temporarily, combine the --read-only flag with --tmpfs like this:

docker run --read-only --tmpfs /tmp alpine sh -c 'echo "whatever" > /tmp/file'

Equivalent in the docker-compose file will be:

version: "3"
services:
  alpine:
    image: alpine
    read_only: true
• 117.
Equivalent in kubernetes in Security Context will be:

kind: ...
apiVersion: ...
metadata:
  name: ...
spec:
  ...
  containers:
  - name: ...
    image: ...
    securityContext:
      ...
      readOnlyRootFilesystem: true
      ...

In addition, if a volume is mounted only for reading, mount it as read-only. This can be done by appending :ro to the -v option, like this:

docker run -v volume-name:/path/in/container:ro alpine

Or by using the --mount option:

docker run --mount source=volume-name,destination=/path/in/container,readonly alpine
• 118. RULE #9 - Use static analysis tools
To detect containers with known vulnerabilities - scan images using static analysis tools.
● Free
  ● Clair
  ● ThreatMapper
  ● Trivy
● Commercial
  ● Snyk (open source and free option available)
  ● anchore (open source and free option available)
  ● Docker Scout (open source and free option available)
  ● JFrog XRay
  ● Qualys
To detect secrets in images:
● ggshield (open source and free option available)
● SecretScanner (open source)
  • 119. To detect misconfigurations in Kubernetes: ● kubeaudit ● kubesec.io ● kube-bench To detect misconfigurations in Docker: ● inspec.io ● dev-sec.io ● Docker Bench for Security
  • 120. RULE #10 - Set the logging level to at least INFO By default, the Docker daemon is configured to have a base logging level of 'info', and if this is not the case: set the Docker daemon log level to 'info'. Rationale: Setting up an appropriate log level, configures the Docker daemon to log events that you would want to review later. A base log level of 'info' and above would capture all logs except the debug logs. Until and unless required, you should not run docker daemon at the 'debug' log level. To configure the log level in docker-compose: docker-compose --log-level info up
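For the daemon itself, the log level can also be set in its configuration file; a minimal sketch (default path on Linux, adjust as needed):

# /etc/docker/daemon.json
{
  "log-level": "info"
}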
  • 121. Rule #11 - Lint the Dockerfile at build time Many issues can be prevented by following some best practices when writing the Dockerfile. Adding a security linter as a step in the build pipeline can go a long way in avoiding further headaches. Some issues that are worth checking are: ● Ensure a USER directive is specified ● Ensure the base image version is pinned ● Ensure the OS packages versions are pinned ● Avoid the use of ADD in favor of COPY ● Avoid curl bashing in RUN directives References: ● Docker Baselines on DevSec ● Use the Docker command line ● Overview of docker-compose CLI ● Configuring Logging Drivers ● View logs for a container or service ● Dockerfile Security Best Practices
• 122. Rule #12 - Run Docker in root-less mode
Rootless mode ensures that the Docker daemon and containers run as an unprivileged user, which means that even if an attacker breaks out of the container, they will not have root privileges on the host, which in turn substantially limits the attack surface. It mitigates potential vulnerabilities in both the daemon and the container runtime.
Rootless mode does not require root privileges even during the installation of the Docker daemon, as long as the prerequisites are met. It was introduced in Docker Engine v19.03 as an experimental feature and graduated from experimental in Docker Engine v20.10; it should be considered for added security, provided the known limitations are not an impediment.
Read more about rootless mode and its limitations, installation and usage instructions on the Docker documentation page.
  • 123. Open Worldwide Application Security Project (OWASP) & DOCKER ● https://qwiet.ai/an-introduction-to-the-owasp-docker-top-10/ ● https://github.com/OWASP/Docker-Security
  • 124. What are the threats to Docker containers? The OWASP team breaks down the eight main threats into two primary categories of attacks to: ● Host via network services, protocol flaw, or kernel exploit ● Orchestration via network management backplane The first five threats all start with the same initial attack vector, where attackers escape the application and container. However, from there, they engage in different behaviors: ● Container escape: Kernel exploit to control all containers running on the host ● Other containers via network: Using shell access to attack another container through the network. ● Attacking orchestration tool via network: Using shell access then attacking the management interfaces or other orchestration tools’ attack surfaces ● Attacking the host via network: Using shell access and attacking an open port from the host ● Attacking other resources via network: Using shell access and finding a network-based vulnerability to exploit The last three threats cover attacks with different initial vectors: ● Resource starvation: Exploiting a security condition from another container running on the same host ● Host compromise: Compromising the host either through another container or the network
• 125. OWASP Docker Top 10
To protect Docker containers - or really any container, if you can abstract away the Docker-specific language OWASP uses - you can implement the security controls outlined below.
D01 – Secure User Mapping
Applications should never run as root because when attackers escape the application, the privileges will follow them. You should run all microservices with the least privilege possible. To ensure this, you should:
● Never use the --privileged flag
● Configure the appropriate parameters for all user IDs or use Linux user namespaces
  • 126. D02 – Patch Management Strategy The host, containment technology, orchestration solution, and minimal operating system images may have security vulnerabilities that attackers can exploit. You should patch often and automate the process. If you are establishing a patch management strategy, you should: ● Specify a time span for “regular basis” ● Create policies or processes for each patch domain ● Execute patches and monitor for success or failure ● Define a policy for critical patches that can’t wait until the next scheduled patch
  • 127. D03 – Network Segmentation and Firewalling You should implement a multilayer network defense that denies all access by default and provides access on a case- by-case basis. When planning your network segmentation and firewall strategy, you should: ● Ensure each tenant is on a different network ● Define necessary communication ● Prevent management frontends/APIs from being exposed to the internet ● Use strict allow-list rules for your management backplane ● Protect host services the same as your management frontends/APIs For an orchestrated environment, you should have: ● An inbound network and routing policy ● An outbound network and routing policy that restricts downloads from the internet as much as possible ● Determine necessary container inter-communication
  • 128. D04 – Secure Defaults and Hardening You should identify and disable all unnecessary network services across interfaces from the following: ● Orchestration tool, like dashboard, etcd, API ● Host, like RPC services, OpenSSHD, avahi, network-based systemd-services ● Container, from the microservice (e.g. spring-boot) or distribution At the orchestration and host levels, you should identify all services and then review the following: ● Does disabling/stopping it affect the operation? ● Can it be started only on the localhost interface or any other network interface? ● Is authentication configured according to the principle of least privilege? ● Are there configuration options that narrow down the access to this service? ● Are there any known design flaws? ● Are there any known vulnerabilities? At the container level, you should: ● Uninstall any unnecessary packages ● Review for defective syscalls that can affect the host kernel’s security ● Disable SUID/SGID bits
  • 129. D05 – Maintain Security Contexts Your different environments require different levels of security. You should separate development and testing environments from the production environment. To do this, you should: ● Place production containers on a separate host system and restrict access ● Identify sensitive data types that require additional protection and separate containers accordingly ● Ensure that databases, middleware, authentication services, frontend, and master components are on different hosts ● Use Virtual Machines (VMs) to separate different security contexts
• 130. D06 – Protect Secrets
To protect access to a microservice, you should secure passwords, tokens, private keys, and certificates, for example with:
● Hashicorp Vault
● Red Hat Ansible Vault
● Passbolt
● etc.
  • 131. D07 – Resource Protection Since containers share physical CPU, disks, memory, and network, you need to secure these physical resources to prevent one container from impacting other containers’ resources. To protect resources, you should: ● Limit the amount of memory a container can use ● Limit the amount of CPU a container can use
  • 132. D08 – Container Image Integrity and Origin For the container that runs your code, you should choose a minimal operating system from a trustworthy resource. Additionally, you should scan and monitor all transfers and images at rest.
  • 133. D09 – Follow Immutable Paradigm Since deployed container images rarely need to write into their filesystem or a mounted filesystem, you can implement additional security by starting them in read-only mode.
  • 134. D10 – Logging To trace all activity, you should log all relevant security events for container images, orchestration tools, and hosts at the system and API levels. Additionally, your application should provide remote logging.
  • 135. Qwiet AI: Integrating Docker Container Security into Development Processes With preZero, you can scan all the containers that your applications use and correlate these results with the rest of your application scan. You can integrate the preZero platform into your current CI/CD pipelines, ticketing systems, and development tools. By building security directly into your current processes, our platform enables you to incorporate container security into your secure software development life cycle (SSDLC) processes while still ensuring that you get the speed you need to deliver software on time. The Qwiet AI platform gives you visibility into the context around vulnerabilities so that you can effectively prioritize remediation actions based on whether attackers can exploit a weakness in your application and account for whether attackers are currently exploiting that vulnerability in the wild.
• 136. 1. Keep Host and Docker Up to Date
It is essential to patch both Docker Engine and the underlying host operating system running Docker, to prevent a range of known vulnerabilities, many of which can result in container escapes. Since the kernel is shared by the container and the host, kernel exploits, when an attacker manages to run code in a container, can directly affect the host. For example, a successful kernel exploit can enable attackers to break out of a non-privileged container and gain root access to the host.
• 137. 2. Do Not Expose the Docker Daemon Socket
The Docker daemon socket is a Unix network socket that facilitates communication with the Docker API. By default, this socket is owned by the root user. If anyone else obtains access to the socket, they will have permissions equivalent to root access to the host.
Take note that it is possible to bind the daemon socket to a network interface, making the Docker container available remotely. This option should be enabled with care, especially in production containers.
To avoid this issue, follow these best practices:
● Never make the daemon socket available for remote connections, unless you are using Docker's encrypted HTTPS socket, which supports authentication.
● Do not run Docker images with an option like -v /var/run/docker.sock:/var/run/docker.sock, which exposes the socket in the resulting container.
• 138. 3. Run Docker in Rootless Mode
Docker provides "rootless mode", which lets you run Docker daemons and containers as non-root users. This is extremely important to mitigate vulnerabilities in daemons and container runtimes, which can grant root access of entire nodes and clusters to an attacker.
Rootless mode runs Docker daemons and containers within a user namespace. This is similar to the userns-remap mode, but unlike it, rootless mode runs daemons and containers without root privileges by default.
To run Docker in rootless mode:
1. Install Docker in root mode - see instructions.
2. Use the following commands to launch the daemon when the host starts:
systemctl --user enable docker
sudo loginctl enable-linger $(whoami)
3. Here is how to run a container as rootless using a Docker context:
docker context use rootless
docker run -d -p 8080:80 nginx
• 139. 4. Avoid Privileged Containers
Docker provides a privileged mode, which lets a container run as root on the local machine. Running a container in privileged mode provides the capabilities of that host - including:
● Root access to all devices
● Ability to tamper with Linux security modules like AppArmor and SELinux
● Ability to install a new instance of the Docker platform, using the host's kernel capabilities, and run Docker within Docker.
Privileged containers create a major security risk - enabling attackers to easily escalate privileges if the container is compromised. Therefore, it is not recommended to use privileged containers in a production environment. Best of all, never use them in any environment.
To check if a container is running in privileged mode, use the following command (it returns true if the container is privileged, or false if not):

docker inspect --format='{{.HostConfig.Privileged}}' [container_id]
  • 140. 5. Limit Container Resources When a container is compromised, attackers may try to make use of the underlying host resources to perform malicious activity. Set Docker memory and CPU usage limits to minimize the impact of breaches for resource-intensive containers. In Docker, the default setting is to allow the container to access all RAM and CPU resources on the host. It is important to set resource quotas, to limit the resources your container can use—for security reasons, and to ensure each container has the appropriate resources and does not disrupt other services running on the host.
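For example (the container name, image, and limit values are illustrative):

# Hard RAM cap with no extra swap, at most one CPU core, and a PID limit against fork bombs
docker run -d --name api --memory=512m --memory-swap=512m --cpus=1.0 --pids-limit=100 my-api:1.0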
  • 141. 6. Segregate Container Networks Docker containers require a network layer to communicate with the outside world through the network interfaces on the host. The default bridge network exists on all Docker hosts—if you do not specify a different network, new containers automatically connect to it. It is strongly recommended not to rely on the default bridge network—use custom bridge networks to control which containers can communicate with each other, and to enable automatic DNS resolution from container name to IP address. You can create as many networks as you need and decide which networks each container should connect to (if at all). Ensure that containers can connect to each other only if absolutely necessary, and avoid connecting sensitive containers to public-facing networks. Docker provides network drivers that let you create your own bridge network, overlay network, or macvlan network. If you need more control, you can create a Docker network plugin.
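A minimal sketch (the network and container names are illustrative):

# Create a user-defined bridge network and attach only the containers that need each other
docker network create --driver bridge backend
docker run -d --name db  --network backend postgres:16
docker run -d --name app --network backend my-app:1.0
# "app" can reach "db" by name via the built-in DNS; containers on other networks cannot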
  • 142. 7. Improve Container Isolation Operations teams should create an optimized environment to run containers. Ideally, the operating system on a container host should protect the host kernel from container escapes, and prevent mutual influence between containers. Containers are Linux processes with isolation and resource limitations, running on a shared operating system kernel. Protecting a container is essentially the same as protecting any process running on Linux. You can use one or more of the following Linux security capabilities: ● Linux namespaces Namespaces make Linux processes appear to have access to their own, separate global resources. They give a containerized process the impression that it is running on its own operating system, and they are the basis of container isolation. ● SELinux For Red Hat Linux distributions, SELinux provides an additional layer of security to isolate containers from each other and from the host. It allows administrators to apply mandatory access controls for users, applications, processes and files. It is a second line of defense that will stop attackers who manage to breach the namespace abstraction.
  • 143. ● AppArmor For Debian Linux distributions, AppArmor is a Linux kernel enhancement that can limit programs in terms of the system resources they can access. It binds access control attributes to specific programs, and is controlled by security profiles loaded into the kernel at boot time. ● Cgroups Cgroups limit, account for, and isolate the resource usage of a group of processes, including CPU, memory, disk I/O, and networking. You can use cgroups to prevent container resources from being used by other containers on the same host, and at the same time, stop attackers from creating pseudo devices. ● Capabilities Linux allows you to limit the privileges of any process, containers included. Linux provides “capabilities”, which are specific privileges that can be enabled for each process. When running a container, you can usually deny numerous capabilities without affecting the containerized application (a minimal example follows below). ● Seccomp The secure computing mode (seccomp) in the Linux kernel lets you transition a process to a secure mode, in which it is only allowed to perform a small set of safe system calls. Setting a seccomp profile for a container provides one more level of defense against compromise.
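A minimal capabilities sketch (the image name and the retained capability are illustrative; adjust to what your workload actually needs):

# Drop every capability, add back only what is required, and block privilege escalation
docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --security-opt no-new-privileges:true my-web:1.0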
  • 144. 8. Set Filesystem and Volumes to Read-only A simple and effective security trick is to run containers with a read-only filesystem. This can prevent malicious activity such as deploying malware on the container or modifying configuration.
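For example (the image name and writable paths are illustrative):

# Root filesystem read-only; give the app tmpfs scratch space and an explicit data volume
docker run -d --read-only --tmpfs /tmp --tmpfs /run \
  -v app-data:/var/lib/app my-app:1.0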
  • 145. 9. Complete Lifecycle Management Cloud native security requires security controls and mitigation techniques at every stage of the application lifecycle, from build to workload and infrastructure. Follow these best practices: ● Implement vulnerability scanning to ensure clean code at all stages of the development lifecycle. ● Use a sandbox environment where you can QA your code before it goes into production, to ensure there is nothing malicious that will deploy at runtime. ● Implement drift prevention to ensure container immutability. ● Create an incident response process to ensure rapid response in the case of an attack. ● Apply automated patching. ● Ensure you have robust auditing and forensics for quick troubleshooting and compliance reporting.
  • 146. 10. Restrict System Calls from Within Containers In a container, you can choose to allow or deny any system calls. Not all system calls are required to run a container. With this in mind, you can monitor the container, obtain a list of all system calls made, explicitly allow those calls and no others. It is important to base your configuration on observation of the container at runtime, because you may not be aware of the specific system calls used by your container’s components, and how those calls are named in the underlying operating system.
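A sketch of applying a custom seccomp allow-list (the profile path is hypothetical; build the profile from the system calls you actually observed at runtime):

# Apply a restrictive seccomp profile to the container
docker run -d --security-opt seccomp=/path/to/allowed-syscalls.json my-app:1.0
# Never ship seccomp=unconfined to production - it removes the default filter entirely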
  • 147. 11. Scan and Verify Container Images Docker container images must be tested for vulnerabilities before use, especially if they were pulled from public repositories. Remember that a vulnerability in any component of your image will exist in all containers you create from it. If you use a base image to create new images, any vulnerability in the base image will extend to your new images. Container image scanning is the process of analyzing the content and composition of images to detect security issues, misconfigurations or vulnerabilities. Images containing software with security vulnerabilities are susceptible to attacks during container runtime. If you build images in a CI pipeline, scan each image as part of the build, before it is used. Images with vulnerabilities that exceed a severity threshold should fail the build, and unsafe images should not be pushed to a container registry accessible by production systems. There are many open source and proprietary image scanners available. A comprehensive solution can scan the operating system (if the container runs a stripped-down Linux distribution), the specific libraries running within the container, and their dependencies. Ensure the scanner supports the languages used by the components in your image. Most container scanning tools use multiple Common Vulnerabilities and Exposures (CVE) databases, and test whether those CVEs are present in a container image. Some tools can also test a container image for security best practices and misconfigurations.
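As an illustration with two common scanners (the choice of tools and the image name are assumptions, not part of the original text):

# Scan locally and surface only the serious findings
trivy image --severity HIGH,CRITICAL my-app:1.0
docker scout cves my-app:1.0

# In CI, fail the build when the severity threshold is exceeded
trivy image --exit-code 1 --severity CRITICAL my-app:1.0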
  • 148. 12. Use Minimal Base Images Docker images are commonly built on top of “base images”. While this is convenient, because it avoids having to configure an image from scratch, it raises security concerns. You may use a base image with components that are not really required for your purposes. A common example is using a base image with a full Debian Stretch distribution, whereas your specific project does not really require operating system libraries or utilities. Remember that any additional component added to your images expands the attack surface. Carefully select base images to ensure they suit your purposes, and if necessary, build your own minimal base image.
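A minimal sketch, assuming a small Python service (the image and file names are illustrative):

# Prefer a minimal runtime base over a full distribution image
FROM python:3.12-alpine
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]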
  • 149. 13. Don’t Leak Sensitive Info to Docker Images Docker images often require sensitive data for their normal operations, such as credentials, tokens, SSH keys, TLS certificates, database names or connection strings. In other cases, applications running in a container may generate or store sensitive data. Sensitive information should never be hardcoded into the Dockerfile—it will be copied to Docker containers, and may be cached in intermediate container layers, even if you attempt to delete them. Container orchestrators like Kubernetes and Docker Swarm provide a secrets management capability which can solve this problem. You can use secrets to manage sensitive data a container needs at runtime, without storing it in the image or in source code.
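Two hedged sketches: a BuildKit build-time secret and a Swarm runtime secret (the secret names, files, and helper script are hypothetical):

# syntax=docker/dockerfile:1
# The token is mounted only for this RUN step and is never stored in an image layer
RUN --mount=type=secret,id=api_token \
    API_TOKEN=$(cat /run/secrets/api_token) ./fetch-dependencies.sh

# Build-time:
docker build --secret id=api_token,src=./api_token.txt .

# Runtime (Swarm):
docker secret create db_password ./db_password.txt
docker service create --name app --secret db_password my-app:1.0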
  • 150. 14. Use Multi Stage Builds To build containerized applications in a consistent manner, it is common to use multi-stage builds. This has both operational and security advantages. In a multi-stage build, you create an intermediate container that contains all the tools you need to compile or generate the final artifact. At the last stage, only the generated artifacts are copied to the final image, without any development dependencies or temporary build files. A well-designed multi-stage build contains only the minimal binary files and dependencies required for the final image, with no build tools or intermediate files. This significantly reduces the attack surface. In addition, a multi-stage build gives you more control over the files and artifacts that go into a container image, making it more difficult for attackers or insiders to add malicious or untested artifacts without permission.
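A sketch that mirrors the Go example used earlier in this deck, assuming the same go.mod/go.sum layout:

# syntax=docker/dockerfile:1
# Build stage: contains the Go toolchain, sources, and caches
FROM golang:1.21 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/server ./cmd/server

# Final stage: only the compiled binary, no compiler or intermediate files
FROM gcr.io/distroless/static-debian12
COPY --from=build /bin/server /bin/server
ENTRYPOINT ["/bin/server"]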
  • 151. 15. Secure Container Registries Container registries are highly convenient, letting you download container images at the click of a button, or automatically as part of development and testing workflows. However, together with this convenience comes a security risk. There is no guarantee that the image you are pulling from the registry is trusted. It may unintentionally contain security vulnerabilities, or may have intentionally been replaced with an image compromised by attackers. The solution is to use a private registry deployed behind your own firewall, to reduce the risk of tampering. To add another layer of protection, ensure that your registry uses Role Based Access Control (RBAC) to restrict which users can upload and download images from it. Avoid giving open access to your entire team—this simplifies operations, but increases the risk that a team member, or an attacker compromising their account, can introduce unwanted artifacts into an image.
  • 152. 16. Use Fixed Tags for Immutability Tags are commonly used to manage versions of Docker images. For example, a latest tag is used to indicate that this is the latest version of an image. However, because tags can be changed, it is possible for several images to have a latest tag, causing confusion and inconsistent behavior in automated builds. There are three main strategies for ensuring tags are immutable and are not affected by subsequent changes to the image: ● Preferring a more specific tag—if an image has several tags, a build process should select the tag containing the most information (e.g. both version and operating system). ● Keeping a local copy of images—for example, in a private repository, and confirming that tags are the same as those in the local copy. ● Signing images—Docker offers a Content Trust mechanism that allows you to cryptographically sign images using a private key. This guarantees the image, and its tags, have not been modified.
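Two hedged sketches: pinning an image by content digest, and enabling Docker Content Trust (the digest is a placeholder, not a real value):

# Pin the exact image content, not a mutable tag
FROM ubuntu:22.04@sha256:<digest>

# Only pull/push images signed with Docker Content Trust
export DOCKER_CONTENT_TRUST=1
docker pull ubuntu:22.04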
  • 153. 17. Add the HEALTHCHECK Instruction to the Container Image The HEALTHCHECK instruction tells Docker to periodically test a container to check that it is still working. If the check fails, Docker marks the container as unhealthy, and an orchestrator (such as Docker Swarm) can automatically restart or replace it. This allows your Docker environment to respond automatically to issues that affect container availability or security. Implementing the HEALTHCHECK instruction is straightforward: add a command to your Dockerfile that Docker can execute to check the health of your container. This command could be as simple as checking whether a particular service is responding, or as complex as running a script that tests various aspects of your container.
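For example (the endpoint, port, and timings are illustrative, and curl must be present in the image):

HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1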
  • 154. 18. Use COPY Instead of ADD When Writing Dockerfiles COPY and ADD are two commands you can use in your Dockerfiles to add elements to your container. The main difference between them is that ADD has some additional features—for example, it can automatically extract compressed files, and can download files from a URL. These additional features in the ADD command can lead to security vulnerabilities. For example, if you use ADD to download a file from a URL, and that URL is compromised, your Docker container could be infected with malware. Therefore, it is more secure to use COPY in your Dockerfiles.
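A short sketch of the safer pattern (the file names are illustrative):

# Prefer COPY plus explicit, verifiable steps over ADD's implicit behavior
COPY app.tar.gz /tmp/
RUN tar -xzf /tmp/app.tar.gz -C /app && rm /tmp/app.tar.gz

# Avoid: ADD https://example.com/app.tar.gz /app/   (remote fetch with no integrity check of its own)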
  • 155. 19. Monitor Container Activity Visibility and monitoring are critical to smooth operation and security of Docker containers. Containerized environments are dynamic, and close monitoring is required to understand what is running in your environment, identify anomalies and respond to them. Each container image can have multiple running instances. Due to the speed at which new images and versions are deployed, issues can quickly propagate across containers and applications. Therefore, it is critical to identify problems early and remediate them at the source—for example, by identifying a faulty image, fixing it, and rebuilding all containers using that image. Put tools and practices in place that can help you achieve observability of the following components: ● Docker hosts ● Container engines ● Master nodes (if running an orchestrator like Kubernetes) ● Containerized middleware and networking ● Workloads running in containers In large-scale environments, this can only be achieved with dedicated cloud-native monitoring tools.
  • 156. 20. Secure Containers at Runtime At the center of the cloud native stack are workloads, always a prized asset for hackers. The ability to stop an attack in progress is of utmost importance but few organizations are effectively able to stop an attack or zero-day exploit as it happens, or before it happens. Runtime security for Docker containers involves securing your workload, so that once a container is running, drift is not possible, and any malicious action is blocked immediately. Ideally, this should be done with minimal overhead and rapid response time. Implement drift prevention measures to stop attacks in progress and prevent zero day exploits. In addition, use automated vulnerability patching and management to provide another layer of runtime security.
  • 157. 21. Save Troubleshooting Data Separately from Containers If your team needs to log into your containers using SSH for every maintenance operation, this creates a security risk. You should design a way to maintain containers without needing to directly access them. A good way to do this and limit SSH access is to make the logs available outside the container. In this way, administrators can troubleshoot containers without logging in. They can then tear down existing containers and deploy new ones, without ever establishing a connection.
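One way to keep troubleshooting data outside the container is a remote logging driver (the driver, endpoint, and names are illustrative):

# Ship container logs to a central syslog endpoint instead of exec/SSH-ing into the container
docker run -d --name my-app --log-driver=syslog \
  --log-opt syslog-address=udp://logs.internal:514 my-app:1.0

# Or read logs from the host without entering the container
docker logs --tail 100 my-app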
  • 158. 22. Use Metadata Labels for Images Container labeling is a common practice, applied to objects like images, deployments, Docker containers, volumes, and networks. Use labels to add information to containers, such as licensing information, sources, names of authors, and relation of containers to projects or components. They can also be used to categorize containers and their contents for compliance purposes, for example labeling a container as containing protected data. Labels are commonly used to organize containerized environments and automate workflows. However, when workflows rely on labels, errors in applying a label can have severe consequences. To address this concern, automate labeling processes as much as possible, and carefully control which users and roles are allowed to assign or modify labels.
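A sketch using the standard OCI label keys (the values are illustrative):

# Dockerfile metadata labels
LABEL org.opencontainers.image.authors="team@example.com" \
      org.opencontainers.image.source="https://example.com/my-project" \
      org.opencontainers.image.licenses="Apache-2.0" \
      com.example.data-classification="protected"

# Later, filter containers by label for compliance checks
docker ps --filter "label=com.example.data-classification=protected"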
  • 159. Host Configuration ● Create a separate partition for containers ● Harden the container host ● Update your Docker software on a regular basis ● Manage Docker daemon access authorization wisely ● Set proper permissions on Docker-related files and directories, and ● Audit all Docker daemon activity.
  • 160. Docker Daemon Configuration ● Restrict network traffic between containers on the default bridge, and restrict containers from acquiring new privileges. ● Enable user namespace support for additional isolation, along with Docker client command authorization, live restore, and default cgroup usage ● Disable legacy registry operations and the userland proxy ● Avoid networking misconfiguration by allowing Docker to make changes to iptables, and avoid experimental features in production. ● Configure TLS authentication for the Docker daemon, plus centralized and remote logging. ● Set the logging level to 'info', and set an appropriate default ulimit ● Don’t use insecure registries or the aufs storage driver ● Apply a base device size for containers and a daemon-wide custom seccomp profile to limit system calls.
  • 161. Container Images and Build File ● Create a user for the container ● Ensure containers use only trusted images ● Ensure unnecessary packages are not installed in the container ● Include security patches during scans and rebuilding processes ● Enable content trust for Docker ● Add HEALTHCHECK instructions to the container image ● Remove setuid and setgid permissions from the images ● Use COPY instead of ADD in Dockerfiles ● Install only verified packages ● Don’t use update instructions (such as apt-get update) alone on a single line in the Dockerfile—combine them with the install step ● Don’t store secrets in Dockerfiles
  • 162. Container Runtime ● Restrict containers from acquiring additional privileges and restrict Linux kernel capabilities. ● Enable an AppArmor profile. ● Avoid using privileged containers at runtime, running ssh within containers, and mapping privileged ports within containers. ● Ensure sensitive host system directories aren’t mounted on containers, the container's root filesystem is mounted as read-only, and the Docker socket is not mounted inside any container. ● Set an appropriate CPU priority for the container, set the 'on-failure' container restart policy with a maximum of 5 retries, and open only necessary ports on the container. ● Apply SELinux security options where needed, and override the default ulimit at runtime only when required. ● Don’t share the host's network, process, IPC, UTS, or user namespaces, and don’t change the default mount propagation mode. ● Limit memory usage for containers and bind incoming container traffic to a specific host interface. ● Don’t expose host devices directly to containers, don’t disable the default seccomp profile, don’t use docker exec commands with the privileged or user options, and don’t use Docker's default bridge docker0. ● Confirm cgroup usage and use the PIDs cgroup limit, check container health at runtime, and ensure containers always run the latest version of their image.
  • 163. Docker Security Operations Avoid image sprawl and container sprawl.
  • 164. Docker Swarm Configuration ● Enable swarm mode only if needed ● Create a minimum number of manager nodes in a swarm ● Bind swarm services to a specific host interface ● Encrypt containers' data exchange on different overlay network nodes ● Manage secrets in a Swarm cluster with Docker's secret management commands ● Run the swarm manager in auto-lock mode ● Rotate the swarm manager auto-lock key periodically ● Rotate node and CA certificates as needed ● Separate management plane traffic from data plane traffic
  • 165. Docker Forensics This repo contains a toolkit for performing post-mortem analysis of Docker runtime environments based on forensic HDD copies of the docker host system. ● dof (Docker Forensics Toolkit) - Extracts and interprets forensic artifacts from disk images of Docker Host systems ● https://github.com/docker-forensics-toolkit/toolkit
  • 166. Docker explorer This project helps a forensics analyst explore offline Docker filesystems. This is not an officially supported Google product. ● https://github.com/google/docker-explorer?tab=readme-ov-file
  • 167. Container explorer Container Explorer (container-explorer) is a tool to explore containers of a disk image. Container Explorer supports exploring containers managed using containerd and docker container runtimes. Container Explorer attempts to provide the familiar output generated by tools like ctr and docker. Container Explorer provides the following functionalities: ● Exploring namespaces ● Exploring containers ● Exploring images ● Exploring snapshots ● Exploring contents ● Mounting containers ● Support JSON output ● https://github.com/google/container-explorer
  • 168. For more information : DOCKER vs PODMAN
  • 169. Podman is an open-source container runtime management tool that has gained popularity as an alternative to Docker. It originates from the broader container ecosystem in the Linux world and provides a lightweight, secure, and efficient environment for managing containers. Podman addresses several key problems faced by developers and administrators. Firstly, it allows users to run containers without requiring a daemon (system service), eliminating the need for a root process, enhancing security, and providing a more streamlined experience. Additionally, it offers a familiar command-line interface, allowing users to easily transition from Docker and leverage existing container management knowledge. Furthermore, it provides improved compatibility with the Open Container Initiative (OCI) standards, enabling better interoperability with other container tools and platforms.
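In practice, the CLI is close enough to Docker that most commands carry over unchanged (a hedged sketch; the alias is a common convenience, not a requirement):

# Rootless by default, no daemon required
podman run -d -p 8080:80 nginx
podman ps
# Many teams simply alias the familiar command
alias docker=podman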
  • 174. Docker Vs Podman Vs Containerd Vs CRI-O Exploring the key roles of container runtimes in modern software deployment, this comparison navigates the unique features of four popular technologies : ● Docker : A comprehensive platform that enables developers to build, share, and run containers with an easy-to-use CLI and a daemon-based architecture. ● Podman : A daemonless container engine for developing, managing, and running OCI Containers on your Linux System, with a CLI similar to Docker. ● Containerd : An industry-standard core container runtime, focused on simplicity and robustness, providing the minimum functionalities required to run containers and manage images on a system. ● CRI-O : A lightweight container runtime specifically designed for Kubernetes, providing an implementation of the Kubernetes Container Runtime Interface (CRI) to allow OCI compatible runtimes to be used in Kubernetes clusters.
  • 177. Official Resources: Docker Documentation: https://docs.docker.com/ Docker Get Started: https://docs.docker.com/get-started/ Docker Labs: https://dockerlabs.collabnix.com/ Play with Docker: https://labs.play-with-docker.com/ Docker Hub: https://hub.docker.com/ Katacoda Labs: https://katacoda.com/ Docker Awesome: https://github.com/docker/awesome-compose Docker Cheat Sheet: https://devhints.io/docker Docker Blog: https://www.docker.com/blog/