Containers and Orchestration Create New Vulnerabilities
Over the last few years we have seen a dramatic rise in the use of containers and
container orchestration systems for the coordination and management of cloud
services. Among other things, containers allow for rapid deployment, ephemeral
workloads, and autoscaling of applications at scale. For organizations that work
in an agile way and deploy services continuously, containers have become an
enormously popular piece of infrastructure. Popular container orchestration
platforms include Kubernetes, Docker Swarm, OpenShift, and Mesosphere.
Containers are a new and important component of modern environments, but
because they still live on shared hosts and in cloud accounts that face similar
threat vectors, their security cannot be treated in isolation. Lacework provides
a holistic approach to container security, supporting containers natively while
also securing the hosts and AWS accounts that, if compromised, can cause
even larger-scale damage to any container environment.
Many organizations rely on containers to help them orchestrate among
applications and data sources, and as this approach grows, security teams are
discovering a corresponding increase in their overall threat surface. The people
interviewed in this book offer compelling evidence that while containers provide
distinct advantages for workloads and applications, they also require focused,
automated security to remain safe.
Lacework is a SaaS platform that
automates threat defense, intrusion
detection, and compliance for cloud
workloads & containers. Lacework
monitors all your critical assets in
the cloud and automatically detects
threats and anomalous activity so
you can take action before your
company is at risk. The result?
Deeper security visibility and greater
threat defense for your critical cloud
workloads, containers, and IaaS
accounts. Based in Mountain View,
California, Lacework is a privately
held company funded by Sutter Hill
Ventures, Liberty Global Ventures,
Spike Ventures, the Webb Investment
Network (WIN), and AME Cloud
Ventures. Find out more at www.
TABLE OF CONTENTS
Ross Young, Director of Information Security, Capital One
Paul Dackiewicz, Lead Security Consulting Engineer, Advanced Network
Katherine Riley, Director of Information Security & Compliance
Darrell Shack, Cloud Engineer, Cox Automotive Inc.
Mauro Loda, Senior Security Architect
James P. Courtney, Certified Chief Information Security Officer, Courtney Consultants, LLC
Milinda Rambel Stone, Vice President & CISO, Provation Medical
“CONTAINERS RUNNING SERVICES
OR APPLICATIONS ARE
OFTEN OVERPRIVILEGED FOR THE
FUNCTIONS THEY PERFORM.”
There’s a lot to like about containers, but also a lot not to like from
a security perspective. For one thing, they make the environment
considerably more complex, which introduces potential vulnerabilities.
For example, let’s say you have a normal Amazon EC2 server running
something like a Linux-based operating system. Then you have to install
a Docker engine on top of that. Now you have two types of vulnerability:
one is whether you keep your host operating system (OS) patched and up
to date, and the other is whether you have configured your Docker
engine correctly. Then if you install two applications as containers,
the challenge becomes how you check to see if things are operating
as they should. Historically one might look at network traffic from one
EC2 instance to another. But in this simple example, there's no network
traffic leaving that EC2 instance. You need better tools capable of inter-
container monitoring of activity within one EC2 instance, and more inter-
container access control and authentication.
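The visibility gap described above can be made concrete with a small sketch: given an inventory of which containers run on which EC2 instance, flag the container pairs whose mutual traffic stays inside a single instance and is therefore invisible to instance-to-instance network monitoring. The inventory format here is hypothetical.

```python
from itertools import combinations

# Hypothetical inventory: container name -> EC2 instance it runs on.
inventory = {
    "web": "i-0abc", "api": "i-0abc",   # co-located on one instance
    "db": "i-0def",
}

def intra_host_pairs(inventory):
    """Return container pairs whose mutual traffic never leaves an
    instance, so instance-level network monitoring cannot observe it."""
    return [
        (a, b) for a, b in combinations(sorted(inventory), 2)
        if inventory[a] == inventory[b]
    ]

print(intra_host_pairs(inventory))  # [('api', 'web')]
```

Any pair flagged this way is a candidate for the inter-container monitoring and access control the passage calls for.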
Ross Young, Director of Information Security, Capital One
Ross Young is a veteran
technologist, innovation expert,
and transformational leader, having
learned DevSecOps, IT infrastructure,
and cybersecurity from a young
age from both ninjas and pirates.
Young currently teaches master-level
classes in cybersecurity at Johns
Hopkins University and is a director of
information security at Capital One.
Another problem is that containers running services or applications are often overprivileged for the
functions they perform. For instance, they are often set up with admin privileges for an application that
doesn’t require those privileges. That means they now have the ability to see everything in the host OS,
and also see other containers that are on that same EC2 instance, including data. Solving this requires tools that
run the service with the least privileges it needs so that it can’t break out of its container and get to the host.
Another best practice that has started to evolve is using very small containers with minimum necessary
privileges, and making them read-only containers so they can’t be changed. If you get hacked, the
container still runs as intended.
Ultimately, developers need to incorporate security to the point where they create security policy as code.
This involves using tools that do security scanning during development and give developers instant
feedback about vulnerabilities.
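A minimal sketch of what “security policy as code” can look like, assuming a simplified container spec format: each policy is a named predicate over the spec, and violations are reported immediately so developers get feedback during the build. Field names are illustrative, not any particular tool’s schema.

```python
# Hypothetical, simplified container spec; real specs (Dockerfile,
# pod YAML) carry similar fields under different names.
spec = {"user": "root", "read_only": False, "capabilities": ["NET_ADMIN"]}

# Policy as code: each rule is a name plus a predicate over the spec.
POLICIES = [
    ("run-as-non-root", lambda s: s.get("user") != "root"),
    ("read-only-filesystem", lambda s: s.get("read_only") is True),
    ("no-extra-capabilities", lambda s: not s.get("capabilities")),
]

def scan(spec):
    """Return the names of policies the spec violates."""
    return [name for name, ok in POLICIES if not ok(spec)]

print(scan(spec))  # this overprivileged spec fails all three rules
```

Run as part of the build, a check like this turns the overprivilege and read-only recommendations above into failing tests rather than review comments.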
“MANY PEOPLE DON’T REALIZE
THE POTENTIAL FOR HAVING
A SINGLE POINT OF FAILURE
WITH MULTIPLE CONTAINERS
GOING DOWN.”
The easier it is to deploy code or apps, the greater the potential for
propagating vulnerabilities. You need to manage these processes carefully
and not get too comfortable with how easy it is to deploy and scale apps.
Containers themselves are pretty secure. However, many people don’t
realize the potential for having a single point of failure with multiple
containers going down, for instance if a host server is lost. The impact
of this kind of event depends on a number of factors, including how the
original environment is configured for density.
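The density point can be made concrete with a short calculation: the more replicas packed onto one host, the larger the fraction of the service lost when that host fails. The placement data below is invented for illustration.

```python
from collections import Counter

# Hypothetical placement: which host each container replica landed on.
placements = ["host-a", "host-a", "host-a", "host-b"]

def worst_case_loss(placements):
    """Fraction of replicas lost if the most densely packed host fails."""
    counts = Counter(placements)
    return max(counts.values()) / len(placements)

print(worst_case_loss(placements))  # 0.75: host-a is a near single point of failure
```

Spreading replicas across hosts (anti-affinity) drives this number down and shrinks the single point of failure the speaker warns about.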
Securing an environment requires a layered approach that involves having
security controls at each step of the way, whether at a layer-three device,
at the endpoint itself, or in how you authenticate into a system. The most
important part of container security is access control. Once something has
access to a system, there may be controls to detect behavior, and someone
who is already in a system may approach very cautiously to avoid detection.
It all comes back to appropriate access control.
Paul Dackiewicz, Lead Security
Consulting Engineer, Advanced Network
Paul Dackiewicz has over 10 years of systems
engineering and cybersecurity experience in
the fields of healthcare, government, and value-
added resellers (VARs). He is currently leading
the security operations center (SOC) for a premier
managed security services provider (MSSP).
Container security begins with enforcing roles and responsibilities during
development, testing, and production. Ideally you will have segregation of
duties and segregation of access, which keeps your production container
logically separated from its development and test states. Defining roles
and responsibilities, and turning those on and off, determines who or what
process can promote a container from development to test, and from test
to production. These definitions become an integral part of your change-
management process. n
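The promotion rules described above can be sketched as a simple gate: each environment transition is allowed only for specific roles, so no single role can push a container from development straight to production. The role names and transition matrix are assumptions for illustration.

```python
# Hypothetical segregation-of-duties matrix: which role may perform
# which environment promotion.
ALLOWED = {
    ("dev", "test"): {"developer"},
    ("test", "prod"): {"release-manager"},
}

def can_promote(role, src, dst):
    """True if the role may promote a container src -> dst.
    Skipping environments (e.g. dev -> prod) is never allowed."""
    return role in ALLOWED.get((src, dst), set())

print(can_promote("developer", "dev", "test"))   # allowed
print(can_promote("developer", "test", "prod"))  # denied: different duty
print(can_promote("developer", "dev", "prod"))   # denied: no skipping
```

Each call is an auditable decision, which is what makes the gate a natural part of change management.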
Katherine Riley, Director of
Information Security & Compliance,
Katherine (Kate) Riley is skilled in leading
teams to define cloud architecture, and
in development of controls. She has
developed and implemented security
frameworks such as ISO and NIST, and
performed compliance reviews such as
FFIEC, HIPAA, HITRUST, SOX, GDPR, and
“IT IS VERY IMPORTANT WHEN
YOU ARE PULLING CONTAINER
IMAGES TO DRIVE A PROCESS,
THAT YOU VERIFY THE
AUTHENTICITY OF THOSE IMAGES.”
One potential vulnerability with containers is that if one container is
infected, that compromise can spread to the host. That’s because, unlike
segmented environments where different applications can run on different
operating systems, container environments typically run all the containers
on top of one operating system, and the containers take their functionality
from that operating system.
This is why it is very important, when you are pulling container images to
drive a process, that you verify the authenticity of those images. You need
to verify the sources and make sure you are using a known, secure URL.
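Digest pinning is one common way to do this verification: the image bytes you pulled must hash to the digest you obtained from a trusted source. A sketch with hashlib, using stand-in bytes rather than a real registry pull:

```python
import hashlib

# Digest published by a trusted source; in practice you would pin it
# in config (e.g. registry.example.com/app@sha256:...). Here it is
# computed from stand-in bytes so the sketch is self-contained.
image_bytes = b"stand-in for pulled image layers"
pinned_digest = hashlib.sha256(image_bytes).hexdigest()

def verify_image(content, expected_digest):
    """True only if the pulled content hashes to the pinned digest."""
    return hashlib.sha256(content).hexdigest() == expected_digest

print(verify_image(image_bytes, pinned_digest))          # authentic
print(verify_image(b"tampered content", pinned_digest))  # rejected
```

A pull that fails this check should abort the process, regardless of how trusted the URL looked.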
Cloud-platform functions can help enforce the verification of images.
For example, Amazon Web Services has an auto-scaling feature that
monitors container activity. If a container is reaching capacity, AWS will
automatically spin up an identical container to take on some of the load. If
there is a reduction in load, AWS automatically destroys that container. The
system will send notifications of these actions, which can be monitored
on a dashboard. That’s important in environments hosting high-volume
computing activity.
Darrell Shack, Cloud Engineer,
Cox Automotive Inc.
Darrell Shack is a seasoned systems
engineer focused on building resilient,
high-availability solutions. He has
experience developing solutions in the
public cloud on Amazon Web Services,
helping teams manage their costs and overall
application performance in the cloud.
“THE BIG CHALLENGE IN A
MASSIVELY SCALED CONTAINER
ENVIRONMENT IS THE NEED TO
CONTINUOUSLY SCAN AND MONITOR
FOR NONCOMPLIANT IMAGES…”
Containers have many advantages, but the way containers sit on a common
OS kernel creates a situation where compromising one single container can
provide access to the OS kernel and all other containers associated with it.
This requires continuous monitoring, and it requires a different approach
to patch management. In a traditional environment, you patch all the
time. However, in a container environment, you do not continuously patch
containers. When a vulnerability becomes known, you immediately update
the container image and deploy completely new containers. This changes
your entire approach to patch management.
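The rebuild-and-redeploy model can be sketched as: when a patched image digest is published, every container running an older digest is marked for replacement rather than patched in place. The fleet data and field names are illustrative.

```python
# Hypothetical running fleet: container id -> image digest it started from.
running = {"c1": "sha256:aaa", "c2": "sha256:bbb", "c3": "sha256:aaa"}

def to_replace(running, patched_digest):
    """Containers are treated as immutable: any container not on the
    patched image is destroyed and redeployed, never patched in place."""
    return sorted(c for c, d in running.items() if d != patched_digest)

print(to_replace(running, "sha256:bbb"))  # ['c1', 'c3'] must be redeployed
```

The patch lands once, in the image; the orchestrator then converges the fleet onto it.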
The big challenge in a massively scaled container environment is the
need to continuously scan and monitor for noncompliant images, and
authenticate images across different container platforms. Tools used to
monitor container activity need to be adaptable to different situations
at any point in time. A container that is streaming an application right
now may not be in 10 seconds. The tools need to be intelligent, perhaps
artificial intelligence (AI) driven. Everything is pattern based, behavior
based, and risk based. The tools need to be able to protect you in a way
that dynamically adapts to the current state of your constantly changing
environment.
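The continuous-scanning challenge can be sketched as a compliance check run over image inventories gathered from multiple orchestration platforms; checking each image against an allowlist of approved digests is one simple policy, and the inventory shapes here are assumptions.

```python
# Hypothetical image inventories from two orchestration platforms.
inventories = {
    "kubernetes": ["registry.local/app@sha256:aaa",
                   "registry.local/job@sha256:zzz"],
    "swarm": ["registry.local/app@sha256:aaa"],
}
APPROVED_DIGESTS = {"sha256:aaa"}

def noncompliant(inventories):
    """Images whose digest is missing or not on the approved list."""
    bad = []
    for platform, images in inventories.items():
        for image in images:
            digest = image.split("@")[-1] if "@" in image else None
            if digest not in APPROVED_DIGESTS:
                bad.append((platform, image))
    return bad

print(noncompliant(inventories))  # the unapproved job image is flagged
```

Run on a schedule, a sweep like this is the cross-platform scanning the speaker describes; smarter, behavior-based policies would slot in as additional predicates.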
Mauro Loda, Senior Security Architect
Mauro Loda is a passionate, data-
driven cybersecurity professional who
helped define and drive the “Cloud First”
strategy and culture within a Fortune 100
multinational enterprise. He is a strong
believer in offensive security and simple-
but-effective architecture-defense topology.
Emotional intelligence, pragmatism and
reliability are his guiding principles. He has
achieved numerous industry certifications
and actively participates in forums,
technology councils, and committees.
“ANOTHER CHALLENGE FOR
CONTAINERIZED ARCHITECTURES
IS THAT THEY MAKE FORENSICS
DIFFICULT.”
The biggest security concern when using containers is that they come out of
a centralized distribution area. This means if one file gets infected, that can
affect everything in the environment. The big challenge for environments that
use containers is how you minimize the risk of that centralized architecture.
Another challenge for containerized architectures is that they make forensics
difficult. In an environment that instantly spins up a machine to provide
on-demand services and then eliminates that container when it is no longer
needed, if the container is compromised, what did it do while it was up? For
instance, if something jumped from a computer to an image and then got
access from that image to another server before the image spun down, the
image is now gone but the damage is already done. Even if you have good
monitoring tools that triggered an alert on a machine that is now gone, you
no longer have access. The bad guys, depending on what kind of access
they get, can erase logs and do other things to cover their tracks. From a
forensics point of view, once you’ve discovered you’ve been breached, the
way containers work can make it very difficult to go back and trace the
steps of an attack. If you have a large enough budget, you may be able to log
everything, but that may not be feasible in a massively scaled environment.
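One mitigation for this evidence-loss problem is to ship a container’s logs and metadata to external, append-only storage before the container is destroyed. A minimal sketch, with an in-memory list standing in for a real off-host log pipeline:

```python
import time

# Append-only evidence store; in practice this lives off-host
# (object storage, a SIEM) so teardown cannot erase it.
evidence_store = []

def archive_before_teardown(container_id, logs):
    """Persist logs and a teardown timestamp before the container vanishes."""
    evidence_store.append({
        "container": container_id,
        "torn_down_at": time.time(),
        "logs": list(logs),  # copy, so later mutation can't alter evidence
    })

archive_before_teardown("web-7f3", ["GET /admin 403", "GET /admin 200"])
print(evidence_store[0]["container"])  # evidence outlives the container
```

Hooking a step like this into the orchestrator’s teardown path preserves a trail for forensics even after the compromised image is gone, at a fraction of the cost of logging everything.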
Addressing these challenges will fall on the way containerized environments
are architected and built. Most developers are not taught, and do not think,
about security first. They think about the application first and making it work.
James P. Courtney, Certified Chief
Information Security Officer, Courtney Consultants, LLC
James Courtney is a recognized cybersecurity
professional who has spoken at multiple
conferences, including the CyberMaryland
Conference. He is a Certified Chief Information
Security Officer (one of 1,172 in the world), serving as
the IT network and operations security manager for a
private SIP consulting firm in McLean, Virginia.
“THE REAL ISSUE IS WHETHER
YOU HAVE A DISCIPLINE IN
PLACE TO ENSURE SECURE
USE OF CONTAINERS.”
It’s not that the container creates the vulnerability. The real issue is whether
you have a discipline in place to ensure secure use of containers. If you’re
simply creating containers without monitoring and measuring, then you
don’t have a consistent process. Your vulnerabilities will be replicated
across your stacks because you don’t have disciplined engineering hygiene,
and if that’s the case, things can go downhill fast. You have to focus on
making sure those containers are consistent and that they’re healthy.
One trend we’re seeing in the industry is the emerging discipline of cloud
security. It sits between the old-school definition of security and the
concept of cloud, with a shared level of skill between the cloud team and
the security team. That’s where you can build a
disciplined process across the two teams that works much better in the
cloud than the old-school model of security.
Part of the challenge is you are dealing with such a dynamic environment.
What worked for you yesterday or even four hours ago might not work
for you today or tomorrow. You have to be continually paying attention
to potential new threats and risks. You need third-party assessments
to validate that the assumptions you’re making are accurate and that
you are taking the right steps to mitigate those risks. You need to
take an engineering approach, and in this environment, if you’re
running processes manually, you’re going to miss many things. It’s an
environment where everything must be automated.
Milinda Rambel Stone, Vice
President & CISO, Provation Medical
Milinda Rambel Stone is an executive
security leader with extensive experience
in building and leading security programs,
specializing in information-security
governance, incident investigation
and response, cloud security, security
awareness, and risk-management
compliance. As a former software engineer,
Stone has passion and experience in
building cloud security and DevSecOps
environments. She currently practices this
at Provation, where she is the vice president
and chief information security officer (CISO).