The network is dead. Reset your security mindset for the public cloud
More organizations are adopting cloud-first strategies as they initiate new
projects or migrate from older systems. To meet rigorous business demands,
they operate with frequent code releases, increasingly make use of containers,
and process and store data with compliance and cost management in mind. It’s a lot to
manage, and SIEMs and firewalls just can’t provide the level of insight required —
they aren’t engineered for automation, and they definitely can’t operate at scale.
Old-school security vendors can’t build solutions fast enough to address the
growing tide of cloud migrants and upstarts initiating a cloud-first strategy, so
they’re opting to piece together component parts to make something that vaguely
resembles a comprehensive solution. Their sales approach is menu-like, but their
product strategy is far from unified. This is confusing for customers, especially as
security teams question whether a company with a hardware mentality can adapt
its technology and product strategy to meet the velocity and scalability needs of
the cloud.
It’s time for a new approach, one optimized for the cloud and containerized
environments that provides comprehensive threat defense, intrusion detection,
and compliance management across cloud accounts and workloads, all at
scale. This book offers valuable insights into how innovative security managers
approach defense and risk management for their multicloud infrastructures.
Lacework is a SaaS platform that
automates threat defense, intrusion
detection, and compliance for cloud
workloads & containers. Lacework
monitors all your critical assets in
the cloud and automatically detects
threats and anomalous activity so
you can take action before your
company is at risk. The result?
Deeper security visibility and greater
threat defense for your critical cloud
workloads, containers, and IaaS
accounts. Based in Mountain View,
California, Lacework is a privately
held company funded by Sutter Hill
Ventures, Liberty Global Ventures,
Spike Ventures, the Webb Investment
Network (WIN), and AME Cloud
Ventures. Find out more at www.
TABLE OF CONTENTS
Chief Product Officer
Katherine Riley, Director of Information Security & Compliance, Braintrace
Paul Dackiewicz, Lead Security Consulting Engineer, Advanced Network
Darrell Shack, Cloud Engineer, Cox Automotive Inc.
Ross Young, Director of Information Security, Capital One
Mauro Loda, Senior Security Architect
James P. Courtney, Certified Chief Information Security Officer, Courtney Consultants, LLC
Milinda Rambel Stone, Vice President & CISO, Provation Medical
“IF YOU DID NOT HAVE A STRONG
SECURITY FRAMEWORK IN YOUR
ON-PREMISES MODEL, JUST MOVING
TO THE CLOUD BRINGS THOSE
OLD BAD HABITS WITH YOU.”
When growing and scaling infrastructure, moving to the cloud is a logical
next step. But the cloud presents a different kind of IT environment even
though many of the fundamental security challenges remain the same.
• Cost — There are costs associated with building and maintaining a
cloud-based security strategy, just as there are costs of securing on-premises
infrastructure.
• Focus — In the past, security focused on availability, then it moved
to risk, and later to compliance. Today it emphasizes optimization,
pushing on traditional approaches to reduce costs and scale.
• Resources — Traditionally you were limited by budgets, skills, and
legacy systems. The cloud bypasses some of the old issues but
places new demands on resources.
Katherine Riley, Director of Information
Security & Compliance, Braintrace
Katherine (Kate) Riley is skilled in leading
teams to define cloud architecture and in
developing controls. She has developed
and implemented security frameworks such
as ISO and NIST, and performed compliance
reviews such as FFIEC, HIPAA, HITRUST, SOX,
GDPR, and GLBA.
The same constraints are going to be factors when you go to the cloud, but now you manage them with
tools that give you more flexibility and that release you from dependencies you had before. Now you have
to think of the layers of cloud security, and architect a strategy around how you’re going to build cloud
applications, and how you test them, deploy them, and promote them. A key point, though, is that if you
did not have a strong security framework in your on-premises model, just moving to the cloud brings
those old bad habits with you.
You have more tools in a more accessible and dynamic format, and you can create containers for
development, testing, and production. But you still have to test for the same things and train your
resources. And you still need a process that’s going to ask which vulnerabilities you care about and which
ones are not important.
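The triage process described here, deciding which vulnerabilities you care about, can be sketched in a few lines. The severity field, threshold, and internet-facing flag below are hypothetical choices for the example, not a standard.

```python
# A minimal triage sketch: keep only the findings worth acting on.
# The scoring fields and threshold are illustrative assumptions.

def triage(findings, min_severity=7.0):
    """Keep findings that are both severe and on internet-facing assets."""
    return [f for f in findings
            if f["severity"] >= min_severity and f["internet_facing"]]

findings = [
    {"id": "V-1", "severity": 9.8, "internet_facing": True},
    {"id": "V-2", "severity": 9.8, "internet_facing": False},
    {"id": "V-3", "severity": 3.1, "internet_facing": True},
]
urgent = triage(findings)  # only V-1 meets both criteria
```

A real process would weigh more factors (exploitability, asset criticality, compensating controls), but the point stands: the filter encodes what you care about.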
“SERVICES CONSTANTLY
CHANGE AND EVOLVE
DEPENDING ON WHAT
THE USERS NEED.”
The cloud is basically an extension of your network that’s hosted on
someone else’s server. You should always have that mindset. And
bridging the connection between on-premises locations and customer
sites to the cloud is a big security concern. To do that safely, you have
to know what that looks like, and you have to know what safeguards are
available from the cloud service provider.
Things happen differently in the cloud. You recycle so many things when
you’re offering a public cloud instance, whether IPs, disk drives, or the
fact that you’re constantly destroying and recreating data on the fly
to perform any number of on-demand resource capabilities. Services
constantly change and evolve depending on what the users need, so you
are always varying how you deliver those services to the appropriate users.
Paul Dackiewicz, Lead Security
Consulting Engineer, Advanced Network
Paul Dackiewicz has over 10 years of systems
engineering and cybersecurity experience in
the fields of healthcare, government, and value-
added resellers (VARs). He is currently leading
the security operations center (SOC) for a premier
managed security services provider (MSSP).
A lot of what is happening is not user-facing. For example, if I have a server in my environment that
needs to talk to Amazon, there’s no user interaction. You are not only configuring your local on-premises
equipment to talk to the cloud, you are configuring the cloud, too. To be able to grant secure access
when necessary, you need to leverage their tools, their identity sources, and their federation. A lot of
autonomous connections are being made, which is why you have to stay on top of your access
controls.
Throughout the life cycle of a cloud process, you must always audit changes and controls. Keeping track
of how it’s being configured requires having eyes on it at all times.
“THERE IS A LOT OF CONTINUOUS
CHANGE HAPPENING IN A CLOUD
ENVIRONMENT THAT REQUIRES
CONTINUOUS MONITORING.”
When you move into a public cloud such as Amazon Web Services, you
and the cloud provider have separate security responsibilities. You have to
make sure you have a good migration plan that includes in-depth research
and understanding of the different kinds of security features offered by the
cloud provider. For example, you still need firewall protection, but AWS builds
firewall functionality into its EC2 instances. Configuration of those firewall
settings is your responsibility. Your security team needs to be familiar with
these settings and comfortable managing access-control lists.
There is a lot of continuous change happening in a cloud environment that
requires continuous monitoring. To make sure you are covering all your
bases, it’s worth investing in a tool that audits your settings. For instance,
there are AWS security configuration and monitoring tools that work by
taking an identity and access management role with audit permissions,
and then they look at all your configurations and roles. The results are
presented on a dashboard.
You can set up weekly, daily, or hourly scans, depending on your monitoring
needs. Hourly audits would pick up on a vulnerability that might appear
in the environment pretty quickly. In a highly dynamic cloud environment
in which new APIs are being built and new services developed, frequent
scanning is essential for good security.
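As an illustration of the kind of check such an audit tool performs, here is a minimal sketch. It assumes an audit role has already collected security-group rules into plain dicts; the field names and the set of "expected public" ports are assumptions for the example, not any tool's real schema.

```python
# Illustrative configuration audit: flag any security-group ingress rule
# that opens an unexpected port to the entire internet.
# Field names ("name", "ingress", "port", "cidr") are hypothetical.

WORLD = "0.0.0.0/0"          # CIDR matching any source address
ALLOWED_PUBLIC = {80, 443}   # ports we expect to be internet-facing (assumption)

def audit_security_groups(groups):
    """Return a finding for every world-open rule on an unexpected port."""
    results = []
    for group in groups:
        for rule in group.get("ingress", []):
            if rule["cidr"] == WORLD and rule["port"] not in ALLOWED_PUBLIC:
                results.append({"group": group["name"],
                                "port": rule["port"],
                                "issue": "open to the internet"})
    return results

# Example: the web tier legitimately serves HTTPS; the database should
# never be reachable from everywhere.
groups = [
    {"name": "web", "ingress": [{"port": 443, "cidr": WORLD}]},
    {"name": "db",  "ingress": [{"port": 5432, "cidr": WORLD}]},
]
audit_findings = audit_security_groups(groups)
```

Run hourly against freshly collected configuration, a check like this is what surfaces a newly introduced misconfiguration quickly.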
Darrell Shack, Cloud Engineer,
Cox Automotive Inc.
Darrell Shack is a seasoned systems
engineer focused on building resilient,
high-availability solutions. He has
experience developing solutions in the
Amazon Web Services public cloud, helping
teams manage their costs and overall
application performance in the cloud.
“IN A DYNAMIC CLOUD
ENVIRONMENT, THE OLD SECURITY
GROUPS ARE NOT AS IMPORTANT.
WHAT BECOMES MORE IMPORTANT
ARE SERVICE MESHES.”
When moving to the cloud, the way you secure things goes hand-in-hand
with how you lower maintenance and development costs. For example,
when you build your cloud architecture, are you talking about EC2 servers,
containerized servers, or Amazon serverless applications? As you go
further down that path, the cloud provider provides more functionality.
You no longer have to worry about patching the operating system, or
about configuring, monitoring, and scaling. All of those things are now
managed by AWS. This impacts the way you develop and the way you
secure your architecture.
In a dynamic cloud environment, the old security groups are not as
important. What becomes more important are service meshes and Layer
7 firewalls where you’re limiting the scope of applications by controlling
which microservices talk to which APIs. The challenge becomes how
to create those types of services in an enterprise service-level offering
so that all of your developers, from whatever lines of business, can now
consume them.
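The deny-by-default idea behind those Layer 7 controls can be sketched as a tiny policy check: a call succeeds only if the service-to-API pair is explicitly permitted. The service and API names below are invented for the example.

```python
# Toy service-mesh-style policy: deny by default, allow only listed pairs.
# Service and API names are hypothetical.

ALLOWED_CALLS = {
    ("checkout", "payments-api"),
    ("checkout", "inventory-api"),
    ("reporting", "inventory-api"),
}

def is_call_allowed(service, api):
    """Permit a call only when the (service, API) pair is explicitly listed."""
    return (service, api) in ALLOWED_CALLS
```

In a real mesh this policy lives in the proxy layer rather than application code, but the shape of the decision, an explicit allow-list scoping which microservices talk to which APIs, is the same.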
Ross Young, Director of Information Security, Capital One
Ross Young is a veteran
technologist, innovation expert,
and transformational leader, having
learned DevSecOps, IT infrastructure,
and cybersecurity from a young
age from both ninjas and pirates.
Young currently teaches master-level
classes in cybersecurity at Johns
Hopkins University and is a director of
information security at Capital One.
It starts with everyone agreeing to a trusted DevSecOps or continuous integration/continuous delivery
(CI/CD) pipeline. Organizations begin by looking at the earliest point at which they can find anything bad,
which is typically the integrated development environment (IDE), and that’s where they implement a
code-scanning tool. They also have a code check-in process that examines the quality of source code
through static code analysis.
The pipeline also needs to support component analysis that looks at all the code dependencies to see if
dependent components are properly patched and consistent, or what known vulnerabilities are in libraries
you are using. The challenge at this stage is optimizing the tools to focus on the vulnerabilities that
matter most in your environment, making sure you are seeing everything and scanning what you need to
scan, and building more security checks into the pipeline.
Then you analyze the code in production and scan for application-layer vulnerabilities. Doing all of those
things helps you have a more proactively secure environment. To gain runtime protection, you still need
tools that provide continuous real-time monitoring.
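The component-analysis step might be sketched as follows: compare pinned dependency versions against a known-vulnerability feed. The package names and advisory identifiers below are made up, and a real pipeline would pull from an advisory database rather than a hard-coded dict.

```python
# Illustrative dependency (component) analysis for a CI/CD pipeline.
# KNOWN_VULNS stands in for an advisory feed; all entries are fictitious.

KNOWN_VULNS = {
    ("libfoo", "1.0.2"): "ADV-001 (hypothetical): outdated crypto routines",
    ("libbar", "2.1.0"): "ADV-002 (hypothetical): unsafe deserialization",
}

def scan_dependencies(pinned):
    """Return the advisory for every (name, version) pin present in the feed."""
    return {dep: KNOWN_VULNS[dep] for dep in pinned if dep in KNOWN_VULNS}

# A build would fail (or warn) when the scan returns anything.
issues = scan_dependencies([("libfoo", "1.0.2"), ("libbaz", "3.4.0")])
```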
“IN THE CLOUD, EVERYTHING
SHOULD START FROM THE CODE,
AND EVERYONE MUST AGREE ON
WHAT IS NEEDED.”
In today’s world, the perimeter is expanding and visibility is impacted
by the volatile nature of the cloud. To assure security in this kind of
changeable environment, we strive to deploy an immutable architecture
and operations. For example, instead of patching a server, we simply
rebuild it from scratch and redeploy it to the cloud as a new image.
Our controls now need to focus on different levels of our application-
execution states, such as least-privilege design, data blocks, key
management, and all the different dependencies. And most important of
all is identity — everything is identity based.
In the cloud, everything should start from the code, and everyone must
agree on what is needed. Having consistency in the deployment life cycle
makes a big difference. This involves having a tightly controlled CI/CD
pipeline, and a way to verify the process end-to-end.
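The rebuild-rather-than-patch pattern can be illustrated with a small sketch: the server record is immutable, so "fixing" it means building a replacement from a new image. The Instance record and image tags are assumptions for the example.

```python
# Sketch of immutable operations: never mutate a running server record;
# build from a new image and replace the instance wholesale.

from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the record cannot be changed in place
class Instance:
    name: str
    image: str

def redeploy(instance, new_image):
    """Return a fresh instance built from the new image; discard the old one."""
    return Instance(name=instance.name, image=new_image)

web = Instance("web-1", "app:2024-01")
web = redeploy(web, "app:2024-02")  # rebuild and redeploy, never patch
```

Attempting to assign to `web.image` directly would raise an error, which mirrors the operational rule: changes happen only by producing a new image.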
Mauro Loda, Senior Security Architect
Mauro Loda is a passionate, data-
driven cybersecurity professional who
helped define and drive the “Cloud First”
strategy and culture within a Fortune 100
multinational enterprise. He is a strong
believer in offensive security and simple-
but-effective architecture-defense topology.
Emotional intelligence, pragmatism and
reliability are his guiding principles. He has
achieved numerous industry certifications
and actively participates in forums,
technology councils, and committees.
When it comes to cloud security, everyone in the organization — not
only the security department — needs to feel ownership responsibility
for security. There are too many ways human error can introduce
vulnerabilities into the system. Only with the mindset that security is a
collective effort will you be able to control the variables needed to secure
your environment.
One of the biggest challenges in cloud security is verifying that the
controls you put in place are actually working. It’s surprising that many
large organizations still manually check each control they use. In a cloud
environment operating at scale, that becomes an impossible task.
James P. Courtney, Certified Chief
Information Security Officer, Courtney Consultants, LLC
James Courtney is a recognized cybersecurity
professional who has spoken at multiple
conferences, including the CyberMaryland
Conference. He is a Certified Chief Information
Security Officer (one of 1,172 in the world), serving as
the IT network and operations security manager for a
private SIP consulting firm in McLean, Virginia.
There are tools available to automate this process. They monitor and analyze all the security tools you
have in place to verify they are performing as expected.
For example, if you implement a firewall in your environment and you expect it to have a certain level of
traffic, the tool can verify that and alert you if it is not behaving as expected. This kind of continuous,
active monitoring is essential in a continuously changing cloud environment.
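A verification check of the kind described, comparing a control's observed behavior against what you expect of it, might look like the following sketch. The traffic metric, expected level, and tolerance are hypothetical.

```python
# Illustrative control verification: alert when a firewall's observed
# activity deviates too far from its expected level. Thresholds are
# assumptions for the example.

def verify_control(observed_events, expected_events, tolerance=0.5):
    """Alert when observed activity deviates from expectation beyond tolerance."""
    if expected_events == 0:
        return "alert: control appears inactive"
    deviation = abs(observed_events - expected_events) / expected_events
    if deviation > tolerance:
        return f"alert: deviation {deviation:.0%} exceeds tolerance"
    return "ok"

status_normal = verify_control(observed_events=90, expected_events=100)
status_anomaly = verify_control(observed_events=5, expected_events=100)
```

Run continuously, a check like this turns "is the firewall actually doing anything?" from a manual review into an automated signal.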
“WHEN OPERATING IN THE CLOUD,
YOU MUST INTEGRATE SECURITY
INTO YOUR STRATEGY SO THAT
MONITORING AND REMEDIATION
BECOME AN INTEGRAL PART OF
YOUR OPERATIONAL PLAN.”
The public cloud is a very different environment from your typical
physical data center, because everything is living and breathing —
and changing. You have to think differently in terms of your overall
approach, what the security architecture looks like, how you strengthen
security, and how you automate it. There is a great deal of security
hygiene you may not have considered in the past.
To have the level of visibility you need in the cloud, you have to adapt
controls and engineering practices and apply a lot more automation.
This means automating processes that scan for and identify
vulnerabilities, and automating vulnerability remediation at the code
and container layer. You must also place strong security checkpoints
in place along the way so that you know what’s happening in every
Milinda Rambel Stone, Vice
President & CISO, Provation Medical
Milinda Rambel Stone is an executive
security leader with extensive experience
in building and leading security programs,
specializing in information-security
governance, incident investigation and
response, cloud security, security awareness,
and risk-management compliance. As a
former software engineer, Stone has passion
and experience in building cloud security
and DevSecOps environments. She currently
practices this at Provation, where she is the
vice president and chief information security
environment. Because you are continuously monitoring, the concept of manual monitoring is not going to
When operating in the cloud, you must integrate security into your strategy so that monitoring and
remediation become an integral part of your operational plan. That’s why the DevSecOps model is
so important in cloud implementations, where you have security engineers, software engineers, and
operational engineers partnering together. We all own the cloud-security challenge. n
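The automated scan-and-remediate flow this section describes might be sketched as follows; the finding types and remediation actions are invented examples, and anything without a known automated fix is escalated to a human.

```python
# Sketch of an automated remediation plan: map each scan finding to an
# action, or flag it for a security engineer. All names are hypothetical.

REMEDIATIONS = {
    "outdated_base_image": "rebuild container from patched base image",
    "open_port": "restrict security-group rule to known CIDRs",
}

def remediate(scan_findings):
    """Pair every finding with an automated action or a human escalation."""
    return [(f["id"], REMEDIATIONS.get(f["type"], "escalate to security engineer"))
            for f in scan_findings]

plan = remediate([
    {"id": "F-1", "type": "open_port"},
    {"id": "F-2", "type": "unknown_malware"},
])
```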
When operating in a cloud environment, many resources are recycled, such as IP addresses, disk drives, or data that is
constantly destroyed and recreated on the fly to fulfill any number of on-demand resource requirements.
Securing the cloud goes hand-in-hand with operational considerations. Whether your cloud architecture consists of EC2
servers, containerized servers, or Amazon serverless applications determines levels of built-in functionality. This impacts the
way you develop and the way you secure your architecture.
Analyzing code in production and scanning for application-layer vulnerabilities helps you have a more proactively secure
environment. To gain runtime protection, you still need tools that provide continuous real-time monitoring.