DEVSECOPS
A Secure SDLC in the
Age of DevOps and
Hyper-Automation
By Alex Senkevitch, CISSP, CISM
ISSA Wisconsin
January Lunch Meeting
08 Jan 2019
WHAT’S IN STORE
1.0 Background (this stuff)
2.0 The Birth of a Paradigm
3.0 Throwing the Baby Out With the Bath Water
4.0 A More Mature Pipeline
5.0 Q&A
YOUR SPEAKER TODAY IS…
Alex Senkevitch, CISSP, CISM
o Security researcher and architect for over 20 years
o Working for/consulting to Fortune 500/Global 2000 for 20 years
o Worked in embedded systems and network engineering before that
o Have patents in multi-tiered security and event analytics systems
o Former product manager at Veracode (MPT product)
o Have been architecting and developing DevSecOps implementations since
2012
THE BIRTH OF A PARADIGM 2.0
What Did I Miss?
IN THE BEGINNING, THERE WERE THREE…
o Traditional production software ecosystem:
o Three Discrete Stakeholders: Development, Operations, Security
o Discrete Subject Matter Experts (SMEs) on dedicated teams, attempting to work together
o Ideally: Functional, Stable, and Secure software solutions are the result
o Attempts to streamline “operations” made by both developers and operations:
o Developers wanted to reduce/remove obstacles to the market (speed things up)
o Operations wanted to make their day-to-day “easier” (on-call is no fun, especially for thousands of servers)
o Technology and development architecture was mostly “static”
o Enter “DevOps” (circa 2009)…
WHEN THREE BECOME TWO… 2.1
The Emergence of DevOps
A PHILOSOPHY EMERGES - DEVOPS
o Started as an Agile development philosophy circa 2009 (linked to Patrick Debois)
o Originated from the Developer perspective (“if only we didn’t have to deal with the ops
team”)
o Adopted mostly from desperation and euphoria, but viewed as “the answer”
o As of today, there still is no standardized definition of it—proper and correct are in the eye of the implementor
o Why did it get so much traction so fast?
o Simple…
o Development is directly tied to revenue
o Operations directly to expense
o Elevator pitch: Automation of “operations” equals “cost reduction” (immediate ROI)
o Security still largely left to status quo approaches initially
THE TECH DISRUPTION
o Petabyte datacenters become mainstream (circa 2008-2009); the largest operators start pushing toward exabyte scale
o Google, Facebook, Amazon vs. Rackspace and Server Central
o Managed Services start to decouple from underlying bare metal
o OpenStack introduces first serious vendor agnostic data center orchestration (2010)
o Everything starts shifting from static to dynamic in nature (e.g., “software defined”)
o Cloud Service Providers (CSPs) started to emerge as viable alternatives, not a novelty (2011)
o CSPs started to push the notion of “immutable infrastructure” as core to their service
o “Pets vs. Cattle” emerged as the paradigm
o Stop “raising” servers, and start driving the herd—Infrastructure-as-Code was born
PUNCTUATED EQUILIBRIUM: DEVOPS AS A PRACTICE
o By 2012 there was a convergence—philosophy found technology innovation
o Patterns of practice emerged
o With the advent of implementation patterns, and adoption by CSPs, the migration from
traditional managed services data centers started to shift into the cloud
o “Cloud Native” came to equate to “DevOps native”
o Provisioning occurred through APIs and SDKs (“Aha! We speak native Developer!”)
o Traditional data centers were left to try and implement some kind of “DevOps” (cue the vendors!)
o This new practice was defined by its dependency upon automation
o Continuous Integration (CI) solutions were tapped for code quality and build processes
o Continuous Deployment (CD) solutions were brought in to bridge CI into the elastic realm
o The CI/CD pipeline was born
AND WHEN TWO BECOME ONE 2.2
DevSecOps is Born
“IT WORKED FOR OPS…”
o By 2016, with AWS exponentially growing, the notion of DevSecOps became a thing in earnest
o Security viewed as the last hurdle to immediate time to market
o One of the themes of AWS’s re:Invent 2017 was “Compliance-as-Code”
o The notion of automating security controls and compliance checks holistically in DevOps
o More and more articles and seminars were being held where Security was now being rolled into
DevOps
o This was done, in part, as assurance that “Cloud Native” did not mean “World Readable”
o Dozens of major S3 world readable breaches were making headlines in 2017
o Checking S3 bucket permissions topped the Trusted Advisor list (a minimal check is sketched below)
o Simultaneously, a whole new lineup of startups was rolling out security software for automating security processes inside the automation itself
o The establishment was no longer established
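To make “Compliance-as-Code” concrete: below is a minimal sketch, assuming Python with boto3 and AWS credentials already configured, that flags buckets whose ACLs grant access to the public AllUsers/AuthenticatedUsers groups, roughly the misconfiguration class behind the 2017 world-readable headlines. It is illustrative only, not a complete control (bucket policies and Public Access Block settings would also need checking).

```python
# Minimal compliance-as-code sketch: flag S3 buckets readable by public groups.
# Assumes boto3 is installed and AWS credentials/region are already configured.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_world_readable_buckets():
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        for grant in acl["Grants"]:
            grantee = grant.get("Grantee", {})
            if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
                flagged.append((name, grant["Permission"]))
    return flagged

if __name__ == "__main__":
    for name, permission in find_world_readable_buckets():
        print(f"WORLD-ACCESSIBLE: {name} grants {permission} to a public group")
```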
WE HAVE DEVSECOPS—NOW WHAT?
o So started the free-for-all
o Traditional security organizations resisted
o Stronger developer organizations started to prevail
o New security tech started to be produced in the commercial and Open Source markets
o Developers started evaluating their own solutions
o It’s automatable, right?
o We just need one of these, and one of those, right?
o Security orgs started to be bypassed since developers could start to demonstrate
compensating controls
o Or so they thought
o Initial DevSecOps attempts re-surfaced weak practices and discretionary enforcement of controls
THROWING THE BABY OUT WITH THE BATH WATER 3.0
Lessons From the Field:
Learning the Hard Way…
All Over Again
“Those who fail to learn from history are condemned to repeat it.”
- Winston Churchill, 1948
AUTOMATION WILL SOLVE EVERYTHING, RIGHT?
o Automation viewed as infallible compared to humans:
o Faster
o More capable of parallel execution
o Removes the “inefficient human” from the process
o Security functions started to get codified and automated…by non-security personnel
o Achieved coverage wasn’t as comprehensive as first believed
o If a process couldn’t return in “cloud time” (minutes), then it was viewed as an outlier
o Unnecessary prior to deployment and would be dealt with “at some other time”
o Errors now occurred at “cloud speed”
o Coverage suffered
BREAK GLASS: WHEN MANDATORY BECOMES DISCRETIONARY?
o It is not uncommon, in the event of “hardship”, for security controls to simply be dropped/disabled (they’re just YAML configs, after all)
o Most DevOps orgs originated as development orgs first—functionality first
o Very few outside of the original security teams understand the fiduciary responsibility of security
o Compliance controls are usually included in the break glass overrides
o In a DevOps world, there really is no such thing as Mandatory Access Controls (MAC)
o If the automation administrator doesn’t agree with security directives…
FAILURE IS OPTIONAL
o So you get all this security automation set up in your pipeline
o However, the dev manager doesn’t like their builds being “failed” (halted)
o They would prefer you just send them an email with the errors, but let them proceed (see the sketch below)
o (Reference previous “Break Glass”)
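A minimal sketch of that pattern, using a hypothetical gate step (Finding and the gate itself are illustrative names, not a real pipeline API): whether the build actually halts or the team just gets a report is a single configuration flag, which is exactly how “mandatory” quietly becomes discretionary.

```python
# Hypothetical security gate step; the "enforce" flag is the whole ballgame.
import sys
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str   # "critical", "high", "medium", "low"
    title: str

def gate(findings, enforce, fail_on=frozenset({"critical", "high"})):
    blocking = [f for f in findings if f.severity in fail_on]
    if not blocking:
        return 0
    report = "\n".join(f"[{f.severity.upper()}] {f.title}" for f in blocking)
    if enforce:
        print("Security gate FAILED:\n" + report, file=sys.stderr)
        return 1                                       # non-zero exit halts the build
    print("Security gate (warn-only):\n" + report)     # "just send us an email"
    return 0                                           # build proceeds anyway

if __name__ == "__main__":
    findings = [Finding("critical", "Hardcoded credential in config file")]
    sys.exit(gate(findings, enforce=False))            # break-glass: flipped to warn-only
```

If the enforce flag lives in a repo the development team controls, the control is discretionary no matter what the written policy says.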
ONE TOOL IS AS GOOD AS ANOTHER
o Failure to understand what security tools and technologies actually do
o Software Composition Analysis is the same as Static Analysis, right?
o When non-security personnel select the security tooling, coverage can suffer mightily
o There can also be a tendency to only implement one or two types of coverage
ALL SECURITY OUTPUT IS THE SAME
o As different tools are used, they can be viewed as all having the same type and depth of
output
o “They’re all reporting ‘vulnerabilities’…it’s all the same”
o There can be a lack of understanding about data convergence and enrichment
o Different tools can report the same finding, but surface different facets of it
o In a hybrid pipeline, data convergence becomes very important
o Provide a single actionable finding data stream (a minimal merge sketch follows)
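Here is a minimal convergence sketch, assuming each tool’s output has already been normalized to a common shape (the tool/CWE/file/line/severity fields are illustrative choices, not any vendor’s schema): findings that point at the same weakness in the same location are merged into one actionable record, with each tool contributing its facet.

```python
# Sketch: merge normalized findings from multiple tools into one record per
# (CWE, file, line) so downstream consumers see a single actionable stream.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Finding:
    tool: str        # e.g. "sast", "dast", "sca"
    cwe: str         # e.g. "CWE-89"
    file: str
    line: int
    severity: int    # 1 (low) .. 5 (critical)
    detail: str

def converge(findings):
    merged = defaultdict(list)
    for f in findings:
        merged[(f.cwe, f.file, f.line)].append(f)
    for (cwe, path, line), group in merged.items():
        yield {
            "cwe": cwe,
            "location": f"{path}:{line}",
            "severity": max(f.severity for f in group),      # keep the worst rating
            "reported_by": sorted({f.tool for f in group}),
            "facets": [f"{f.tool}: {f.detail}" for f in group],
        }
```

Real enrichment goes further (reachability, exploit intelligence, ownership), but even this level of deduplication keeps teams from triaging the same issue three times.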
A MORE MATURE PIPELINE 4.0
QUICK REVIEW: TYPES OF SECURITY FUNCTIONS
o Software Composition Analysis (SCA) – Scans third-party packages/code, usually by version-specific hashes
o Very fast (minutes), but only covers the file hashes it has records for—may miss files
o Static Application Security Testing (SAST), “Static” – Scans source code or binary files
o Moderate (hours); provides much better coverage, but has problems with hierarchy and is very prone to FPs
o Dynamic Application Security Testing (DAST), “Dynamic” – Traditional network-based scanning
o Moderate-to-fast (hours); provides reasonable breadth-wise coverage, some depth—not as many FPs
o Manual Testing, “Pen Testing” – Provides the most comprehensive breadth- and depth-wise tests
o SLOW (days); provides the best identification of complex attack vectors, but can’t cover as much real estate
o Hybridized Functions:
o Custom scripts, pre-commit hooks, IDE integrations, etc. (a pre-commit example is sketched below)
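As one concrete example of those hybridized functions, here is a minimal pre-commit hook sketch (the regex patterns and file handling are illustrative; a real hook would typically call a dedicated secret scanner) that blocks a commit when staged files appear to contain hardcoded credentials.

```python
#!/usr/bin/env python3
# Illustrative pre-commit hook: block commits containing likely secrets.
# Save as .git/hooks/pre-commit (executable); the patterns are examples only.
import re
import subprocess
import sys

PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                   # AWS access key ID shape
    re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}"),
]

def staged_files():
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main():
    hits = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except (IsADirectoryError, FileNotFoundError):
            continue
        for pattern in PATTERNS:
            if pattern.search(text):
                hits.append(f"{path}: matches {pattern.pattern}")
    if hits:
        print("Possible secrets detected; commit blocked:\n" + "\n".join(hits))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Local hooks are easy to bypass, so a server-side or CI-side check should back this up.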
WHAT DO WE REALLY NEED TO TEST?
o The Code Pyramid:
o Third-Party – The lion’s share; can be scanned with SCA
o WARNING: Not all SCAs scan as deeply
o Configuration/IaC – can also be scanned with (certain) SCAs [mostly commercial]
o Custom Code – this is where we should be spending most of our time and resources
o This is the Intellectual Property we’ve developed
[Code Pyramid figure – tier effort labels: Quick, Quick-ish, Moderate-Hard]
HERE’S WHAT TO WORRY ABOUT
o Focus on the “Custom Code” portion
o Generally comprised of:
o Configurations (deployment descriptors, etc.)
o Network-bound Layer (services, etc.)
o Internal Code
o Internal Code is where the highest latent risk potentials
will be
THINGS TO LOOK AT WHEN SELECTING TOOLS
o SCA lives and dies on two things:
o How many VDBs it pulls from (diversity of signatures)
o How far up the third-party stack it can go (can also include container awareness)
o SAST will be a love/hate tech, but you don’t have much choice:
o Will most likely be very noisy (high FP rates)
o Is subject to “rule drift” – when the vendor suddenly starts writing rules for languages you don’t care about, and not for the ones you do
o Usually can’t really see externally facing “endpoints” (i.e., what you can talk to from the network)
o If your devs like “wrapping”/nesting everything, it will probably have some issues with it
o DAST can be good at enumerating initial network endpoints/injection points
o Usually not as high a FP rate as SAST
o Starts to “spin” the deeper into the running app it goes (e.g., custom injection testing, etc.)
o Traditionally has issues with APIs of just about any sort (again, subject to “drift”)
PIPELINE STRATEGIES
o Pick a single SCA solution capable of scanning up to the edge of your custom code
o Open Source can be nice, but not if it’s missing the most obvious low-hanging fruit (e.g., doesn’t scan npm packages)
o Use triggered AND scheduled SAST actions
o Trigger off a post-commit hook for specific branches (merge/feature/release branches)
o Schedule jobs to run on the entire repo over time (you don’t have to wait for a dev to do something)
o Use Case: Use scheduled jobs to create project baselines and triggered jobs to do differential scans (see the sketch below)
o Use DAST in the system integration stage—if it’s not all together and running, you can’t scan it
o Use a tiered execution strategy with graduated SLAs—fastest first and most often, slower later and less
often
o Do NOT skip a manual test!! Make that the highest tier and farthest out in the pipeline, but it MUST be done to find
complex attack vectors.
o NOTE: If your current manual tests are showing nothing but “scanner fluff”, consider a different manual test provider
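The triggered-versus-scheduled split can be sketched as follows, assuming a hypothetical run_sast() wrapper around whatever SAST product is actually in use (no specific vendor API is implied): the scheduled job scans the whole repo to maintain a baseline, while the post-commit job submits only the files that changed on watched branches.

```python
# Sketch of "scheduled baseline vs. triggered differential" SAST scans.
# run_sast() is a hypothetical wrapper around your actual SAST tool/API.
import subprocess

WATCHED_PREFIXES = ("release/", "feature/", "merge/")

def run_sast(paths=None, label="baseline"):
    # Placeholder: call your SAST vendor's CLI or API here.
    target = "entire repository" if paths is None else f"{len(paths)} changed file(s)"
    print(f"[{label}] submitting {target} for static analysis")

def scheduled_baseline():
    """Run from a cron/CI schedule; scans everything regardless of dev activity."""
    run_sast(paths=None, label="scheduled-baseline")

def post_commit_differential(branch):
    """Run from a post-commit hook; scans only what just changed on watched branches."""
    if not branch.startswith(WATCHED_PREFIXES):
        return
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    changed = [p for p in out.stdout.splitlines() if p]
    if changed:
        run_sast(paths=changed, label=f"diff-{branch}")

if __name__ == "__main__":
    scheduled_baseline()
    post_commit_differential("release/2019.01")
```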
DON’T BE AFRAID OF DATA PROCESSING
o There’s a lot of data coming; it’s the data economy
o Don’t be afraid to leverage the major advances in data convergence/enrichment for the
output from the security tools
o Also think about external integrations
o Email notices are so 10 years ago
o Think about integrating converged findings into a “single pane of glass” (e.g., issue tracker)
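As a sketch of that “single pane of glass” integration, assuming a generic REST-style issue tracker (the URL, token variable, and JSON fields below are hypothetical placeholders, not any particular product’s API), each converged finding becomes one tracked issue rather than another email.

```python
# Sketch: file one issue-tracker ticket per converged finding.
# TRACKER_URL, TRACKER_TOKEN, and the JSON fields are hypothetical placeholders.
import os
import requests

TRACKER_URL = "https://tracker.example.com/api/issues"  # hypothetical endpoint

def file_ticket(finding):
    payload = {
        "title": f"[{finding['cwe']}] {finding['location']}",
        "severity": finding["severity"],
        "description": "\n".join(finding["facets"]),
        "labels": ["devsecops"] + finding["reported_by"],
    }
    resp = requests.post(
        TRACKER_URL,
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['TRACKER_TOKEN']}"},
        timeout=10,
    )
    resp.raise_for_status()

# Usage: for finding in converge(all_findings): file_ticket(finding)
```

Paired with the converge() sketch earlier, this yields one deduplicated ticket stream instead of per-tool noise.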
QUESTIONS & ANSWERS
