This document proposes a 3-part framework for adopting DevSecOps practices and culture change. The first part is to win over developers by building trust between security and development teams. The second part is to make security practices easy for developers to understand and implement. Visual tools are proposed to show maturity levels and guide progress. The third part is to provide transparency to management on adoption status through regular reporting. Easy-to-use security tools integrated into the development pipeline are also recommended, starting with software composition analysis and integrating static and interactive application security testing.
Overview of DevSecOps and lean/agile transformation in software security by Larry Maccherone.
Quote by Nelson Mandela emphasizing perspective in security practices.
Contact information for Larry Maccherone.
Emphasizes that security must be integral to development, shifting from bolt-on to built-in practices.
Discusses various security practices along the DevOps continuum and the importance of adoption by engineering teams.
A three-part framework focusing on winning trust, simplifying processes for dev teams, and ensuring management has visibility into DevSecOps status and goal-setting mechanisms.
Emphasizes making security testing tools readily accessible and integrating them into regular testing frameworks.
Recommendations and criteria for selecting DevSecOps tools to enhance software security.
Encourages reading about trust algorithms and connecting with Larry Maccherone for further engagement.
Larry Maccherone
Security
at the Speed of
Software Development
A lean/agile transformation approach
to achieving a DevSecOps culture
Larry Maccherone
LinkedIn.com/in/LarryMaccherone
Sponsored by:
Photo by: lasanta.com.ec (upscaled, text overlaid)
creativecommons.org/licenses/by/2.0/
Where you stand
depends on
where you sit.
~ Nelson Mandela
Security practices on the DevOps continuum → DevSecOps
• Static/IAST analysis
• Abuse case tests
• Code review
• Interrupt-the-pipeline
code analysis
• Threat modeling → backlog items
• Analyze/Predict → backlog items
• Design complies with policy?
• Test security features
• Common abuse cases
• Pen testing (Vulns found → Test scripts)
• Compliance validation (PCI, etc.)
• Fuzzing
• If we do X will it mitigate Y?
• Capacity forecasting
• Learning → Update playbooks and training
• Configuration validation
• Feature toggles/Traffic
shaping configuration
• Secrets management
• Log information for
after-incident analysis
• Intrusion detection
• App attack
detection
• Restore/maintain service
for non-attack usage
• RASP auto respond
• Roll-back or toggle off
• Block attacker
• Shut down services
• Analysis → Learning
• Defect/Incident 3-step
• New attack surface?
Plan to update threat model
That’s a lot of stuff!
How do we get
engineering teams to adopt?
Build security in
more than bolt it on
Rely on empowered engineering teams
more than security specialists
Implement features securely
more than security features
Rely on continuous learning
more than end-of-phase gates
Build on culture change
more than policy enforcement
DevSecOps Manifesto
We, the Security Team…
Recognize that Engineering Teams…
• Want to do the right thing
• Are closer to the business context and will
make smart trade-off decisions between
security and other risks
• Want information and assistance so they
can improve our security posture
Pledge to…
• Lower the cost/effort side
of any investment in
developer security tools or
practices
• Assist 2x as much with
preventative initiatives as
we beg for your assistance
reacting to security
incidents
Understand that…
• We are no longer gate keepers but rather tool-smiths and advisors
2
Easy for dev teams
to know what the right thing is
and
easy to do it
Visualizing a team’s DevSecOps maturity
Disciplines: ← Less mature practices … More mature practices →
Craftsmanship
100% of group members who have been
employed at least 90 days have been
through Yellow Belt training
Group has access to at least one
Green Belt
Group has:
• Minimum 15% Green Belt
• At least one Brown Belt
Group has:
• Minimum 25% Green Belt
• Minimum 5% Brown Belt
• At least 1 Black Belt
Architecture and Design
As appropriate, a threat model or a security architecture review (SAR) has been done, kept up to date with every significant change to the attack surface, and all issues resolved within SLA (typically 120 days for critical and high)
Asset Management
In iTRC all components of your applications have an accurate Security POC.
(To find: Application > Component > Responsibilities) (To add: Application > Component > New Responsibility > Security POC (select Responsibility from drop-
down) and add NTID)
DevSecOps Tools
Primary Code Analysis (PCA = SAST/IAST)
Software Composition Analysis (SCA)
PCA/SCA tool(s) are integrated into the
pipeline and auto run on commit,
merge, or cadence
Team has working agreements on how it becomes aware of, triages, and resolves findings in the current dev cycle to
stay within current team chosen policy (see below). Ex: Notices in Slack/email; and/or findings put in Jira backlog via
integration and considered at planning; and/or some highly visible console is checked on a regular cadence; and/or
pipeline is interrupted (“failed build” or disabled merge button), etc.
“Stop the bleeding” -
Policy/configuration is set to interrupt
the pipeline or stop progress unless we
get a clean scan confirming no new (not
legacy) vulnerabilities with all
appropriate checkers turned on
We have clean scans for our
application's legacy code for a small
# of high risk checkers (OWASP Top
10 or appropriate subset)
We have clean scans for our
application's legacy code for a large
# of medium-high risk checkers
(SANS top 25, PCI 3.2, etc. as
appropriate)
We have clean scans of all code with
all appropriate checkers on
including 3rd party code and open
source (that isn't covered by an SCA)
Network-originated
Scans
Findings for network-originated scans (ex: Qualys, Nessus) run against your
components by the security team are resolved within the SLA (typically 120 days
for critical and high)
Authenticated automated self-scans
(UScan, etc.) are run and findings are
resolved in the current dev cycle
(PCI only) Authenticated scans are
being run for your components and
findings resolved within SLA
Independent Security
Assessment
Independent Security Assessment (aka Pen testing) done & kept up to date at
appropriate cadence (typically annually) and issues resolved within SLA
Red/Purple Team exercises done on
the system your solution is part of
and issues resolved in the SLA
…
EXAMPLE
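The clean-scan levels in the maturity table above can be sketched as a simple gate that starts with a small set of high-risk checkers and is widened over time. This is an illustrative sketch, not any specific scanner's API; the checker names and findings below are made up:

```python
# Illustrative gate for the clean-scan maturity levels: enforce only a chosen
# subset of checkers, then "turn up the policy" by widening the subset as
# legacy findings are cleaned up. Checker names and findings are hypothetical.

def clean_scan(findings, enabled_checkers):
    """True when no finding was produced by a checker in the enabled subset."""
    return not any(f["checker"] in enabled_checkers for f in findings)

legacy_findings = [
    {"checker": "weak-hash", "file": "auth.py"},
]

# Early policy: a small OWASP-Top-10-style high-risk slice.
high_risk = {"sql-injection", "xss"}
# Later policy: a broader medium-high-risk set (SANS-Top-25-style).
broad = high_risk | {"weak-hash", "path-traversal"}
```

The point of the two sets is that the same gate function serves every maturity level; only the enabled subset grows.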
Visualizing an org’s DevSecOps maturity
Disciplines: ← Less Mature … More Mature →
Craftsmanship
100% Yellow Belt after first 90 days of employment
At least 1 Green Belt
Min 30% Green Belt; at least one Brown Belt
Min 50% Green Belt; Min 20% Brown Belt; at least 1 Black Belt
+6%
+5%
Threat Modeling
A threat model has been done and kept up to date with every significant change to the attack surface
+23%
…
EXAMPLE
Make it even easier
to do the right thing
with easy access to the right tools
Think of security testing tools
as just another testing
framework like those for
unit, acceptance, etc.
DevSecOps Tool Landscape
Static Analysis (aka SAST)
• Looks at source code
• Data/control flow analysis
• Prone to false positives
• Rapid feedback for developers
• Code fix suggestions
Dynamic Analysis (aka DAST)
• Exercises app via UI/API
• Senses vulnerability by response to input
• Zero? false positives. Report is an exploit
• High false negatives
• Difficult to implement especially w/ auth
• Sometimes hard to find code to remediate
Runtime Application
Security Protection
(RASP)
• Often uses same engine
as IAST
• Reports on “bad”
behavior
• Can abort transaction or
kill process to protect
Fuzzing (black box)
• Instruments system (to varying degrees)
• Sends unexpected input at API
• Looks at response and instrumentation output
• Great for testing protocols like SIP
• Good for REST APIs
• Potentially long run times
• Hard to find code to remediate
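As a toy illustration of the black-box fuzzing loop described above: throw unexpected input at an interface and watch for crashes. The parser below is a made-up stand-in, not any real protocol handler:

```python
# Toy fuzzing sketch: send random byte strings at a parser and collect the
# inputs that crash it. parse_message is a hypothetical stand-in for the
# protocol or API handler under test.
import random

def parse_message(data: bytes) -> bytes:
    length = data[0]                      # IndexError on empty input
    if length != len(data) - 1:
        raise ValueError("length header does not match payload size")
    return data[1:]

def fuzz(rounds: int = 500, seed: int = 42):
    rng = random.Random(seed)             # seeded so runs are reproducible
    crashes = []
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 6)))
        try:
            parse_message(data)
        except Exception:
            crashes.append(data)          # a real fuzzer would minimize these
    return crashes
```

Real fuzzers add instrumentation and input mutation, which is where the long run times and remediation difficulty noted above come from.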
Software Composition
Analysis (SCA)
• Identifies dependency and
version
• Checks CVE/NVD + … for
reported vulnerabilities
• Proposes version/patch to
remediate
• Checks license vs policy
• Runs fast
• Easy to implement
• Best bang for buck!
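In essence, an SCA check is a lookup of each declared dependency against advisory data, which is why it runs fast and is easy to implement. A minimal sketch; the advisory entries here are invented, not real CVE/NVD records:

```python
# Hypothetical advisory data: (package, version) -> (advisory id, fixed version).
# Real SCA tools pull this from CVE/NVD and other feeds and also resolve
# transitive dependencies; this sketch checks only direct, exact matches.
KNOWN_VULNS = {
    ("libxyz", "1.2.0"): ("HYPO-2018-0001", "1.2.5"),
}

def check_dependencies(deps):
    """Return upgrade proposals for dependencies with known advisories."""
    findings = []
    for name, version in deps:
        hit = KNOWN_VULNS.get((name, version))
        if hit:
            advisory, fixed = hit
            findings.append({"package": name, "advisory": advisory,
                             "upgrade_to": fixed})
    return findings
```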
IAST
• Runtime code analysis
• Combine dynamic/static
• Low false positives
• Depends on test coverage
• Immature but getting there
Primary Code Analysis (PCA)
DevSecOps Tool recommendations
• SCA tools (aka dependency checkers, open source security scanners) are
the best bang for the buck/effort. START HERE
• Next implement IAST or SAST (together, we created a category
grouping these two called Primary Code Analysis). FAVOR IAST
• Embed in pipeline and treat like an additional integration test suite
• For greenfield turn on all checkers and interrupt pipeline until issues
resolved
• For legacy, “stop the bleeding” by interrupting pipeline for NEW
issues and slowly turn up the interrupt policy until all legacy issues
are resolved. Plan to address all critical/high issues within 120 days.
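The “stop the bleeding” recommendation amounts to a baseline diff: fail only on findings that are not already in the legacy baseline. An illustrative sketch, not a specific tool's policy mechanism:

```python
# Interrupt the pipeline only for findings absent from the legacy baseline.
# Finding shapes are illustrative; real scanners key findings by rule plus
# location, or a fingerprint that survives line-number churn.

def new_findings(current, baseline):
    """Findings present in the current scan but absent from the baseline."""
    return [f for f in current if f not in baseline]

def should_interrupt(current, baseline):
    return bool(new_findings(current, baseline))

baseline = [{"rule": "xss", "loc": "ui.py:40"}]             # known legacy issue
current = baseline + [{"rule": "sqli", "loc": "db.py:12"}]  # one new issue
```

As legacy issues are burned down, entries leave the baseline, which is how the interrupt policy gets “slowly turned up” without blocking teams on day one.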
Criteria for DevSecOps tools
• Integrated and runs fast in pipeline for current dev cycle:
• Parallelized with other test suites, or…
• In-series, or…
• Incremental parallelized/in-series from asynchronous baseline, or…
• Asynchronous out of pipeline with baseline/feedback for next dev cycle
• Notifications the way the team wants: Slack, email, report,
dashboard, etc.
• Interrupt the pipeline the way the team wants: break the build,
grey out the merge button, etc.
• Environment-specific “policy” settings for …
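The notification and interruption criteria above boil down to a small per-team dispatch policy. A hypothetical sketch; the channel names, modes, and policy shape are illustrative, not any tool's real configuration schema:

```python
# Dispatch scan results per a team's chosen working agreement: where to send
# notifications, and how (or whether) to interrupt the pipeline. All channel
# and mode names below are made up for illustration.

def dispatch(findings, policy):
    actions = [(channel, len(findings)) for channel in policy.get("notify", [])]
    if findings and policy.get("interrupt") == "break-build":
        actions.append(("fail-pipeline", len(findings)))
    elif findings and policy.get("interrupt") == "block-merge":
        actions.append(("disable-merge-button", len(findings)))
    return actions

policy = {"notify": ["slack", "email"], "interrupt": "block-merge"}
```

The tool criterion being illustrated: the same findings should be routable to whatever channels and interrupt style the team has agreed on, rather than forcing one mode on everyone.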
WhiteSource
Agile Open Source Security Solution. Founded in 2011.
Offices: New York / Boston / Tel Aviv
Over 400 Customers, 3 OEMs
Over 300% Growth for 3 Consecutive Years
PLAN → BUILD & TEST → DEPLOY → MAINTENANCE
Selection · Detection · Alerting · Policy · Reporting · Documentation · Release · Maintenance