Scaling Security in a Cloud
Environment (@ UK Azure UG)
London, 29th Sep 2016
@DinisCruz
Me
• Developer for 28 years
• AppSec for 14 years
• Day job:
• Photobox Group CISO
• Leader OWASP O2 Platform
project (.Net REPL on steroids)
• @DinisCruz
• http://blog.diniscruz.com
• http://leanpub.com/u/
DinisCruz
Scaling Security in a Cloud
Environment


Is all about Testing


and Automation
View your pipeline as an App
Your cloud environment is
‘The Application’
This is ‘The Application’
You need to test everything!!!!

From provisioning, to deployment
to scaling (up and down)
Test your assumptions and
Behaviour
Write ‘tests’ that execute
regularly

What a ‘test’ is, is a very
interesting question that we
will explore in this presentation
Key concept:


Performance test your site
every day, and drop the tests
during high-volume events/periods
QA Tests are the best
performance tests

(randomise execution order)
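A minimal sketch of the idea in Python, with hypothetical check names: run the same QA checks in a random order and time the run, so the suite doubles as a lightweight performance test.

```python
import random
import time

# Hypothetical QA checks standing in for a real suite;
# in practice these would drive the site like real users.
def check_login(): time.sleep(0.01)
def check_search(): time.sleep(0.01)
def check_checkout(): time.sleep(0.01)

def run_suite_randomised(checks, seed=None):
    """Run QA checks in a random order and report total duration."""
    order = list(checks)
    random.Random(seed).shuffle(order)  # randomise execution order
    start = time.perf_counter()
    for check in order:
        check()
    return [c.__name__ for c in order], time.perf_counter() - start

order, duration = run_suite_randomised(
    [check_login, check_search, check_checkout], seed=42)
print(order, round(duration, 3))
```

Randomising the order also surfaces hidden dependencies between tests, which is exactly the kind of assumption this talk argues should be tested.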
100% Code Coverage
Not the Summit but ‘base camp’
Who here (easily) writes code
with 100% code coverage?



in .Net and in JavaScript
Who uses
To scale security (and testing) it
is key that you execute tests in
real-time and see code coverage
https://www.slideshare.net/DinisCruz/start-with-passing-tests-tdd-for-bugs-v05-22-sep-2016
When creating tests on the ‘Fix’ stage,
the focus (& time allocated) is on 

fixing the bug (not on testing it)
When creating tests on the ‘Issue Creation’
stage, the focus (& time allocated) is on 

how to test it and what is its root cause
WAFs
Web Application Firewalls
Azure WAF - Who is using it?
AWS + Lambda = Finally a WAF that can work!
This is the
KILLER

FEATURE
Autoscaling
• The real power of the cloud
• Who is using this in production?
• How fast do you auto-scale?
– Seconds?
– Minutes?
• Do you also aggressively autoscale down?
Autoscaling
• Image on the left is a set of
rules and behaviours
• How do you test them?
– Manually
– Quasi-Manually
– With Visualisation?
– With Tests?
What about testing your auto-scale?
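One way to make autoscaling rules testable is to model them as a pure function. This is an illustrative sketch; the thresholds and names are assumptions, not any cloud provider's API:

```python
# A minimal, hypothetical model of an autoscaling rule so the
# rule itself can be unit-tested.
def desired_instances(current, cpu_percent, min_n=2, max_n=10):
    """Scale up on high CPU, aggressively scale down when idle."""
    if cpu_percent > 80:
        target = current * 2           # double under load
    elif cpu_percent < 20:
        target = max(current // 2, 1)  # halve when idle
    else:
        target = current
    return max(min_n, min(max_n, target))

# Tests that encode our expectations of the rules' behaviour
assert desired_instances(4, cpu_percent=90) == 8   # scales up
assert desired_instances(8, cpu_percent=10) == 4   # scales down
assert desired_instances(2, cpu_percent=5) == 2    # respects floor
assert desired_instances(8, cpu_percent=95) == 10  # respects ceiling
```

Once the rules live in code like this, "how do you test them?" has the same answer as for any other application logic: with tests.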
DevOps


Push left

Pushing vulnerabilities faster to
production!
DevOps
• In DevOps, the production pipeline is pushed faster and
faster
• Continuous Integration (CI) needs to be tested as an
application
• We need to think of the CI pipeline as a graph
• Write rules (i.e. tests) to validate our expectations
• We need Static analysis technology !!!!!! (SAST for CI)
• This will allow us to understand how the pipeline behaves
and interconnects
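The "CI pipeline as a graph" idea can be sketched as follows; the stage names are hypothetical, and the rule is just one example of a testable expectation:

```python
# Sketch: model the CI pipeline as a directed graph and write
# rules (i.e. tests) that validate our expectations of it.
pipeline = {
    "commit": ["build"],
    "build": ["unit-tests"],
    "unit-tests": ["security-scan"],
    "security-scan": ["deploy"],
    "deploy": [],
}

def reachable(graph, start):
    """Every stage reachable from `start` by following edges."""
    seen, stack = set(), [start]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Rule: nothing reaches 'deploy' without passing 'security-scan'
no_scan = {k: [v for v in outs if v != "security-scan"]
           for k, outs in pipeline.items()}
assert "deploy" in reachable(pipeline, "commit")
assert "deploy" not in reachable(no_scan, "commit")
```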
Jira and Confluence 

(for scaling Security)
This is how we
Scale Security
activities
• Using out-of-the-box Jira functionality
CUSTOMIZED JIRA WORKFLOWS
We use Jira as a Graph Database
• Labels
• Extra Attributes
• Global Key
• Workflows
• Assignments
• TimeStamps
• Linked to Epic
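As an illustration of treating Jira as a graph, here is a sketch that walks issue links transitively; the data shape mimics, but is not, the exact Jira REST schema:

```python
# Hypothetical issue data shaped like Jira issues with links,
# showing how labels and links let Jira act as a graph database.
issues = [
    {"key": "RISK-1", "labels": ["risk"], "links": ["TASK-10", "TASK-11"]},
    {"key": "TASK-10", "labels": ["task"], "links": []},
    {"key": "TASK-11", "labels": ["task"], "links": ["TASK-12"]},
    {"key": "TASK-12", "labels": ["task"], "links": []},
]

def build_graph(issues):
    """Adjacency map: issue key -> linked issue keys."""
    return {i["key"]: i["links"] for i in issues}

def linked_issues(graph, key, seen=None):
    """Walk issue links transitively from one key."""
    seen = seen if seen is not None else set()
    for nxt in graph.get(key, []):
        if nxt not in seen:
            seen.add(nxt)
            linked_issues(graph, nxt, seen)
    return seen

graph = build_graph(issues)
print(sorted(linked_issues(graph, "RISK-1")))  # → ['TASK-10', 'TASK-11', 'TASK-12']
```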
Epic captures all risks and tasks
Confluence page captures facts
Hyperlinked risks
We use Confluence to view the data
How we handle incidents/events
Task Response is used to capture result
Security organisation as a graph
Each Pillar is
mapped to a
Capability
Each Capability is
mapped to a
Programme
Each Programme is
mapped to a Project
Group Security Projects as Jira Issues
please contribute
Using an open-sourced API to filter Jira data
We are hiring
• Senior Cloud Security Engineer
• Head of Detect 

(Incident Response, Situational Awareness)
• Head of AppSec

(Application Security)
• Head of InfoSec

(Information Security)
Opportunity to join our team
Can you crack this puzzle?
(ask for a card)
Thanks, any questions?
@diniscruz
dinis.cruz@owasp.org
Take a look at the
OWASP O2 Platform
.Net REPL on Steroids 

(great IDE for mini-tool generation)
(needs some love)
SURROGATE DEPENDENCIES
• It records the API's responses and replays them
– Use integration tests to ‘lock’ the API used
– Save responses in JSON format
– Replay data to client
• Allow client to be offline
What is it?
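A minimal sketch of a surrogate dependency in Python, assuming a stand-in `fetch` function rather than a real API: record responses as JSON files once, then replay them offline.

```python
import hashlib
import json
import pathlib

# Minimal sketch of a surrogate dependency: record real API
# responses as JSON files, then replay them so the client can
# run offline.
STORE = pathlib.Path("surrogate-store")

def _key(url):
    """Stable filename for a URL."""
    return hashlib.sha256(url.encode()).hexdigest() + ".json"

def record(url, fetch):
    """Call the real API via `fetch` and save the response as JSON."""
    STORE.mkdir(exist_ok=True)
    data = fetch(url)
    (STORE / _key(url)).write_text(json.dumps(data))
    return data

def replay(url):
    """Serve the stored response; the real API is never contacted."""
    return json.loads((STORE / _key(url)).read_text())

# Usage: record online once, then replay offline
live = record("https://api.example.com/user/1",
              fetch=lambda url: {"id": 1, "name": "alice"})
assert replay("https://api.example.com/user/1") == live
```

Storing the JSON files in a Git repo, as the diagrams suggest, also gives a reviewable history of how the API's responses change over time.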
Locking the API using tests
(diagram: A ‘client’ → Network → API; integration tests record
responses into a Git repo with data stored as JSON files)
Replay stored JSON
(diagram: A ‘client’ → Network → Surrogate Dependency → Git repo
with data stored as JSON files; modify data (optional); the real
API is bypassed; Client/app is running Offline!)
Adding security tests (to server)
(diagram: A ‘client’ → Network → API; integration tests plus a Git
repo with data stored as JSON files; insert payloads here to
attack the server)
Adding Security Tests (from server)
(diagram: Git repo with data stored as JSON files → Surrogate
Dependency → Network → A ‘client’; modify data (optional); insert
payloads here to attack the client, from the server)
What kind of issues can be found this way?
- XSS
- SQL Injection
- CSRF (to server)
- DoS
- Stealing session tokens
Once you know where the client is vulnerable
Once you know which

data received from the
server will exploit the client
You ‘ask’ the API 

where did 

that data 

come from?
(diagram: A ‘client’ → Network → API)
… and follow the rabbit holes
Which might lead to 

an external source

(i.e. attacker)
(flow: request for xyz url (GET, POST, PUT) → in cache?
– yes: load data from cache
– no: load data from real service, then save data to cache (a Git
repo with data stored as JSON files)
→ modify data (optional) → send data to user; a ‘client’ with proxy)
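The cache flow above can be sketched as follows, with an in-memory dict standing in for the Git repo of JSON files:

```python
# Sketch of the proxy's cache decision flow.
cache = {}

def proxy(url, fetch, modify=None):
    """Serve from cache if present; otherwise load from the real
    service and save to cache. Optionally modify data before replying."""
    if url in cache:          # in cache? yes
        data = cache[url]     # load data from cache
    else:                     # no
        data = fetch(url)     # load data from real service
        cache[url] = data     # save data to cache
    if modify:                # modify data (optional)
        data = modify(data)
    return data               # send data to user

calls = []
fetch = lambda url: calls.append(url) or {"url": url}
proxy("/a", fetch)
proxy("/a", fetch)  # second call is served from cache
print(len(calls))   # → 1
```

The optional `modify` hook is where the payload-insertion slides above plug in: the same proxy that enables offline runs can mutate responses to attack the client.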
Using Threat Models to ‘Lock’ the
Brief
• Key challenge for developers (and project managers) is
the constant flow of business changes/requests
• Some look simple, but have major implications
– usually because they don’t fit in the current architecture
• This is where Threat Models help, since those ‘new
features’ will require a revisit of existing Threat
Models
– Putting a gentle (positive) ‘brake’ on the current business
requests
– Which means that the original brief is ‘locked’
Use Threat Models to control business
• Threat models should be set up as sources of truth
• There should be a requirement to do a Threat
Model for every app and feature
• Not doing Threat Models means that the security
implications of the ‘new feature’ have not been
considered and documented
• This is easier to put in place than a requirement to
have ‘up-to-date documentation’ and ‘diagrams
that represent the real world’
Sources of truth
• Creating and following a threat model for a feature is a great
way to understand a threat model journey:
– First, take a very specific path, a very specific new feature that you are
adding, or take a property, such as a new field, or a new functionality.
– Next, you want to create a full flow of that feature. Look at the entry
point and the assets, and look at what is being used in that feature.
– Now, you can map the inputs of the feature; you can map the data
paths given by the data schema, and then you follow the data.
• You can see, for example, how the data goes into the application,
where it ends up, and who calls whom.
• This means you have a much tighter brief, and a much better
view of the situation.
Threat Model per Feature
• When you create threat models per feature or per component, a key element
is to start to chain them (i.e. map the connections between them)
– You will be able to identify uber-vulnerabilities, or uber-threats, that are created by
paths that exist from threat model A to threat model B, to threat model C.
• For example, I have seen threat models where one will say, "Oh, we get data
from that system over there. We trust their system, and they are supposed to have
DoS protections, and they rate limit their requests".
• However, after doing a threat model of that system, we find that it does not
have any DoS protections; even worse, it doesn't do any data validation/
sanitisation.
• This means that the upstream service (which is ‘trusted’) is just a glorified proxy:
– meaning that, for all practical purposes, the ‘internal’ APIs and endpoints are directly
connected to the upstream service's callers (which is usually the internet, or other
‘glorified proxy’ services).
Chained threat models
• A key objective of a pentest should be to validate the threat
model. Pentests should confirm whether the expectations and
the logic defined in the threat model are true.
• Any variation identified is itself an important finding because it
means there is a gap in the company's understanding of how
the application behaves.
• There are three important steps to follow:
– Take the threat models per feature and per layer, and confirm that there
are no blind spots or variations from the expectations
– Check the code path to improve the understanding of the code path
and what is happening in the threat model
– Confirm that there are no extra behaviours
Pentest Confirms Threat Model
• One of the key elements of threat modeling is its ability
to highlight a variety of interesting issues and blind spots,
in particular within the architecture covered by the threat model.
• One of my favourite moments occurs when the
developers and the architects working on a threat model
realise something that they hadn't noticed before.
– In such cases, sometimes it is the developer who says, "Oh, I
never realised that is how it worked!".
– Other times the architect says, "Well, this is how the app was
designed", and the developer responds, "Yeah, but that didn't
work, so we did it like this."
Capture the success stories of your threat models

Scaling security in a cloud environment v0.5 (Sep 2017)
