1. Scaling Security in a Cloud
Environment (@ UK Azure UG)
London, 29th Sep 2016
@DinisCruz
2. Me
• Developer for 28 years
• AppSec for 14 years
• Day job:
• Photobox Group CISO
• Leader of the OWASP O2 Platform
project (a .NET REPL on steroids)
• @DinisCruz
• http://blog.diniscruz.com
• http://leanpub.com/u/
DinisCruz
16. To scale security (and testing), it
is key to execute tests in
real time and see code coverage
https://www.slideshare.net/DinisCruz/start-with-passing-tests-tdd-for-bugs-v05-22-sep-2016
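One way to "execute tests and see code coverage" in a single step is the Python standard library's `trace` module. The function under test and its tests below are illustrative placeholders, not from the talk; this is a minimal sketch of watching which lines actually execute while the tests run:

```python
import trace

# Hypothetical function under test (illustrative only).
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

def run_tests():
    assert classify(-1) == "negative"
    assert classify(3) == "non-negative"

# Execute the tests while counting which lines run.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(run_tests)

# counts maps (filename, line number) -> execution count, i.e. coverage data.
counts = tracer.results().counts
executed = sorted({line for (_fname, line) in counts})
print("executed lines:", executed)
```

In practice a tool such as `coverage.py` gives the same feedback continuously, but the principle is the same: the coverage data is produced by the test run itself, not by a separate manual step.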
17. When creating tests on the ‘Fix’ stage,
the focus (& time allocated) is on
fixing the bug (not on testing it)
When creating tests on the ‘Issue Creation’
stage, the focus (& time allocated) is on
how to test it and what is its root cause
22. • The real power of the cloud
• Who is using this in production?
• How fast do you auto-scale?
– Seconds?
– Minutes?
• Do you also aggressively autoscale down?
Autoscaling
23. • The image on the left shows a set of
rules and behaviours
• How do you test them?
– Manually
– Quasi-Manually
– With Visualisation?
– With Tests?
What about testing your auto-scale?
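The slide's question, "how do you test them?", can be answered "with tests" once the scaling rules are expressed as code. A minimal sketch, assuming hypothetical thresholds and instance limits (none of these numbers come from the talk):

```python
# Hypothetical autoscaling rule; thresholds and limits are illustrative.
def desired_instances(current, cpu_percent, min_n=2, max_n=10):
    """Scale up aggressively under load, and scale down aggressively when idle."""
    if cpu_percent > 70:
        target = current * 2           # double under load
    elif cpu_percent < 20:
        target = max(current // 2, 1)  # halve when idle
    else:
        target = current
    return max(min_n, min(max_n, target))

# The tests encode the expectations, instead of checking them manually.
assert desired_instances(4, 90) == 8    # scales up
assert desired_instances(8, 10) == 4    # also scales down
assert desired_instances(10, 95) == 10  # respects the maximum
assert desired_instances(2, 5) == 2     # respects the minimum
```

With the rules in this form, "do you also aggressively autoscale down?" stops being a question for a human and becomes an assertion that runs on every commit.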
25. DevOps
• In DevOps, the production pipeline is pushed faster and
faster
• Continuous Integration (CI) needs to be tested as an
application
• We need to think of the CI pipeline as a graph
• Write rules (i.e. tests) to validate our expectations
• We need static analysis technology (SAST for CI)
• This will allow us to understand how the pipeline behaves
and interconnects
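The "CI pipeline as a graph, with rules written as tests" idea can be sketched in a few lines. The stage names below are illustrative placeholders, not from the talk; the point is that pipeline expectations become executable assertions:

```python
# Sketch: the CI pipeline as a directed graph (stage names are illustrative).
pipeline = {
    "commit":        ["build"],
    "build":         ["unit-tests"],
    "unit-tests":    ["security-scan"],
    "security-scan": ["deploy"],
    "deploy":        [],
}

def reachable(graph, start):
    """All stages reachable from `start` by following pipeline edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

# Rule 1: every stage is reachable from 'commit' (no orphan stages).
assert reachable(pipeline, "commit") == set(pipeline)

# Rule 2: 'deploy' cannot be reached without passing 'security-scan'.
pruned = {k: [v for v in vs if v != "security-scan"] for k, vs in pipeline.items()}
assert "deploy" not in reachable(pruned, "commit")
```

Rule 2 is the interesting one: it fails the build if anyone wires a path to production that bypasses the security stage, which is exactly the kind of "how does the pipeline interconnect" question the slide raises.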
37. Security organisation as a graph
Each Pillar is
mapped to a
Capability
Each Capability is
mapped to a
Programme
Each Programme is
mapped to a Project
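The Pillar → Capability → Programme → Project chain is itself graph data, so it can be queried and tested like any other graph. The node names below are illustrative placeholders:

```python
# Sketch of the Pillar -> Capability -> Programme -> Project chain as data
# (all node names are illustrative placeholders).
org = {
    "pillar:Detect":               {"capability": "capability:IncidentResponse"},
    "capability:IncidentResponse": {"programme": "programme:SIEM"},
    "programme:SIEM":              {"project": "project:LogPipeline"},
}

def project_for(pillar):
    """Follow the mapping chain from a Pillar down to its Project."""
    capability = org[pillar]["capability"]
    programme = org[capability]["programme"]
    return org[programme]["project"]

assert project_for("pillar:Detect") == "project:LogPipeline"
```

Once the organisation is data, a test can assert that every Pillar actually resolves to a Project, i.e. that no capability is unfunded or unmapped.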
42. • Senior Cloud Security Engineer
• Head of Detect
(Incident Response, Situational Awareness)
• Head of AppSec
(Application Security)
• Head of InfoSec
(Information Security)
Opportunity to join our team
47. • It tests the API and replays responses
– Use integration tests to ‘lock’ the API used
– Save responses in JSON format
– Replay data to the client
• Allows the client to be offline
What is it?
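The "lock the API" idea can be sketched with an integration test that records the live response as a JSON file on first run and compares against it afterwards. `fetch_from_api` and the cache path are illustrative stand-ins for a real HTTP call and a Git-tracked directory:

```python
import json
import pathlib
import tempfile

# Stand-in for a real HTTP call (illustrative only).
def fetch_from_api(path):
    return {"path": path, "user": "alice", "role": "admin"}

# In real life this directory would be a Git repo of JSON files.
store = pathlib.Path(tempfile.mkdtemp())
locked = store / "users_1.json"

live = fetch_from_api("/users/1")
if not locked.exists():
    locked.write_text(json.dumps(live))  # first run: record the contract

# The 'lock': any change in the API's response shape or data fails the test.
assert live == json.loads(locked.read_text())
```

Because the expected responses live in version control, a change to the API shows up as a diff in the JSON files, and reviewing that diff is part of reviewing the change.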
48. Locking the API using tests
[Diagram: a ‘client’ talks over the network to the API; integration tests, also talking over the network to the API, save its responses into a Git repo with the data stored as JSON files]
49. Replay stored JSON
[Diagram: a surrogate dependency replays the JSON files from the Git repo to the ‘client’ over the network, optionally modifying the data; the real API is never called]
Client/app is running offline!
50. Adding security tests (to server)
[Diagram: integration tests replay the JSON files from the Git repo to the API over the network; payloads are inserted here to attack the server]
51. Adding Security Tests (from server)
[Diagram: the surrogate dependency replays the JSON files from the Git repo to the ‘client’ over the network, optionally modifying the data; payloads are inserted here to attack the client (from the server)]
What kind of issues can be found this way?
- XSS
- SQL Injection
- CSRF (to server)
- DoS
- Stealing session tokens
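The "insert payloads here" step amounts to mutating the stored JSON before it is replayed, so the client is exercised with hostile server data. A minimal sketch (the record, field names, and payload are illustrative):

```python
import copy

# Stored JSON record that the surrogate would normally replay verbatim
# (illustrative data).
stored = {"user": "alice", "bio": "hello"}

def with_payload(record, field, payload):
    """Return a copy of the record with an attack payload in one field."""
    attacked = copy.deepcopy(record)
    attacked[field] = payload
    return attacked

# Replay a copy carrying an XSS payload to see how the client renders it.
xss = with_payload(stored, "bio", "<script>alert(1)</script>")
assert xss["bio"].startswith("<script>")
assert stored["bio"] == "hello"  # the original stored data is untouched
```

Working on a copy matters: the Git-tracked JSON stays a clean record of real API behaviour, while each test injects its own payloads at replay time.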
52. Once you know where the client is vulnerable
Once you know which
data received from the
server will exploit the client,
you ‘ask’ the API:
where did
that data
come from?
[Diagram: a ‘client’ talks over the network to the API]
… and follow the rabbit holes,
which might lead to
an external source
(i.e. an attacker)
53. [Diagram: request flow for a ‘client’ with proxy. A request for the xyz url (GET, POST, PUT) is checked against the cache. In cache? Yes: load the data from the cache. No: load the data from the real service and save it to the cache (a Git repo with the data stored as JSON files). Then modify the data (optional) and send the data to the user]
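The flow above can be sketched as a small caching proxy. `real_service`, the cache directory, and the URLs are illustrative stand-ins; a real implementation would sit in front of HTTP traffic and commit the JSON files to Git:

```python
import json
import pathlib
import tempfile

# Stand-in for the real upstream service (illustrative only).
def real_service(url):
    return {"url": url, "status": "ok"}

# In real life: a Git repo with the data stored as JSON files.
cache_dir = pathlib.Path(tempfile.mkdtemp())

def proxy(url, modify=None):
    key = cache_dir / (url.replace("/", "_") + ".json")
    if key.exists():                      # in cache? yes: load from cache
        data = json.loads(key.read_text())
    else:                                 # no: load from the real service
        data = real_service(url)
        key.write_text(json.dumps(data))  # save data to cache
    if modify:                            # modify data (optional)
        data = modify(data)
    return data                           # send data to user

first = proxy("api/items")                         # populates the cache
tampered = proxy("api/items",                      # replayed, then modified
                 modify=lambda d: {**d, "status": "tampered"})
assert first["status"] == "ok"
assert tampered["status"] == "tampered"
```

The optional `modify` hook is where the earlier slides' payload injection plugs in: the same proxy serves clean replays for offline development and hostile replays for security tests.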
55. • A key challenge for developers (and project managers) is
the constant flow of business changes/requests
• Some look simple, but have major implications
– usually because they don’t fit in the current architecture
• This is where Threat Models help, since those ‘new
features’ will require a revisit of existing Threat
Models
– Putting a gentle (positive) ‘brake’ on the current business
requests
– Which means that the original brief is ‘locked’
Use Threat Models to control business
56. • Threat models should be set-up as sources of truth
• There should be a requirement to do a Threat
Model for every app and feature
• Not doing Threat Models means that the security
implications of the ‘new feature’ have not been
considered and documented
• This is easier to put in place than a requirement to
have ‘up-to-date documentation’ and ‘diagrams
that represent the real world’
Sources of truth
57. • Creating and following a threat model for a feature is a great
way to understand a threat model journey:
– First, take a very specific path, a very specific new feature that you are
adding, or take a property, such as a new field, or a new functionality.
– Next, you want to create a full flow of that feature. Look at the entry
point and the assets, and look at what is being used in that feature.
– Now, you can map the inputs of the feature; you can map the data
paths given by the data schema, and then you follow the data.
• You can see, for example, how the data goes into the application,
where it ends up, and who calls whom.
• This means you have a much tighter brief, and a much better
view of the situation.
Threat Model per Feature
58. • When you create threat models per feature or per component, a key element
is to start to chain them (i.e. map the connections between them)
– You will be able to identify uber-vulnerabilities, or uber-threats, that are created by
paths that exist from threat model A to threat model B to threat model C.
• For example, I have seen threat models where one will say, "Oh, we get data
from that over there. We trust their system, and they are supposed to have
DOS protections, and they rate limit their requests".
• However, after doing a threat model of that system, we find that it does not
have any DOS protections, even worse, it doesn't do any data validation/
sanitisation.
• This means that the upstream service (which is ‘trusted’) is just a glorified proxy:
– meaning that, for all practical purposes, the ‘internal’ APIs and endpoints are directly
connected to the upstream service’s callers (which is usually the internet, or other
‘glorified proxy’ services).
Chained threat models
59. • A key objective of a pentest should be to validate the threat
model. Pentests should confirm whether the expectations and
the logic defined in the threat model hold true.
• Any variation identified is itself an important finding because it
means there is a gap in the company's understanding of how
the application behaves.
• There are three important steps to follow:
– Take the threat models per feature and per layer, and confirm that there
are no blind spots or variations from the expectations
– Walk the code paths to improve the understanding of
what is happening in the threat model
– Confirm that there are no extra behaviours
Pentest Confirms Threat Model
60. • One of the key elements of threat modeling is its ability
to highlight a variety of interesting issues and blind spots,
in particular within the architecture captured by the threat model.
• One of my favourite moments occurs when the
developers and the architects working on a threat model
realise something that they hadn't noticed before.
– In such cases, sometimes it is the developer who says, "Oh, I
never realised that is how it worked!".
– Other times the architect says, "Well, this is how the app was
designed", and the developer responds, "Yeah, but that didn’t
work, so we did it like this."
Capture the success stories of your threat models