APIsecure - April 6 & 7, 2022
APIsecure is the world’s first conference dedicated to API threat management, bringing together breakers, defenders, and solutions in API security.
API Security Testing: The Next Step in Modernizing AppSec
Scott Gerlach, Co-Founder, Chief Security Officer at StackHawk
3. AppSec Problem Overview
AppSec = Important, but hard. How do you keep this Tech Debt from piling up?
Static Code Analysis
● Noisy, often lacks Application Context
● Language Dependent (Don’t get me started on IDE support)
Dynamic Code Analysis
● Better at actual app and context, but still somewhat noisy
● Hard to use
RASP, IAST, WAF
● Wait til someone/something else finds it… in Prod
5. Hey! I broke the crap out of your thing. Cool huh!
6. working agreement | [wur-king ə-ˈgrē-mənt ]
Definition Time
1. The purpose of a working agreement is to ensure the Agile Team
shares responsibility in defining expectations for how they will function
together and enhance their self-organization process
2. Working agreements can apply to services, and can even be
documented.
3. You can certainly make a working agreement with the Security Team…
just saying
7. Data Working Agreement
Backend Team (API Team)
Functional Agreements:
● Rate Limiting?
● Standardized Errors?
Data Input:
● Validation?
● Encoding?
● Escaping?
Data Output:
● Validation?
● Encoding?
● Escaping?
● Paging?
Front End Team
Functional Agreements:
● Back off routines?
Data Input:
● Validation?
● Encoding?
● Escaping?
Data Output:
● Validation?
● Encoding?
● Escaping?
8. Data Working Agreement
Backend Team (API Team)
Functional Agreements:
● Rate Limiting
● Standardized Errors
Data Input:
● Validation
● Encoding
● Escaping
Data Output:
● Validation
● Encoding
● Escaping
● Paging
Front End Team
Functional Agreements:
● Back off routines
Data Input:
● Validation
● Encoding
● Escaping
Data Output:
● Validation
● Encoding
● Escaping
YES, THAT!
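As a sketch of what the agreed-on “how” can look like in code (the function names and the username format here are made up for illustration, not from the talk), input validation rejects anything outside the agreed shape, and output encoding happens before the data is rendered:

```python
import html
import re

# The agreed username format is an assumption for this sketch.
USERNAME_RE = re.compile(r"[A-Za-z0-9_-]{3,32}")

def validate_username(value: str) -> str:
    """Input validation: reject anything outside the agreed shape."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("username does not match the agreed format")
    return value

def render_comment(comment: str) -> str:
    """Output encoding: HTML-escape untrusted data before rendering."""
    return html.escape(comment)
```

For example, `render_comment("<script>")` returns `"&lt;script&gt;"`, so injected markup renders as text instead of executing. The working agreement decides which team owns each of these steps.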
10. Hey! I broke the crap out of your thing. Cool huh!
FIX ALL THE THINGS!
I think we’ve got a
SQL Injection here
11. Security Websters
Broken Object Level Authorization
Tenancy Filtering
APIs tend to expose endpoints that handle object identifiers, creating a wide attack surface
Level Access Control issue. Object level authorization checks should be considered in every
function that accesses a data source using an input from the user.
Customer A shouldn’t be able to get to Customer B’s data
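As a minimal sketch of what that check looks like in code (the in-memory store, names, and exception type are hypothetical, not from the talk), every function that loads an object by a user-supplied identifier verifies ownership first:

```python
# Hypothetical in-memory store; in a real service this would be a
# database lookup keyed by the user-supplied identifier.
ORDERS = {
    "ord-1": {"tenant_id": "customer-a", "total": 42},
    "ord-2": {"tenant_id": "customer-b", "total": 99},
}

class AuthorizationError(Exception):
    pass

def get_order(order_id: str, requesting_tenant: str) -> dict:
    """Object-level authorization: tenancy filter on every access."""
    order = ORDERS.get(order_id)
    if order is None:
        raise KeyError(order_id)
    if order["tenant_id"] != requesting_tenant:
        # Customer A asking for Customer B's object: deny, don't leak.
        raise AuthorizationError("object does not belong to this tenant")
    return order
```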
12. Let’s Teach Them AppSec
If they know how attackers think, they’ll be able to test like an attacker - Hack Yourself!
● Here’s 11ty Billion new Acronyms to learn
● Also, let’s talk about risk
● But wait before that, do you know the Internet is a bad
place?
● If you have sent any of your Devs to a Security Training
program, who usually gets selected?
13. “We Need to Model Out a Price Increase”
Have you ever seen the FP&A team teach the basics of accounting to the Exec Team?
16. Examining the Production-Bias: People
Primary Value: These groups are very focused on the “finding” of vulnerabilities/security bugs. MOAR
findings = MOAR better.
The Security Team: Production is where they know the app the best
Pen Tester: Production is their only point of access
Repercussions…
● More focused on the numbers of things found, than finding and fixing the right things
● Inefficient — the “finders” are not the “fixers”
● Reinforces an adversarial relationship — “Hey look, I broke your stuff”
*Assuming you have a security team
17. Security is either a blocker or “playing catch up”
DEV OPS
Examining the Production-Bias: Timing
18. Production Bias - Pants Problem
Pants `R’ Us
GET /rest/api/v1/listPants
Returns list of pantIds
GET /rest/api/v1/{pantId}/details
Returns details about a pair of pants, size, color, stock
19. Production Bias - Pants Problem
GET /rest/api/v1/listPants
Returns list of pantIds
GET /rest/api/v1/{pantId}/details
Returns details about a pair of pants, size, color, stock
HUNDREDS OF STYLES AND SIZES WAITING FOR YOU!
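The two pants endpoints above could be captured in an OpenAPI document like this sketch (the title, schemas, and response shapes are illustrative); a scanner that reads the spec can test exactly two operations instead of crawling and guessing:

```yaml
openapi: 3.0.3
info:
  title: Pants `R' Us API (illustrative)
  version: "1.0"
paths:
  /rest/api/v1/listPants:
    get:
      summary: Returns the list of pantIds
      responses:
        "200":
          description: Array of pant identifiers
  /rest/api/v1/{pantId}/details:
    get:
      summary: Returns size, color, and stock for one pair of pants
      parameters:
        - name: pantId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Pant details
```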
23. How Test-Driven Security Should Work
When a team writes code, they know the syntax
is wrong when it won’t compile.
When a team merges code they know there is a
problem when it doesn’t merge.
When a team runs unit tests, they know the
code is wrong when it fails the unit test.
When a team runs integration tests, they
know the code is wrong when it doesn’t work
as designed.
When a team introduces a
vulnerability, they know when it
fails a security test.
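A security test can live in the same suite as the unit tests. As an illustrative sketch (the schema and `find_user` function are invented for this example, not from the talk), this test fails the moment someone swaps the parameterized query for string concatenation:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, name: str) -> list:
    # Parameterized query: the payload is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

def test_injection_payload_returns_nothing():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    # The classic payload matches no real username, so a safe
    # implementation returns no rows; a concatenating one returns all.
    assert find_user(conn, "alice' OR '1'='1") == []
    assert find_user(conn, "alice") == [(1,)]

test_injection_payload_returns_nothing()
```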
24. DEV OPS
Right Time: Pre-Production
Instrumenting Security Tests into CI/CD
gives engineers immediate feedback.
Adding the ability to test locally allows for
quick iteration in the fix-test loop if a new
bug is identified.
Local Dev & CI/CD
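A sketch of what that instrumentation can look like in a pipeline. This hypothetical GitHub Actions job (the service name, compose setup, and scan script are placeholders for whatever DAST tool you use) runs a scan against a pre-production instance on every pull request:

```yaml
name: api-security-test
on: [pull_request]
jobs:
  dast-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Start the service under test
        run: docker compose up -d api
      - name: Run DAST scan against the pre-prod instance
        run: ./scripts/run-dast-scan.sh http://localhost:8080
```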
25. ● Set up working agreements across
teams/apps/departments
● Create Standards documentation automatically
(OpenAPI/Introspection)
● If you are in security shopping for AppSec tools, BRING A
DEV with you!
● Seed the database! Test in Pre-Prod!
● Understand these two things deeply
○ Object Level Authorization (ie Tenancy Filtering)
○ Function Level Authorization (ie Admin API access)
Cover Your Bases
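The slide above calls out Function Level Authorization (admin API access). A minimal sketch of that check (the decorator, handler, and role names are hypothetical): the admin handler verifies the caller’s role, rather than relying on the path being hard to guess:

```python
from functools import wraps

def require_role(role):
    """Function-level authorization: guard admin handlers by role."""
    def decorator(handler):
        @wraps(handler)
        def wrapped(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                # A documented path with no role check is an open admin API.
                raise PermissionError(f"requires role: {role}")
            return handler(user, *args, **kwargs)
        return wrapped
    return decorator

@require_role("admin")
def purge_inventory(user):
    return "purged"
```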
26.
27. ● Engage a project team and their pipeline
● Choose AN app or service to start
● Choose a technology (SCA, DAST)
● Iterate and expand
Just Start!
Working agreements just don’t exist between a lot of teams. Let’s dig in a bit more
Working agreements just don’t exist between a lot of teams. Our friends here don’t seem to buy into what we are talking about.
Working agreements are good Mr. Lamar! They can help you keep things straight between teams.
Perhaps you’ve not heard of them, or maybe don’t understand how they relate to API security? Let’s dig into them a bit more.
Generally a working agreement is a contract or an understanding between two or more parties. This really came up with Agile and Scrum, or Product Delivery Teams.
Who is in charge of all of these things?
At what point should the front end be encoding data?
If the front end encodes the data, does the back end need to worry about it at all? Don’t forget, that API is probably public.
Who’s to say the Front End is the only thing accessing the API?
Data Shape / Data Contracts
Yes, we should be doing all of these things. HOW you do them is just as important as identifying the need to do them, and that’s what should go into the working agreement.
The hard truth is, you can never hire as many AppSec people as the organization can hire developers. The Security team often makes this whole thing a lot harder in the name of “accountability”, but really what they are doing at this point is making people go slower or causing interruptions, because they can’t scale with the business.
This is what a lot of security tools look like and in fact almost all of the AppSec tools look like this.
If an engineer gained access to a tool that looked like this, they’d probably close it pretty quick OR start making fun of how it was developed. Neither is what you want them to do.
Built in security person language
A developers job is not to learn all of this new stuff, but they have to know how to protect against it and or prevent it.
These are basically the same thing. Obviously there could be more to Tenancy Filtering, but so many of these definitions are so broadly described that it can be really hard to apply them to a real-world scenario.
This is at best misguided and at worst continues to drive division between Dev and Security teams. “You don’t know how to do your job, but we can teach you how to do ours...”
It’s the equivalent of accounting saying to leadership, let’s teach you about the GL.
Because they are built for the security team they inherit another problem
The people that do testing today and the context under which they understand the thing they are testing
As companies are rapidly shipping code to production, security is not baked into this workflow. (Either you’re not rapidly shipping (in which case appsec processes act as a blocker), or the security team is playing catch up)
If the security team is doing release approval, they are acting as blockers.
AppSec tools that run in production are often used infrequently, in <some duration> after a release, and are just telling you about the bugs you already released to production.
There’s a huge problem with this methodology
Here’s an example API. It’s used to display pant details in an online shop. We don’t just sell one kind of pants, though.
We sell LOTS of pants.
Most tools that are used to scan APIs and Websites like this don’t understand them, and turn 2 simple API calls into HUNDREDS that can end up taking hours to complete.
Sometimes they do this because we aren’t generating standard specs like OpenAPI Spec, and sometimes it’s because the scanner just doesn’t use the specs.
I mean like a LOT! of pants.
The process is so frustrating for software engineers. The security team runs infrequent scans of your code that is already in production.
They then engage in a bunch of ticket shuffling, trying to find the engineering team that wrote the code or can otherwise fix the issue.
That team has long since moved on to other engineering work (business value) and they have conflicting priorities - and often security tickets lack the business context as to why they’re important to fix over current sprint work. As a product person you’re fighting for roadmap delivery and meeting customer commitments.
May intentionally ship to production making a risk based decision…
There will be lots of times that we will intentionally ship security bugs to production, but the intentionality is the important thing here. This should be done eyes wide open and be a risk based decision. You might know that exposure is limited and it will be fixed in the next sprint. But production should not be the first place that you are checking *if* there are any bugs. <- PREACH!
To combat this trust issue, security teams often come up with a great new idea.
We (security), also have this nasty habit. Often we think of eliminating ALL risk - patch EVERYTHING, don’t do ANYTHING in the cloud, etc.
Businesses exist to take risk. That risk is to provide solutions to customer problems. Heck, even thinking you can solve a customer’s problem is a risk.
You need measured and informed risk to operate; security is no different.
As a CTO, VP of Engineering, Engineering Manager, or even a Developer, perhaps you’ve been tasked with “Security”. That can lead to one of these things in your head.
It’s easier to start than you think, and we’ll go over that at the end.
And one other, that I know of for sure. Wink wink
A check for security bugs in production is inefficient. Engineers have moved on to other sprint tasks and fixing involves context switching.
Scanning www. or app. in production makes it difficult to identify the app or service affected, and lacks context about the specific data handled by that service. You end up with ticket shuffling: trying to identify the service affected, then finding the team who owns it.
Focusing on the number of bugs found and the % fixed over time ignores the business context of the findings and the trade-off decisions around business value generation.
Might be some good data here about time to fix from other analogs (e.g., unit testing or integration testing)?
The findings often lack business context - How important is this thing to the business?
Should we be fixing ALL of the bugs on an internal application or going fast on that?
How should we think about our apps and the data they handle?
Instrumenting Security Tests into CI/CD gives feedback immediately. Adding the ability to test locally allows engineers to quickly iterate the fix-test loop when a new bug is identified.
would add something about feedback loops - CI/CD gives feedback immediately and is configured to run with merge/PR etc.; weekly scanning is not that at all… fewer security bugs make it to production
Our customers check for security bugs on every merge. And have the ability to quickly test locally when troubleshooting a fix.
You can test while you’re writing code, and test while you’re building code… and security tools should play well in these phases of development.
Working agreements make most of this stuff easier. Knowing what the other team is going to do takes a lot of the guesswork out of your daily job.
Having code create documentation for you is your BEST friend. I’ve spoken with so many companies that have REST APIs and no API specification. When I ask them how people in the organization figure out how to integrate with those services, the answers range from internal wikis to reading the code base of the API… None of that is efficient, AND it makes testing really hard.
Testing in Prod has a lot of drawbacks. If you need to test in prod, it should be the triple double stamp, not the only stamp. Test early, test often, test with seed data.
The best tool you can buy is one the dev team will use and like (let’s not stretch here and say love). Bring a lead dev, a dev experience person, or a dev manager with you to do evaluations. Printing outstanding issues to PDF is not doing you any good. Measuring Time to Close in months is, well, it’s a waste of time…
Security tools just don’t understand these things, and they are REALLY hard to generically test for. Write tests for these issues: make sure customer A can’t see customer B’s data, and make sure you can’t just willy-nilly get to the admin section of the API by guessing a path (because you documented the path, remember, it’s not secret).
Warning: You may need a few therapy sessions to break down the lack of trust between these teams
Define each appsec tool - high level of how it works
Options: open source and commercial
I feel like there could be something here that drills home the how.
Auto check on every merge
Visibility in developer tooling (e.g., Slack)
Reproducibility that developers can use themselves.
It should all live with the developers so they can self serve. This democratizes security.