Security testing tools are only as good as the humans who use them. Learn how to turn an automated security effort into an effective security assessment.
2. About Security Innovation
• Authority in Software Security
• 15+ years research on software vulnerabilities
• Security testing methodology adopted by SAP,
Symantec, Microsoft and McAfee
• Authors of 18 books
• Helping organizations minimize risk
• Assessment: Show me the gaps
• Education: Guide me to the right decisions
• Standards: Set goals and make it easy and natural
• Tech-enabled services for both breadth and depth
• Gartner MQ Leader for security computer-based training
3. Agenda
Attacker Perspective
• When/where to use tools vs. manual testing methods
• Focusing on hot spots and "seeing" clues that tools cannot
• Putting it together to reduce risk
4. The World Has Changed
• Attackers are increasingly sophisticated
• Increasing number of skilled attackers
• Attacks are targeted
• Consequences are higher
• Attackers are:
• Criminal organizations
• Government sponsored (or employed)
• Activist/personal motivation
5. The Costs Keep Piling Up
• Monetary
• Average cost of a data breach in the US is $6.5 million*
• Dealing with phishing attacks costs the average 10,000-employee company $3.77
million a year*
• Phishing is the 2nd most common attack vector**
• Average cost per record $217, and $259 in financial sector*
• Reputational
• 69% of consumers would be less inclined to do business
with a breached organization**
• The FTC is involved...
• Court says that FTC can hit companies with fines for failing to protect consumer
information
*Ponemon 2015 ** Verizon 2015
6. Not Much of a Deterrent
• Approximately 5% of cyber criminals are caught*
• Although a 5% prosecution rate is troubling, what’s worse is that only 2%
of all intrusions ever reach the attention of law enforcement
• Not surprisingly, a 5% conviction rate does not lend itself to meaningful
deterrence
*McAfee ** Georgetown Law Journal
7. Attackers & Motivations
Classic Actors
• Individual Hackers: Exploration, Notoriety
• Criminals: Money
The New School
• State Sponsored: Espionage, Warfare
• Political/Hacktivists: Disruption/Embarrassment, Intelligence
8. How Did We Get Here?
• Software has grown up in a trusting, insecure world
• Systems have historically been built to share data and facilitate collaboration
• In the early days, trust was (safely) assumed
• Software developers got used to this trust
• Universities didn’t bother teaching security
• As an industry we…
• …know too little
• …trust too much
• …have too much faith in boundary defenses
9. Opposing Goals
• Security vs. Features
• More features == more bugs
• Security vs. Reliability
• Writing error handling code means more chances of
introducing security bugs
• When error handlers fail, the app fails…who then
is in charge of security?
• Security vs. Performance
• Security code slows apps down
• Security vs. Usability
• Running every process as admin/root sure is convenient for the user
• Security options are confusing and awkward
10. Solutions
• Integrate security into the entire development process:
• Beginning at Requirements
• Continuing in Design
• Implemented in Coding
• Verified in Testing
• Response Processes in Place
• Knowledge and training are the first step
• Tools and automation are a lever on your security expertise
11. Agenda
• Attacker Perspective
When/where to use tools vs. manual testing methods
• Focusing on hot spots and "seeing" clues that tools cannot
• Putting it together to reduce risk
12. Generally Accepted Realities
• Given the absence of time constraints, manual testing will discover most
defects; automated testing will find a varying percentage
• Automation will find known and common vulnerabilities faster than humans
• It is difficult for automated tools to uncover compound vulnerabilities and
defects related to the business functionality
• Automation produces false positives; humans rarely do
13. Partners in Crime: Humans and Robots
• Tools, like humans, excel at certain tasks
• Not all applications need a deep pen test, and many need to go
“beyond the scan”
• Both play a critical role because they find different types of
vulnerabilities
• The challenge is finding the optimal balance to achieve breadth and
depth of coverage in the most efficient manner
14. Power of Automation
• Finding low hanging fruit
• Lowering skills barrier to testing
• Identifying weak spots where an experienced tester can focus deeper,
more specialized attacks
• Recognizing a pattern and applying similar attacks
• E.g., if a scanner recognizes a SQL database error message, it may try variants of SQL
injection attacks (sketched below)
• Comparing strings to cover version checks, error messages, configuration issues,
TLS cipher suites and options, HTTP server verbs, etc.
• Cross-Site Request Forgery (CSRF) checks that require sending similar requests with a
few headers changed based on authorization rules
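Below is a minimal sketch of the pattern-based SQL error check mentioned above: send a few injection probes to a parameter and look for well-known database error signatures in the response. The URL, parameter name, probe strings, and error signatures are illustrative assumptions, not details from the original example.

# A scanner-style check: send simple probes and flag responses that echo
# known SQL error messages. All target details below are hypothetical.
import requests

SQL_ERROR_SIGNATURES = [
    "You have an error in your SQL syntax",                # MySQL
    "unclosed quotation mark after the character string",  # SQL Server
    "ORA-00933",                                            # Oracle
    "PG::SyntaxError",                                      # PostgreSQL
]
PROBES = ["'", "\"", "' OR '1'='1", "1;--"]

def probe_param(url, param, baseline_params):
    """Send injection probes for one parameter and collect any error-signature hits."""
    findings = []
    for payload in PROBES:
        params = dict(baseline_params, **{param: payload})
        resp = requests.get(url, params=params, timeout=10)
        for sig in SQL_ERROR_SIGNATURES:
            if sig in resp.text:
                findings.append((param, payload, sig))
    return findings

# Example (hypothetical target):
# print(probe_param("https://example.test/search", "q", {"q": "widgets"}))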
15. Challenges with SAST/DAST Tools
• Missed coverage due to:
• Constraints such as authentication for role-sensitive applications
• Use of technology that eludes “spidering” or “crawling” in web applications
• The result is often an incompletely enumerated attack surface
• Logic is difficult to program into a linear automated tool
• Whereas a human can understand business use cases and alter tests
• False positives can be time consuming to deal with
• Findings often lack business/risk context
• Complacency – people tend to trust coverage and reports
• Often limited to Web applications
16. The Human Value-Add
• Imaginative
• In the absence of code, we need to make assumptions as to how the technology is
commonly used, patterns people follow, what is running the back end
• Observant
• Ability to spot clues, things that are out of place, or things that have changed (e.g., URLs,
content types)
• Logical
• Can evaluate error messages from tools, encodings, page layouts, load time, etc
• Accurate
• Virtually no false positives and contextual vulnerability risk rating
• Specialized
• Not limited to Web applications
• Can identify root cause and design flaws
17. In Practice: Value of Intuition
• During automated testing of a CRM application, the results did not reveal any
session management weaknesses
• A bit surprising, as scanners are usually good at this
• Manual testing discovered a Direct Object Access (DOA) vulnerability (a probe sketch follows below)
• When this previously missing parameter was added, the user was given access
to resources that were previously unavailable
• Further manual testing discovered that any tampering with those parameters
resulted in server sessions being terminated
Tools can be programmed to perform complicated checks, but not
intuition-driven checks, which often lead to the discovery of critical
vulnerabilities
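A hedged sketch of the manual follow-up a tester might script once the missing parameter is spotted: re-send the request with other object identifiers under a low-privilege session and see which records come back. The endpoint, parameter name, and session cookie are hypothetical placeholders.

# Re-send a request with other object IDs under a low-privilege session and
# record which objects are retrievable. Endpoint, parameter, and cookie are placeholders.
import requests

session = requests.Session()
session.cookies.set("SESSIONID", "low-privilege-user-session")  # placeholder value

def check_direct_object_access(url, id_param, candidate_ids):
    """Flag object IDs that a low-privilege session can retrieve directly."""
    accessible = []
    for obj_id in candidate_ids:
        resp = session.get(url, params={id_param: obj_id}, timeout=10)
        # A 200 with record content (rather than a 403 or redirect) suggests the
        # server is not enforcing per-object authorization.
        if resp.status_code == 200 and "Access denied" not in resp.text:
            accessible.append(obj_id)
    return accessible

# Example (hypothetical endpoint):
# check_direct_object_access("https://crm.example.test/record", "recordId", range(1000, 1010))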
18. In Practice: Chaining Vulnerabilities
• An example from our testing:
• Automation is good at finding simple file upload
vulnerabilities as well as cross site scripting problems
• In this case the automation found nothing, since there
was code to block malicious uploads
• Manual testing discovered a problem in this security code
• Clever testing allowed a malicious upload and uncovered an XSS problem
• It required a very specific malicious file: PNG file headers, an .html file extension,
and a malicious JavaScript payload embedded within the image (a benign construction sketch follows below)
• This file was used to run script code on the server
Automation is not capable of this type of testing, but your attackers are!
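For illustration only, a hedged sketch of how such a probe file could be constructed: valid PNG magic bytes followed by a harmless script payload, saved with an .html extension. The payload here is a benign alert used to verify whether upload filtering relies on file headers alone; the actual file from the engagement is not reproduced.

# Build a probe file: PNG magic bytes, then a benign HTML/script payload,
# written with an .html extension. Filename and payload are illustrative.
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"
payload = b"<html><script>alert('upload filter bypassed')</script></html>"

with open("probe.html", "wb") as f:
    f.write(PNG_MAGIC + payload)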
19. The Challenges with Manual Testing
• More expensive and time consuming
• Requires highly skilled testers
• Tests are not reusable, though patterns are
• Inconsistency from tester to tester
20. Agenda
• Attacker Perspective
• When/where to use tools vs. manual testing methods
Focusing on hot spots and "seeing" clues that tools cannot
• Putting it together to reduce risk
21. What Are Hot Spots?
• Areas that provide disproportionate return for your effort
• Security defects tend to cluster in areas
• Some areas of the code have more security impact than others
• This combination is a Hot Spot
• Focus your efforts
• If you know what you’re looking for, you will find more vulnerabilities
• Unguided testing/review is a waste of time
• Use Hot Spots as a heat map to guide security design, code and deployment
inspections
Hot Spots are the best places to apply human effort
22. Reflecting on Security Hot Spots
• You can use hot spots and common vulnerabilities to share:
• Principles, patterns, and practices
• Knowledge around threats, attacks, vulnerabilities, and
countermeasures
• To keep them relevant and effective, consider the following questions
• How can you improve security results in your organization?
• How can you organize your bodies of knowledge?
• How can you improve sharing patterns, anti-patterns, and checklists?
• How can you tune and prune your security inspections?
23. Agenda
• Attacker Perspective
• When/where to use tools vs. manual testing methods
• Focusing on hot spots and "seeing" clues that tools cannot
Putting it together to reduce risk
24. AppSec Program Goals
• Effective Vulnerability Management
• Regular, iterative testing ensures continually improving test results and catches
vulnerabilities more quickly
• Measure risk of each application through vulnerability discovery & remediation metrics
• Discover trends and weaknesses that can be used to improve the overall AppSec and
secure coding program through standards and training
• Optimized Frequency and Depth of Testing
• Match level of testing and analysis to application criticality
• Ensure high-risk applications get more attention and low-risk ones are not over-tested
• Cost Management
• Predictable cost
• Investment matched to level of risk
25. Best Practices for Assessments
• Use automated tools for heavy lifting
• Find common and known vulnerabilities faster than humans
• Adopt when you have the skills to use properly
• Understand what was found: the security implication is not always obvious
• Be sure they are integrated into SDLC and used at key checkpoints
• Complement with manual efforts
• Necessary to find deeply rooted and elusive vulnerabilities that tools cannot
• Be sure to leverage a threat model to focus on high-risk areas
• Support vulnerability remediation
• Problem isn’t solved when found, only when corrected properly
• Match test efforts with your organization’s ability to remediate
26. Take a Risk-Based Approach
• Conventional approaches to application security are not risk-based
• Typically no more than automated scanning that looks for some pre-determined
set of common vulnerabilities
• Frequently fail to address each application’s unique code-, system- and
workflow-level vulnerabilities
• Provide little practical guidance on prioritizing defect remediation
• Do not yield a roadmap to guide enterprise AppSec posture improvements
• The majority of application security programs focus on:*
• Automated security testing during development (41%)
• Secure coding standards that are adhered to (32%)
• A secure SDLC process improvement plan (30%)
*”The State of Application Security Maturity” – Ponemon Institute & Security Innovation, 2013
27. Why Risk-Rank?
• Helps your organization to
• Quantitatively categorize application assets (a scoring sketch follows below)
• Plan assessment and mitigation activities cost-effectively
• Ensure prioritization is based on real business risk
• Inappropriate security assessments are costly
• Deep inspection on all applications is neither feasible nor necessary
• Spending time on a low-priority application while a high-risk one remains
vulnerable can be devastating (data breach, DDoS, etc.)
• Helps gauge the security maturity level of your teams
• Enables risk-based decisions for managing deployed applications
• Remove, replace, take off-line, implement compensating controls
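A minimal sketch of one way to quantitatively risk-rank an application portfolio, assuming a simple weighted-factor model. The factors, weights, applications, and 1-to-5 ratings are hypothetical placeholders; substitute the criteria your organization actually uses.

# Weighted risk ranking for an application portfolio. Factors, weights,
# applications, and ratings are hypothetical placeholders.
FACTORS = {                      # weights must sum to 1.0
    "data_sensitivity": 0.35,
    "internet_exposure": 0.25,
    "user_base_size": 0.20,
    "compliance_impact": 0.20,
}

def risk_score(ratings):
    """ratings: factor -> value on a 1 (low) to 5 (high) scale."""
    return sum(FACTORS[f] * ratings[f] for f in FACTORS)

apps = {
    "customer_portal": {"data_sensitivity": 5, "internet_exposure": 5,
                        "user_base_size": 4, "compliance_impact": 5},
    "internal_wiki":   {"data_sensitivity": 2, "internet_exposure": 1,
                        "user_base_size": 3, "compliance_impact": 1},
}

for name, ratings in sorted(apps.items(), key=lambda kv: -risk_score(kv[1])):
    print(f"{name}: {risk_score(ratings):.2f}")   # highest score = assess first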
28. Analyze Root Cause
• Watch for duplicates
• Dozens of vulnerabilities can result from a single root cause that creates multiple
exploit paths
• Static Analysis tools can help identify these root causes
• Determine root cause and likelihood of an exploit
• Requires some manual code review skills to do dataflow/control flow analysis
• You can use DREAD to evaluate severity (a scoring sketch follows below)
• Damage Potential
• Reproducibility
• Exploitability
• Affected Users
• Discoverability
• Review trend information to determine whether the vulnerability has existed before and
what actions were taken to reduce or eliminate it
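A small sketch of how DREAD ratings can be turned into a comparable severity score, assuming the common convention of rating each factor on a 1-to-10 scale and averaging. The two sample findings and their ratings are illustrative only.

# Average the five DREAD factor ratings into a single severity score.
# The sample findings and their 1-10 ratings are illustrative.
DREAD_FACTORS = ["damage_potential", "reproducibility", "exploitability",
                 "affected_users", "discoverability"]

def dread_score(ratings):
    return sum(ratings[f] for f in DREAD_FACTORS) / len(DREAD_FACTORS)

sql_injection = {"damage_potential": 9, "reproducibility": 9, "exploitability": 8,
                 "affected_users": 9, "discoverability": 7}
verbose_error = {"damage_potential": 3, "reproducibility": 10, "exploitability": 5,
                 "affected_users": 4, "discoverability": 9}

print("SQL injection:", dread_score(sql_injection))   # 8.4 -> remediate first
print("Verbose error:", dread_score(verbose_error))   # 6.2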
29. Hands-on Security Code Reviews
• Manual Code Review complements automated scanning
• Should be part of your normal SDLC practices
• The use of manual reviews vs. automated tools shouldn't be an either-or
proposition
• Static testing tools can find common problems a lot faster than humans
• Only humans can find design-level issues like poor identity-verification questions,
business logic attacks, and compound vulnerabilities
30. No Scanner is 100% effective ‘out of the box’
• Fine-tuning scans to reduce false positives can shorten time spent chasing
“fake vulnerabilities”
• When a human recognizes a certain pattern or input that causes
misbehavior, automation can be leveraged to check for that particular
input/pattern across a much broader surface area quickly (sketched below)
• Need to:
• Customize and fine-tune
• Integrate into the SDLC at key checkpoints
• Complement with manual reviews
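A hedged sketch of that hand-off from human to automation: once a tester finds a specific input that triggers misbehavior, a short script can replay it across every known endpoint and parameter. The endpoints, trigger input, and misbehavior marker are hypothetical placeholders.

# Replay a tester-discovered trigger input across every known endpoint and
# parameter. Endpoints, trigger input, and marker string are placeholders.
import requests

TRIGGER_INPUT = "%00../../etc/passwd"      # input a tester found to cause misbehavior
MISBEHAVIOR_MARKER = "root:x:0:0"           # evidence the server mishandled it

ENDPOINTS = [
    ("https://app.example.test/download", "file"),
    ("https://app.example.test/report", "template"),
    ("https://app.example.test/export", "path"),
]

def sweep():
    hits = []
    for url, param in ENDPOINTS:
        resp = requests.get(url, params={param: TRIGGER_INPUT}, timeout=10)
        if MISBEHAVIOR_MARKER in resp.text:
            hits.append((url, param))
    return hits

if __name__ == "__main__":
    for url, param in sweep():
        print(f"Pattern reproduced at {url} ({param})")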
31. Conclusion
• Automation provides a fast method for flagging common vulnerabilities
• However, it yields only partial code coverage, cannot detect certain vulnerabilities,
and can lose efficiency gains with lots of false positives
• Humans can leverage expertise and creativity to hunt down unknown, complex
and stealth-like vulnerabilities
• However, this can be time consuming so focus on critical areas
• Different tools will find different vulnerabilities in different types of applications
• Ensure your teams know what their tools are capable of
• Leveraging automation early on to find low-hanging fruit and other potential
issues provides a useful jumpstart in determining whether deeper testing is needed
32. How Security Innovation Can Help
• Standards – Set goals and make it easy
• Align development activities with policies and compliance mandates
• Secure coding standards
• Education – Enable me to make the right decisions
• 120+ courses cover all major technologies, platforms and roles
• Mobile and Web Application Security CTF hackathons make it
fun to “learn by doing”
• Assessment – Show me the Gaps!
• Security design and code reviews
• Software penetration tests
• Secure SDLC Gap Analysis
33. Whitepaper
Smart Software Security Testing
• Findings from study comparing
manual and automated techniques
• Available end of March
• getsecure@securityinnovation.com
to request a copy