Software Security and the Software Development Lifecycle
Stan Wisseman, [email_address]
Booz Allen Hamilton, 8251 Greensboro Drive, McLean VA 22102
Software security: Why care?
Software is ubiquitous.
We rely on software to handle the sensitive and high-value data on which our livelihoods, privacy, and very lives depend.
Many critical business functions in government and industry depend completely on software.
Software—even high-consequence software—is increasingly exposed to the Internet.
Increased exposure makes software (and the data it handles) visible to people who never even knew it existed before.
Not all of those people are well-intentioned (to say the least!).
Security as a property of software
Secure software is software that can’t be intentionally forced to perform any unintended function.
Secure software continues to operate correctly even under attack.
Secure software can recognize attack patterns and avoid or withstand recognized attacks.
At the whole-system level, after an attack, secure software recovers rapidly and sustains only minimal damage.
Exploitable defects in software lead to vulnerabilities
Inherent deficiencies in the software’s processing model (e.g., Web, SOA, Email) and the model’s associated protocols/technologies
Example: Trust establishment in web applications is only one-way (client authenticates server)
Shortcomings in the software’s security architecture
Example: Exclusive reliance on infrastructure components to filter/block dangerous input, malicious code, etc.
Defects in execution environment components (middleware, frameworks, operating system, etc.),
Example: Known vulnerabilities in WebLogic, J2EE, Windows XP, etc.
Exploitable defects cont’d
Defects in the design or implementation of software’s interfaces with environment- and application-level components
Example: Reliance on known-to-be-insecure API, RPC, or communications protocol implementations
Defects in the design or implementation of the software’s interfaces with its users (human or software process)
Example: Web application fails to establish user trustworthiness before accepting user input.
Defects in the design or implementation of the software’s processing of input
Example: C++ application does not do bounds checking on user-submitted input data before writing that data to a memory buffer.
So what do you do with these exploitable defects? Exploit them!
Session hijacking – A hacker will claim the identity of another user in the system
Command Injection (e.g., SQL Injection) – A hacker modifies input to make the database return other users’ data, drop tables, or shut down entirely
Cross Site Scripting (XSS) – A hacker will reflect malicious scripts off a web server to be executed in another user’s browser to steal their session, redirect them to a malicious site, steal sensitive user data, or deface the webpage
Buffer Overflows – A hacker will overflow a memory buffer or the stack, causing the system to crash or to load and execute malicious code, thereby taking over the machine
Denial of Service – A hacker renders individual users, or the entire system, unable to operate
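The command-injection attack above can be made concrete with a small sketch. This is an illustrative Python/SQLite example (the table, column names, and payload are hypothetical, not from the slides): the unsafe function concatenates user input into the SQL string, while the safe one uses a parameterized query so input stays data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def get_secret_unsafe(name: str):
    # VULNERABLE: user input is concatenated directly into the SQL string
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def get_secret_safe(name: str):
    # SAFE: the ? placeholder keeps the input as data, never as SQL
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

# Classic injection payload: an always-true predicate that dumps every row
payload = "x' OR '1'='1"
```

Running `get_secret_unsafe(payload)` returns every user’s secret, while `get_secret_safe(payload)` returns nothing, because the parameterized query looks for a user literally named `x' OR '1'='1`.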
Topology of an Application Attack
[Diagram: application traffic from the end-user interface traverses the Network, OS, and Application layers on the client side, then the same layers on the server side, reaching the custom application and its back-end database.]
Broken or illogical access control (RBAC over tiers)
Method overriding problems (subclass issues)
Signing too much code
The Challenge: Find Security Problems Before Deployment
Software Security SDLC Touchpoints (Source: Gary McGraw)
Requirements and use cases → abuse cases, security requirements
Design → external review, risk analysis
Test plans → risk-based security tests
Code → static analysis (tools)
Test results → risk analysis, penetration testing
Field feedback → security breaks
Security Throughout the Application Lifecycle
You may have built a perfectly functional car, but that doesn’t mean its gas tank won’t blow up.
System requirements usually include functional requirements
But omit security requirements!
Principles of the Requirements Phase
You can’t assume security will be addressed by the developers
To adequately identify and specify security requirements, perform a threat-based risk assessment to understand the threats the system may face when deployed. The development team must also recognize that those threats can change both while the system is under development and after it is deployed.
If it’s not a requirement, it doesn’t get implemented and doesn’t get tested
Reuse Common Requirements
Most IT systems have a common set of security requirements
Access control checks
Dozens of common security requirements have been collected and perfected by security professionals…use these to get your requirements right
Security Requirements should include negative requirements
Requirements tools should include misuse and abuse cases as well as use cases, to capture what the system isn’t supposed to do
Requirements Phase: Misuse and Abuse Cases
Use cases formalize normative behavior (and assume correct usage)
Describing non-normative behavior is a good idea
Prepare for abnormal behavior (attack)
Misuse or abuse cases do this
Uncover exceptional cases
Leverage the fact that designers know more about their system than potential attackers do
Document explicitly what the software will do in the face of illegitimate use
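One way to make an abuse case concrete is to encode it as an executable negative test. The sketch below assumes a hypothetical profile-rendering function and the abuse case “an attacker stores a script tag in their display name”; the requirement it drives is that output must be HTML-escaped so the payload comes back inert.

```python
import html

def render_profile(display_name: str) -> str:
    """Render a user-supplied display name into an HTML fragment.

    The abuse case "attacker stores <script> in their profile name"
    drives the requirement that all output MUST be HTML-escaped.
    """
    return f"<div class='profile'>{html.escape(display_name)}</div>"

# Abuse case input: must come back as harmless text, not executable markup
attack = "<script>steal(document.cookie)</script>"
rendered = render_profile(attack)
```

Because the misuse case is now a test, it is run on every build, so the negative requirement actually gets implemented and tested rather than forgotten.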
Principles of Secure Design
Based on the premise that correctness is NOT the same as security
Defense-in-depth: layering defenses to provide added protection. Defense in depth increases security by raising the cost of an attack by placing multiple barriers between an attacker and critical information resources.
Secure by design, secure by default, secure in deployment
Avoid High Risk Technologies
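The defense-in-depth principle can be sketched in code: a request is served only if several independent barriers each allow it, so any one layer can stop an attack even if another is bypassed or misconfigured. The layer names and rules below are illustrative assumptions, not a real architecture.

```python
# Defense-in-depth sketch: each function is an independent barrier.
# Layer names and policies are illustrative only.

def edge_filter(request: dict) -> bool:
    # Layer 1: infrastructure (WAF-style) filter on obviously bad input
    return "<script" not in request.get("body", "").lower()

def app_authorization(request: dict) -> bool:
    # Layer 2: application-level access control check
    return request.get("user_role") in {"admin", "editor"}

def db_row_limit(request: dict) -> bool:
    # Layer 3: data-tier safeguard against bulk exfiltration
    return request.get("rows_requested", 0) <= 1000

LAYERS = [edge_filter, app_authorization, db_row_limit]

def handle(request: dict) -> bool:
    """Serve the request only if EVERY layer independently allows it."""
    return all(layer(request) for layer in LAYERS)
```

Contrast this with the earlier defect example, “exclusive reliance on infrastructure components to filter dangerous input”: here, bypassing the edge filter still leaves two barriers standing.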
Principles of Secure Design (cont.)
Isolate and constrain less trustworthy functions
Implement least privilege
Security through obscurity is not a defense, though obscurity can legitimately be used to make reverse engineering more difficult
Using good software engineering practices doesn’t mean the software is secure
Security in the Design Phase
Have security expert involved when designing system
Design should be specific enough to identify all security mechanisms
Flow charts, sequence diagrams
Use cases, misuse case and abuse cases
Sometimes an independent security review of the design is appropriate
Very sensitive systems
Inexperienced development team
New technologies being used
Design your security mechanisms to be modular
Allows for centralized mechanism
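A modular, centralized security mechanism might look like the following sketch: handlers declare the privilege they need, and the check itself lives in one place, so auditing or fixing access control means changing a single module. The role and privilege names are hypothetical.

```python
import functools

# Centralized role-to-privilege policy; illustrative names only.
ROLE_PRIVILEGES = {
    "admin": {"read", "write", "delete"},
    "user": {"read"},
}

class AccessDenied(Exception):
    pass

def requires(privilege):
    """Decorator: the ONE place where access decisions are made."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(role, *args, **kwargs):
            if privilege not in ROLE_PRIVILEGES.get(role, set()):
                raise AccessDenied(f"{role!r} lacks {privilege!r}")
            return handler(role, *args, **kwargs)
        return wrapper
    return decorator

@requires("delete")
def delete_record(role, record_id):
    return f"deleted {record_id}"

@requires("read")
def view_record(role, record_id):
    return f"viewed {record_id}"
```

This also supports least privilege: each role carries only the privileges it needs, and a handler cannot silently acquire more.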
You cannot build secure applications unless you understand threats
Adding security features does not mean you have secure software
“We use SSL!”
Find issues before the code is created
Find different bugs than code review and testing
Implementation bugs vs. higher-level design issues
Approx 50% of issues come from threat models
Threat Modeling Process
Create a model of the app (DFD, UML, etc.)
Build a list of assets that require protection
Categorize threats to each attack target node
Spoofing, Tampering, Repudiation, Info Disclosure, Denial of Service, Elevation of Privilege
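The “categorize threats to each attack target node” step can be sketched as a lookup from model-element type to candidate STRIDE categories. The node names and the applicability mapping below are illustrative assumptions, not a complete threat-modeling methodology.

```python
# STRIDE categorization sketch for a crude DFD-style application model.
STRIDE = ("Spoofing", "Tampering", "Repudiation",
          "Information Disclosure", "Denial of Service",
          "Elevation of Privilege")

# Which categories are typically considered per DFD element type
# (an illustrative mapping, not authoritative).
APPLICABLE = {
    "external_entity": {"Spoofing", "Repudiation"},
    "process": set(STRIDE),  # processes attract all six categories
    "data_store": {"Tampering", "Repudiation",
                   "Information Disclosure", "Denial of Service"},
    "data_flow": {"Tampering", "Information Disclosure",
                  "Denial of Service"},
}

def threats_for(model: dict) -> dict:
    """Map each named node in the model to its candidate STRIDE threats."""
    return {name: sorted(APPLICABLE[kind]) for name, kind in model.items()}

app_model = {
    "browser": "external_entity",
    "web_app": "process",
    "user_db": "data_store",
    "app_traffic": "data_flow",
}
```

Enumerating `threats_for(app_model)` gives each attack target its starting list of threat categories, which the team then refines into concrete threats and mitigations.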