Presentation Transcript

  • Software Security and the Software Development Lifecycle. Stan Wisseman [email_address], Booz Allen Hamilton, 8251 Greensboro Drive, McLean, VA 22102
  • Software security: Why care?
    • Software is ubiquitous.
    • We rely on software to handle the sensitive and high-value data on which our livelihoods, privacy, and very lives depend.
    • Many critical business functions in government and industry depend completely on software.
    • Software—even high-consequence software—is increasingly exposed to the Internet.
      • Increased exposure makes software (and the data it handles) visible to people who never even knew it existed before.
      • Not all of those people are well-intentioned (to say the least!).
  • Security as a property of software
    • Secure software is software that can’t be intentionally forced to perform any unintended function.
    • Secure software continues to operate correctly even under attack.
    • Secure software can recognize attack patterns and avoid or withstand recognized attacks.
    • At the whole-system level, after an attack, secure software recovers rapidly and sustains only minimal damage.
  • Exploitable defects in software lead to vulnerabilities
    • Inherent deficiencies in the software’s processing model (e.g., Web, SOA, Email) and the model’s associated protocols/technologies
      • Example: Trust establishment in web applications is only one-way (client authenticates server)
    • Shortcomings in the software’s security architecture
      • Example: Exclusive reliance on infrastructure components to filter/block dangerous input, malicious code, etc.
    • Defects in execution environment components (middleware, frameworks, operating system, etc.),
      • Example: Known vulnerabilities in WebLogic, J2EE, Windows XP, etc.
  • Exploitable defects cont’d
    • Defects in the design or implementation of software’s interfaces with environment- and application-level components
      • Example: Reliance on known-to-be-insecure API, RPC, or communications protocol implementations
    • Defects in the design or implementation of the software’s interfaces with its users (human or software process)
      • Example: Web application fails to establish user trustworthiness before accepting user input.
    • Defects in the design or implementation of the software’s processing of input
      • Example: C++ application does not do bounds checking on user-submitted input data before writing that data to a memory buffer.
  • So what do you do with these exploitable defects? Exploit them!
    • Session hijacking – A hacker will claim the identity of another user in the system
    • Command Injection (e.g., SQL Injection) – A hacker will modify input, causing a database to return other users’ data, drop tables, or shut down (see the sketch after this list)
    • Cross Site Scripting (XSS) – A hacker will reflect malicious scripts off a web server to be executed in another user’s browser to steal their session, redirect them to a malicious site, steal sensitive user data, or deface the webpage
    • Buffer Overflows – A hacker will overflow a memory buffer or the stack, causing the system to crash or to load and execute malicious code, thereby taking over the machine
    • Denial of Service – A hacker will prevent individual users, or the entire system, from operating
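
To make the command injection item above concrete, here is a minimal sketch in Java/JDBC. The users table, its columns, and the method names are assumptions chosen for illustration; the point is the contrast between concatenating untrusted input into a query and binding it through a PreparedStatement.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class AccountLookup {

        // VULNERABLE: untrusted input is spliced into the SQL text, so a username such as
        // "alice' OR '1'='1" returns every row, and "x'; DROP TABLE users; --" is worse.
        public static ResultSet findEmailUnsafe(Connection conn, String username) throws SQLException {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery(
                "SELECT email FROM users WHERE username = '" + username + "'");
        }

        // SAFER: the query structure is fixed; the input is bound as data, never parsed as SQL.
        public static ResultSet findEmailSafe(Connection conn, String username) throws SQLException {
            PreparedStatement stmt =
                conn.prepareStatement("SELECT email FROM users WHERE username = ?");
            stmt.setString(1, username);
            return stmt.executeQuery();
        }
    }
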
  • Topology of an Application Attack (diagram: application traffic flows from the end-user application layer through the OS and network layers to the server-side network, OS, and application layers, the custom application, and the back-end database)
  • Software Security Vulnerabilities Reported (CERT/CC)
    • Total vulnerabilities reported (1995–2Q 2005): 19,600
    • 1995–1999: 171 (1995), 345 (1996), 311 (1997), 262 (1998), 417 (1999)
    • 2000–2005: 1,090 (2000), 2,437 (2001), 4,129 (2002), 3,784 (2003), 3,780 (2004), 2,874 (1Q–2Q 2005)
  • Cost of Software Security Vulnerabilities
    • NIST estimates costs of $60 Billion a year due to software vulnerabilities
    • Security fixes for implementation flaws typically cost $2,000–$10,000 when made during the testing phase; they may cost 5–10 times more when made after the application has shipped.
    • The cost of fixing architectural flaws is significantly higher than fixing implementation flaws.
    • Gartner Group says system downtime caused by software vulnerabilities will triple from 5% to 15% by 2008 for firms that don't take proactive security steps
  • What to Do About These Serious Vulnerabilities?
    • Integrate Software Security into the
    • Software Development Lifecycle
  • When to Address Software Security?
    • As early as possible AND throughout the development lifecycle
  • The Software Project Triangle
    • Software assurance affects every side of the triangle, and any changes you make to any side of the triangle are likely to affect software assurance
  • Challenges to Developing Secure Software
    • SDLC often does not have security as a primary objective
    • SDLC often is not robust enough to handle complex development needs. For example, how does your SDLC handle:
      • Inherent vulnerabilities in the technologies you’re using
      • Use of code from untrusted (and open) sources
      • Increases in features and complexity that make security harder
      • Time-to-market pressure that pushes security out
      • Vendors that don’t warrant the trustworthiness of their software
      • Software developers that aren’t trained in secure development
      • Component assemblies, COTS integration, etc.
      • Time, money constraints
      • COTS upgrades and patches
  • Security Enhancing the Software Development Lifecycle
  • Software Security Problems are Complicated
    • IMPLEMENTATION BUGS
    • Buffer overflow
      • String format
      • One-stage attacks
    • Race conditions
      • TOCTOU (time of check to time of use)
    • Unsafe environment variables
    • Unsafe system calls
      • System()
    • Untrusted input problems
    • ARCHITECTURAL FLAWS
    • Misuse of cryptography
    • Compartmentalization problems in design
    • Privileged block protection failure (DoPrivilege())
    • Catastrophic security failure (fragility)
    • Type safety confusion errors
    • Broken or illogical access control (RBAC over tiers)
    • Method over-riding problems (subclass issues)
    • Signing too much code
  • The Challenge: Find Security Problems Before Deployment
  • Software Security SDLC Touchpoints (source: Gary McGraw). Diagram mapping lifecycle artifacts (requirements and use cases, design, test plans, code, test results, field feedback) to security touchpoints: abuse cases, security requirements, risk analysis, external review, risk-based security tests, static analysis (tools), penetration testing, and security breaks.
  • Security Throughout the Application Lifecycle
  • Requirements Phase
  • Requirements Phase
    • You may have built a perfectly functional car, but that doesn’t mean its gas tank won’t blow up.
    • System requirements usually include functional requirements
    • But omit security requirements!
  • Principles of the Requirements Phase
    • You can’t assume security will be addressed by the developers
    • To adequately identify and specify security requirements, perform a threat-based risk assessment to understand the threats the system may face when deployed. The development team needs to understand that those threats may change both while the system is under development and after it is deployed
    • If it’s not a requirement, it doesn’t get implemented and doesn’t get tested
  • Security Requirements
    • Reuse Common Requirements
      • Most IT systems have a common set of security requirements
      • Some examples:
        • Username/password
        • Access control checks
        • Input validation
        • Audit
      • Dozens of common security requirements have been collected and perfected by security professionals…use these to get your requirements right
    • Security Requirements should include negative requirements
    • Requirements tools should include misuse and abuse cases, as well as use cases, to capture what the system isn’t supposed to do
  • Requirements Phase: Misuse and Abuse Cases
    • Use cases formalize normative behavior (and assume correct usage)
    • Describing non-normative behavior is a good idea
      • Prepare for abnormal behavior (attack)
      • Misuse or abuse cases do this
      • Uncover exceptional cases
    • Leverage the fact that designers know more about their system than potential attackers do
    • Document explicitly what the software will do in the face of illegitimate use
  • Design Phase
  • Principles of Secure Design
    • Based on premise that correctness is NOT the same as security
    • Defense-in-depth: layering defenses to provide added protection. Defense in depth increases security by raising the cost of an attack by placing multiple barriers between an attacker and critical information resources.
    • Secure by design, secure by default, secure in deployment
    • Avoid High Risk Technologies
  • Principles of Secure Design (cont.)
    • Isolate and constrain less trustworthy functions
    • Implement least privilege
    • Security through obscurity is wrong, except as a way to make reverse engineering more difficult
    • Using good software engineering practices doesn’t mean the software is secure
  • Security in the Design Phase
    • Have security expert involved when designing system
    • Design should be specific enough to identify all security mechanisms
      • Flow charts, sequence diagrams
      • Use cases, misuse case and abuse cases
      • Threat models
    • Sometimes an independent security review of the design is appropriate
      • Very sensitive systems
      • Inexperienced development team
      • New technologies being used
    • Design your security mechanisms to be modular
      • Allows reuse!
      • Allows for a centralized mechanism (see the filter sketch after this list)
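
One way to realize the centralized, reusable mechanism mentioned above in a J2EE application is a servlet filter that validates request parameters before any servlet or page sees them. This is a sketch under assumptions, not a design from the presentation: the class name and whitelist pattern are hypothetical, and real applications need per-field rules.

    import java.io.IOException;
    import java.util.Enumeration;
    import java.util.regex.Pattern;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical centralized validation filter: mapped to /* in web.xml so every
    // request passes through it, keeping the validation rules in one reusable place.
    public class InputValidationFilter implements Filter {

        // Whitelist: letters, digits, and a few safe punctuation characters (an assumption).
        private static final Pattern SAFE = Pattern.compile("[A-Za-z0-9 ._@-]*");

        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            Enumeration<?> names = req.getParameterNames();
            while (names.hasMoreElements()) {
                String value = req.getParameter((String) names.nextElement());
                if (value != null && !SAFE.matcher(value).matches()) {
                    // Reject the whole request rather than trying to repair bad input.
                    ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_BAD_REQUEST);
                    return;
                }
            }
            chain.doFilter(req, resp);
        }

        public void destroy() { }
    }
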
  • Threat Analysis
    • You cannot build secure applications unless you understand threats
      • Adding security features does not mean you have secure software
      • “We use SSL!”
    • Find issues before the code is created
    • Find different bugs than code review and testing
      • Implementation bugs vs. higher-level design issues
    • Approximately 50% of issues are found through threat modeling
  • Threat Modeling Process
    • Create model of app (DFD, UML etc)
      • Build a list of assets that require protection
    • Categorize threats to each attack target node
      • Spoofing, Tampering, Repudiation, Info Disclosure, Denial of Service, Elevation of Privilege
    • Build threat tree for each threat
      • Derived from hardware fault trees
    • Rank threats by risk
      • Risk = Probability × Damage potential
      • DREAD: Damage potential, Reproducibility, Exploitability, Affected users, Discoverability (a scoring sketch follows this list)
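
As a small illustration of the ranking step, the sketch below assumes the common DREAD convention of scoring each category from 1 to 10 and averaging; the threat names and scores are invented for the example and are not from the presentation.

    public class DreadScore {

        // Average of the five DREAD category scores (each assumed to be rated 1-10).
        static double risk(int damage, int reproducibility, int exploitability,
                           int affectedUsers, int discoverability) {
            return (damage + reproducibility + exploitability
                    + affectedUsers + discoverability) / 5.0;
        }

        public static void main(String[] args) {
            // Hypothetical threats from a threat model of a web application.
            System.out.printf("SQL injection in login form: %.1f%n", risk(8, 9, 7, 9, 8));
            System.out.printf("Error page leaks stack traces: %.1f%n", risk(3, 10, 6, 4, 9));
            // Higher score = higher priority for mitigation.
        }
    }
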
  • Design Phase: Architectural Risk Analysis
    • The system designers should not perform the assessment
    • Build a one page white board design model
    • Use hypothesis testing to categorize risks
      • Threat modeling/Attack patterns
    • Rank risks
    • Tie to business context
    • Suggest fixes
    • Multiple iterations
  • Risk Analysis Must be External to the Development Team
    • Having outside eyes look at your system is essential
      • Designers and developers naturally have blinders on
      • External just means outside of the project
      • This is knowledge intensive
    • Outside eyes make it easier to “assume nothing”
      • Find assumptions, make them go away
    • Red teaming is a weak form of external review
      • Penetration testing is too often driven by an outside-in perspective
      • External review must include architecture analysis
    • Security expertise and experience really helps
  • Risk Assessment Methodologies
    • These methods attempt to identify and quantify risks, then discuss risk mitigation in the context of a wider organization
    • A common theme among these approaches is tying technical risks to business impact
    • Commercial
    • STRIDE from Microsoft
    • ACSM/SAR from Sun
    • Standards-Based
    • ASSET from NIST
    • OCTAVE from SEI
  • Implementation Phase
  • Secure Implementation Concepts
    • Developer training
      • Essential that developers learn how to implement code securely
      • Subtleties and pitfalls that can only be addressed with security training
    • Reuse of previously certified code that performs well for common capabilities, especially
      • Authentication
      • Input Validation
      • Logging
      • Much of “custom” software uses previously developed code
    • Coding standards, style guides
    • Peer review or peer development
  • Validating Inputs
    • Cleanse data
    • Perform bounds checking (a validation sketch follows this list)
    • Check
      • Configuration files
      • Command-line parameters
      • URLs
      • Web content
      • Cookies
      • Environment variables
      • Filename references
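
A minimal sketch of the cleansing and bounds-checking points above, using whitelist validation in Java; the field names, length limits, and patterns are assumptions chosen for illustration.

    import java.util.regex.Pattern;

    public final class InputValidator {

        // Whitelist patterns: describe what IS allowed rather than trying to blacklist attacks.
        private static final Pattern USERNAME = Pattern.compile("[A-Za-z0-9_]{1,32}");
        private static final Pattern FILENAME = Pattern.compile("[A-Za-z0-9._-]{1,64}");

        static boolean isValidUsername(String s) {
            return s != null && USERNAME.matcher(s).matches();
        }

        // Bounds checking plus whitelist: rejects path traversal such as "../../etc/passwd".
        static boolean isValidFilename(String s) {
            return s != null && s.length() <= 64 && FILENAME.matcher(s).matches();
        }

        // Range check for numeric input (e.g., a quantity field).
        static boolean isValidQuantity(String s) {
            try {
                int n = Integer.parseInt(s);
                return n >= 1 && n <= 1000;
            } catch (NumberFormatException e) {
                return false;
            }
        }
    }
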
  • Secure Coding Guides
    • Provide guidance on code-level security
      • Thread safety
      • Attack patterns
      • Technology-specific pitfalls
    • Booz Allen has developed internal secure coding guides
      • Java (J2EE general)
      • C/C++
      • Software Security Testing
  • Code Review
    • Code review is a necessary evil
    • Better coding practices make the job easier
    • Automated tools help catch common implementation errors
    • Implementation errors do matter
      • Buffer overflows can be uncovered with static analysis
      • C/C++ rules
      • Java rules
      • .NET rules
    • Tracing back from vulnerable location to input is critical
      • Software exploits
      • Attacking code
  • Code Review (cont’d)
    • Pros of Code Review
      • Demonstrate that all appropriate security mechanisms exist
        • (e.g. LOGGING cannot be verified by pen testing)
      • Can be performed throughout the development
      • Complete traceability to show that security mechanisms are implemented correctly
      • Able to find risks that are not evident in live application
        • (explicit comments, race conditions, missing audit, class-level security, etc.)
    • Cons of Code Review
      • Labor intensive
        • (Static analysis tools reduce labor, expand completeness)
      • Requires expert
      • Only using automated tools isn’t sufficient
  • Testing Phase
  • Testing Phase
    • The objective of software security testing is to determine that the software:
      • Contains no defects that can be exploited to force the software to operate incorrectly or to fail
      • Does not perform any unexpected functions
      • Has source code free of dangerous constructs (e.g., hard-coded passwords)
    • The methodology for achieving these objectives will include:
      • Subjecting the software to the types of intentional faults associated with attack patterns
        • Question to be answered: Is the software’s exception handling adequate?
      • Subjecting the software to the types of inputs associated with attack patterns (see the test sketch after this list)
        • Question to be answered: Is the software’s error handling adequate?
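
As a sketch of testing with attack-pattern inputs, the program below feeds malformed and malicious strings (SQL injection, XSS, path traversal, format strings, oversized data) to a validation routine and flags any acceptance or escaping exception. It reuses the hypothetical InputValidator from the implementation section and is an illustration, not the presenter’s test plan.

    public class AttackPatternInputTest {

        public static void main(String[] args) {
            // Build an oversized input to probe length handling.
            StringBuilder oversized = new StringBuilder();
            for (int i = 0; i < 100000; i++) {
                oversized.append('A');
            }

            // Inputs drawn from common attack patterns.
            String[] hostileInputs = {
                "' OR '1'='1",
                "<script>alert('xss')</script>",
                "../../etc/passwd",
                "%s%s%s%s%n",
                oversized.toString()
            };

            for (String input : hostileInputs) {
                try {
                    // Expected: hostile input is rejected; an acceptance is a test failure.
                    boolean accepted = InputValidator.isValidUsername(input);
                    System.out.println((accepted ? "FAIL " : "ok   ") + summarize(input));
                } catch (RuntimeException e) {
                    // An escaping exception suggests inadequate error handling.
                    System.out.println("FAIL unhandled exception for " + summarize(input) + ": " + e);
                }
            }
        }

        private static String summarize(String s) {
            return s.length() > 40 ? s.substring(0, 40) + "... (" + s.length() + " chars)" : s;
        }
    }
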
  • Software Security Testing is Different than ST&E
    • ST&E testing is functional in nature
      • Goal of ST&E is to verify correct behavior, not to reveal defects or cause unintended behavior
      • Only 3 NIST 800-53 controls refer to software security
    • ST&E testing not targeted towards vulnerabilities
    • Software Security testing is purely technical (no managerial or operational testing)
    • Software Security testing seeks out defects and vulnerabilities and attempts to exploit or reveal them.
      • Defects and vulnerabilities are in context of software platform or architecture
    • Software Security testing goes into detail, where ST&E leaves off.
  • How Software Security Testing is Different
    • Software Security Testing is focused, in-depth security testing of the software
      • Minimizes the gap between test coverage and potential exploits
    • (Chart: level of detail vs. breadth of coverage. ST&E, as recommended by NIST guidance, provides broad coverage; software security testing provides focused, detailed coverage closer to the potential sophistication of attackers.)
  • Testing strategy
    • Think like an attacker and a defender.
      • Seek out, probe, and explore unused functions and features.
      • Submit unexpected input.
      • Enter obscure command line options.
      • Inspect call stacks and interfaces.
      • Observe behavior when process flow is interrupted.
    • Verify all properties, attributes, and behaviors that are expected to be there.
    • Verify use of secure standards and technologies and secure implementations of same.
    • Be imaginative, creative, and persistent.
    • Include independent testing by someone who isn’t familiar with the software.
  • What parts of software to test
    • The parts that implement:
    • The interfaces/interactions between the software system’s components (modules, processes)
    • The interfaces/interactions between the software system and its execution environment
    • The interfaces/interactions between the software system and its users
    • The software system’s trusted and high-consequence functions, such as the software’s exception handling logic and input validation routines
  • Lifecycle timing of security reviews and tests
  • Software security testing tools: categories of testing tools
  • Software Security Testing - Conclusions
    • Code review combined with application-level vulnerability scanning or penetration testing is the most effective approach
      • Analysts have the ability to more clearly see and understand the interfaces of the application
      • Analysts have the ability to verify or dismiss suspected vulnerabilities
      • Security testers can “look under the hood” to fully understand application behavior
    • May lead to recommendations that change the requirements specification
    • Not guaranteed to find all problems: addressing security throughout the SDLC is more likely to reduce vulnerabilities
    • The changing threat environment can still induce vulnerabilities. Some level of testing should be periodically performed
  • Deployment Phase
  • Deployment Phase
    • Pre-deployment activities depend on the application but may include:
      • Remove developer hooks
      • Remove debugging code
      • Remove sensitive information in comments, e.g. “FIXME”
      • Harden deployment OS, web server, app server, db server, etc.
      • Remove default and test accounts
      • Change all security credentials for the deployed system (e.g., database passwords) to reduce the number of insiders with direct access to the operational system (see the sketch after this list)
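
For the credentials item above, a minimal sketch of reading database credentials from the deployment environment instead of hard-coding them in source; the environment variable names are assumptions. Externalizing credentials lets them be rotated at deployment time without a rebuild.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class DataSourceConfig {

        // Credentials come from the deployment environment, not from source code or comments,
        // so they can be changed for production without touching the code.
        public static Connection openConnection() throws SQLException {
            String url = System.getenv("APP_DB_URL");        // hypothetical, e.g. a JDBC URL
            String user = System.getenv("APP_DB_USER");
            String password = System.getenv("APP_DB_PASSWORD");
            if (url == null || user == null || password == null) {
                throw new IllegalStateException("Database credentials are not configured");
            }
            return DriverManager.getConnection(url, user, password);
        }
    }
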
  • Post-Deployment Validation
    • Security of deployed software should be investigated regularly
    • Requires observing and analyzing its usage in the field
    • Requires automated support
  • Maintenance Phase
  • Maintenance Phase Security Activities
    • Monitor for and install patches for COTS components in your system
    • Individually consider security implications for each bug fix
    • Security analysis review for every major release
    • Changes to the system should not be ad hoc; they should be reflected in the requirements specification, design specification, etc.
    • Monitoring, intrusion detection at the application level
  • Relevant Process Models
    • Capability Models are designed to improve a process
      • CMMI
      • SSE-CMM
    • Process capability models improve a process by:
      • Analyzing, quantifying, and enhancing the efficiency and quality of generic processes
      • Improving efficiency and adaptability
      • Providing a framework for applying a process multiple times with consistency
  • A Final Caution
    • It is inevitable that unknown security vulnerabilities will be present in deployed software
      • Software, users, environments too complex to fully comprehend
      • Environment and usage are subject to change
  • References
    • Application security public cases:
      • http://informationweek.com/story/showArticle.jhtml?articleID=164900859
    • Build Security in Portal
      • https://buildsecurityin.us-cert.gov/portal/
    • Open Web Application Security Project (OWASP)
      • http://www.owasp.org/
    • Computer Security: Art and Science by M. Bishop
    • Secure Coding: Principles and Practices by M. G. Graff and K. R. van Wyk
    • Exploiting Software: How to Break Code by G. Hoglund and G. McGraw
    • Writing Secure Code by M. Howard and D. LeBlanc
    • Attack Modeling for Information Security and Survivability by A.P. Moore, R.J. Ellison, and R.C. Linger
    • Building Secure Software by J. Viega and G. McGraw