Test 1 Review

  1. CSCE 548 Secure Software Development -- Test 1 Review
  2. Reading
     - Test 1
       - McGraw, Software Security: Chapters 1-9
       - Software reliability: John C. Knight and Nancy G. Leveson, "An Experimental Evaluation of the Assumption of Independence in Multi-Version Programming," http://citeseer.ist.psu.edu/knight86experimental.html
       - B. Littlewood, P. Popov, and L. Strigini, "Modelling software design diversity - a review," ACM Computing Surveys, Vol. 33, No. 2, June 2001, pp. 177-208, http://portal.acm.org/citation.cfm?doid=384192.384195
  3. Test 1
     - Closed book
     - From the reading materials on the previous slide
     - 1 hour and 15 minutes
     - No multiple-choice questions
  4. Software Security
     - NOT security software!
     - Engineering software so that it continues to function correctly under malicious attack
       - Functional requirements
       - Non-functional requirements (e.g., security)
  5. Why Software?
     - Increased complexity of software products
     - Increased connectivity
     - Increased extensibility
     - Increased risk of security violations!
  6. Security Problems
     - Defects: implementation and design vulnerabilities
     - Bugs: implementation-level vulnerabilities (low level or mid level)
       - Found with static analysis tools
     - Flaws: subtle problems that are not as easy to detect
       - Manual analysis
       - Automated tools (for some flaws, but not at the design level)
     - Risk: probability x impact (illustrated in the sketch below)
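A quick illustration of the risk formula from this slide (risk = probability x impact); the findings and numbers below are hypothetical, chosen only to show how the ranking works:

    # Minimal sketch: rank hypothetical findings by risk = probability x impact.
    findings = [
        {"name": "SQL injection in login form", "probability": 0.6, "impact": 9},
        {"name": "Verbose error messages",      "probability": 0.9, "impact": 2},
        {"name": "Weak session timeout",        "probability": 0.3, "impact": 5},
    ]

    for f in findings:
        f["risk"] = f["probability"] * f["impact"]

    # Highest risk exposure first.
    for f in sorted(findings, key=lambda f: f["risk"], reverse=True):
        print(f'{f["name"]}: risk = {f["risk"]:.1f}')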
  7. Application vs. Software Security
     - "Application security" usually refers to security applied after the software is built
       - Adding more code does not make faulty software correct
       - Sandboxing
       - Network-centric approach
     - Application security testing: a "badness-ometer" (scale from "Deep Trouble" to "Who Knows")
  8. Three Pillars of Software Security
     - Risk management
     - Software security touchpoints
     - Knowledge
  9. Risk Management
  10. Risk Assessment
     - Risk arises from threats, vulnerabilities, and consequences
  11. Risk Management Framework (Business Context)
     - Understand the business context
     - Identify business and technical risks
     - Synthesize and rank risks
     - Define the risk mitigation strategy
     - Carry out fixes and validate
     - Measurement and reporting
  12. Assets-Threat Model (1)
     - Threats compromise assets
     - Threats have a probability of occurrence and a severity of effect
     - Assets have values
     - Assets are vulnerable to threats
  13. Risk Acceptance
     - Certification
       - How well the system meets the security requirements (technical)
     - Accreditation
       - Management's approval of an automated system (administrative)
  14. Building It Secure
     - 1960s: US Department of Defense (DoD) recognizes the risk of unsecured information systems
     - 1970s:
       - 1977: DoD Computer Security Initiative
       - US government and private concerns
       - National Bureau of Standards (NBS -- now NIST)
         - Responsible for standards for the acquisition and use of federal computing systems
         - Federal Information Processing Standards (FIPS PUBs)
  15. Software Security Touchpoints
  16. Application of Touchpoints (touchpoints applied to software artifacts across the lifecycle)
     - Requirements and use cases: 5. abuse cases, 6. security requirements
     - Architecture and design: 2. risk analysis, external review
     - Test plans: 4. risk-based security tests
     - Code: 1. code review (tools)
     - Tests and test results: 2. risk analysis, 3. penetration testing
     - Feedback from the field: 7. security operations
  17. When to Apply Security?
     - Economic consideration: earlier is better
     - Effectiveness of touchpoints depends on:
       - Economics
       - Which software artifacts are available
       - Which tools are available
       - Cultural changes
     - Bad: a reactive strategy; needed: secure development
     - See the slides by Kromholz: Assurance -- A Case for the V-Model, https://syst.eui.upm.es/conference/sv03/papers/V-Chart%20200309Kromholz08.ppt
  18. Best Practices
     - The earlier, the better
     - Change the "operational" view of security into a secure-software view
     - Best practices: expounded by experts and adopted by practitioners
  19. Who Should Care?
     - Developers
     - Architects
     - Other builders
     - Operations people
     - Do not start with security people; start with software people.
  20. Architectural Risk Analysis
  21. Design Flaws
     - Account for about 50% of security problems
     - Need: explicitly identifying risk
     - Quantifying impact: tie technology issues and concerns to the business
     - Continuous risk management
  22. Security Risk Analysis
     - Risk analysis: identifying and ranking risks
     - Risk management: performing risk analysis exercises, tracking risks, mitigating risks
     - Need: an understanding of business impact
  23. Security Risk Analysis
     - Learn about the target of analysis
     - Discuss security issues
     - Determine the probability of compromise
     - Perform impact analysis
     - Rank risks
     - Develop a mitigation strategy
     - Report findings
  24. Knowledge Requirements
     - Three basic steps:
       - Attack resistance analysis
         - Attack patterns and exploit graphs
       - Ambiguity analysis
         - Knowledge of design principles
       - Weakness analysis
         - Knowledge of security issues
     - Forest-level view: what does the software do?
       - Critical components and the interactions between them
       - Identify risks related to flaws
  25. Modern Risk Analysis
     - Address risk as early as possible, at the requirements level
     - Impact:
       - Legal and/or regulatory risk
       - Financial or commercial considerations
       - Contractual considerations
     - Requirements: "must-haves," "important-to-have," and "nice-but-unnecessary-to-have"
  26. Attack Resistance Analysis
     - Information about known attacks, attack patterns, and vulnerabilities -- known problems
       - Identify general flaws using the secure design literature and checklists
       - Map attack patterns based on abuse cases and attack patterns
       - Identify risks in the architecture using checklists
       - Understand and demonstrate the viability of known attacks
  27. Ambiguity Analysis
     - Discover new risks
     - Parallel activities by team members -> a unified understanding
       - Private lists of possible flaws
       - Describing together how the system works
     - Needs a team of experienced analysts
  28. Weakness Analysis
     - Understanding the impact of external software dependencies
       - Middleware
       - Outside libraries
       - Distributed code
       - Services
       - Physical environment
       - Etc.
  29. Use Cases and Misuse Cases
  30. SecureUML
     - Model-driven software development integrated with security
     - Advantages:
       - Security is integrated during software design, using a high level of abstraction
       - Modeling information can be used to detect design errors and verify correctness
     - Limitation: needs precise semantics of the modeling language for security assurance
  31. SecureUML
     - Defines a vocabulary for annotating UML-based models with access control information
     - Metamodel: the abstract syntax of the language
     - UML profile: notation to enhance the UML class model
     - Host language: another modeling language that uses SecureUML
     - SecureUML dialect: how SecureUML specifications are refined in the host language
       - E.g., syntactic elements of the modeling language are transformed into constructs of the target platform
  32. Misuse Cases
     - Software development: making software do something
       - Describe features and functions
       - Everything goes right
     - Also needed: security, performance, reliability
       - Service level agreement -- legally binding
     - How do we model non-normative behavior in use cases?
       - Think like a bad guy
  33. Penetration Testing
  34. Security Testing
     - Look for unexpected but intentional misuse of the system
     - Must test for all potential misuse types, using
       - Architectural risk analysis results
       - Abuse cases
     - Verify that
       - All intended security features work (white hat)
       - Intentional attacks cannot compromise the system (black hat)
  35. Penetration Testing
     - Testing for a negative -- what must not exist in the system
     - Difficult -- how do you prove "non-existence"?
     - If penetration testing does not find errors, then
       - We can conclude that under the given circumstances no security faults occurred
       - But we have little assurance that the application is immune to attacks
     - A feel-good exercise
  36. Penetration Testing Today
     - Often performed
     - Applied to finished products
     - Outside-in approach
     - A late-SDLC activity
     - Limitation: too little, too late
  37. Late-Lifecycle Testing
     - Limitations:
       - Design and coding errors are discovered too late
       - Higher cost than earlier, design-level detection
       - Options for remedying discovered flaws are constrained by both time and budget
     - Advantage: evaluates the system in its final operating environment
  38. Success of Penetration Testing
     - Depends on the skill, knowledge, and experience of the tester
     - Important: interpretation of the results
     - Disadvantages of penetration testing:
       - Often used as an excuse to declare victory and go home
       - Everyone looks good after negative testing results
  39. Risk-Based Security Testing
  40. Quality Assurance
     - External quality: correctness, reliability, usability, integrity
     - Interior (engineering) quality: efficiency, testability, documentation, structure
     - Future (adaptability) quality: flexibility, reusability, maintainability
  41. Correctness Testing
     - Black box:
       - Test data are derived from the specified functional requirements without regard to the final program structure
       - Also called data-driven, input/output-driven, or requirements-based testing
       - Functional testing
       - No implementation details of the code are considered
  42. Correctness Testing
     - White box:
       - The internal structure of the software under test is visible to the tester
       - Testing plans are based on the details of the software implementation
       - Test cases are derived from the program structure
       - Also called glass-box testing, logic-driven testing, or design-based testing (a small example contrasting the two approaches follows below)
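To make the black-box/white-box distinction concrete, here is a minimal sketch using Python's unittest module. The function under test and the test cases are hypothetical, not taken from the course material:

    import unittest

    def clamp(value, low, high):
        """Hypothetical function under test: restrict value to [low, high]."""
        if value < low:
            return low
        if value > high:
            return high
        return value

    class BlackBoxTests(unittest.TestCase):
        # Derived only from the requirement "the result stays within [low, high]",
        # with no knowledge of how clamp() is implemented.
        def test_result_within_bounds(self):
            for v in (-100, 0, 7, 100):
                self.assertTrue(0 <= clamp(v, 0, 10) <= 10)

    class WhiteBoxTests(unittest.TestCase):
        # Derived from the code structure: one test per branch.
        def test_below_low_branch(self):
            self.assertEqual(clamp(-5, 0, 10), 0)

        def test_above_high_branch(self):
            self.assertEqual(clamp(15, 0, 10), 10)

        def test_pass_through_branch(self):
            self.assertEqual(clamp(5, 0, 10), 5)

    if __name__ == "__main__":
        unittest.main()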
  43. Performance Testing
     - Goals: bottleneck identification, performance comparison and evaluation, etc.
     - Explicit or implicit requirements
     - "Performance bugs" -- design problems
     - Test: usage, throughput, stimulus-response time, queue lengths, etc. (see the timing sketch below)
     - Resources to be tested: network bandwidth requirements, CPU cycles, disk space, disk access operations, memory usage, etc.
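A minimal sketch of a stimulus-response time and throughput measurement; the operation being timed and the 10 ms requirement are hypothetical stand-ins:

    import time

    def handle_request(payload):
        """Hypothetical operation whose response time we want to bound."""
        return sorted(payload)

    # Apply repeated stimuli and record the response time of each one.
    latencies = []
    for _ in range(1000):
        start = time.perf_counter()
        handle_request(list(range(500, 0, -1)))
        latencies.append(time.perf_counter() - start)

    print(f"max response time: {max(latencies) * 1000:.2f} ms")
    print(f"throughput: {len(latencies) / sum(latencies):.0f} requests/s")
    assert max(latencies) < 0.01, "hypothetical requirement: respond within 10 ms"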
  44. Reliability Testing
     - Probability of failure-free operation of a system
     - Dependable software does not fail in unexpected or catastrophic ways
     - Difficult to test
     - (See the later lecture on software reliability)
  45. Security Testing
     - Test: finding flaws in software that can be exploited by attackers
     - Quality, reliability, and security are tightly coupled
     - Software behavior testing
       - Need: a risk-based approach using system architecture information and an attacker model
  46. Risk-Based Testing
     - Identify risks
     - Create tests to address the identified risks
     - Security testing vs. penetration testing differ in
       - Level of approach
       - Timing of testing
  47. Security Testing
     - Can be applied before the product is completed
     - Different levels of testing (e.g., component/unit level vs. system level)
     - Testing environment
     - Detailed
  48. Who Should Perform the Test?
     - Standard testing organizations
       - Functional testing
     - Software security professionals
       - Risk-based security testing
       - Important: expertise and experience
  49. How to Test?
     - White-box analysis
       - Understanding and analyzing source code and design
       - Very effective at finding programming errors
       - Can be supported by automated static analyzers
       - Disadvantage: a high rate of false positives
     - Black-box analysis
       - Analyze a running program
       - Probe the program with various inputs, including malicious input (see the fuzzing sketch below)
       - No need for any code -- the system can be tested remotely
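A minimal sketch of the black-box approach: probe a running component with random and deliberately malformed inputs and watch for unexpected crashes, without looking at the implementation. The target parser here is hypothetical:

    import random
    import string

    def parse_record(line):
        """Hypothetical target: parse 'name=value' pairs separated by ';'."""
        return dict(field.split("=", 1) for field in line.split(";"))

    random.seed(0)
    samples = ["", ";;;", "a" * 10000, "name", "=;=", "\x00\xff", "name=%s%n"]
    samples += ["".join(random.choice(string.printable) for _ in range(30))
                for _ in range(1000)]

    for sample in samples:
        try:
            parse_record(sample)
        except ValueError:
            pass  # cleanly rejected input is acceptable behaviour
        except Exception as exc:  # anything else hints at a robustness bug
            print(f"unexpected {type(exc).__name__} on input {sample!r}")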
  50. Security Operations
  51. Don't Stand So Close to Me
     - Best practices
       - A manageable number of simple activities
       - Should be applied throughout the software development process
     - Problem:
       - Software developers lack security domain knowledge -> limited to functional security
       - Information security professionals lack an understanding of software -> limited to reactive security techniques
  52. Deployment and Operations
     - Configuration and customization of the software application's deployment environment
     - Activities at the:
       - Network-component level
       - Operating-system level
       - Application level
  53. SANS: Secure Programming Skills Assessment
     - Aims to improve secure programming skills and knowledge
     - Allows employers to rate their programmers
     - Allows buyers of software and systems to measure the skills of vendors' developers
     - Allows programmers to identify gaps in their secure programming knowledge
     - Allows employers to evaluate job candidates and potential consultants
     - Provides an incentive for universities to include secure coding in their curricula
  54. Independence in Multiversion Programming
  55. Multi-Version Programming
     - N-version programming
     - Goal: increase fault tolerance
     - Separate, independent development of multiple versions of the software
     - The versions are executed in parallel
       - Identical input -> identical output?
       - Majority vote (see the sketch below)
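A minimal sketch of N-version execution with majority voting, assuming three independently developed implementations of the same specification; the three versions here are trivial stand-ins written for illustration:

    import math
    from collections import Counter

    # Stand-in "independently developed" versions of one specification:
    # return the square root of x, rounded to 6 decimal places.
    def version_a(x):
        return round(x ** 0.5, 6)

    def version_b(x):
        return round(math.sqrt(x), 6)

    def version_c(x):
        # Newton's method -- a deliberately different algorithm.
        guess = x or 1.0
        for _ in range(60):
            guess = 0.5 * (guess + x / guess)
        return round(guess, 6)

    def n_version_execute(x, versions=(version_a, version_b, version_c)):
        """Run every version on the identical input and vote on the output."""
        outputs = [v(x) for v in versions]
        answer, votes = Counter(outputs).most_common(1)[0]
        if votes <= len(versions) // 2:
            raise RuntimeError(f"no majority agreement: {outputs}")
        return answer

    print(n_version_execute(2.0))  # 1.414214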
  56. Separate Development
     - At which point of software development?
       - A common form of the system requirements document
       - Voting on intermediate data
     - Ramamoorthy et al.:
       - Independent specifications in a formal specification language
       - Mathematical techniques to compare specifications
     - Kelly and Avizienis:
       - Separate specifications written by the same person
       - 3 different specification languages
  57. Difficulties
     - How to isolate the versions
     - How to design voting algorithms
  58. Advantages of N-Versioning
     - Improved reliability
     - Assumption: the N different versions fail independently
     - Outcome: the probability of two or more versions failing on the same input is small
     - If the assumption is true, the reliability of the system can be higher than the reliability of the individual components (worked out below)
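Under the independence assumption the gain can be quantified: a 2-out-of-3 majority system fails only when at least two versions fail on the same input. A quick check with a hypothetical per-version failure probability of 1%:

    # Hypothetical probability that a single version fails on a given input.
    p = 0.01

    # With independent failures, a 2-out-of-3 voter fails when 2 or 3 versions fail.
    p_system = 3 * p**2 * (1 - p) + p**3
    print(f"single version: {p:.6f}   3-version majority: {p_system:.6f}")
    # -> 0.010000 vs 0.000298: roughly a 30-fold improvement, but only if the
    #    failures really are independent -- the assumption the cited paper tests.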
  59. Is the assumption TRUE?
  60. False?
     - People tend to make the same mistakes
     - Common design faults
     - Common failure mode analysis
       - Mechanical systems
       - Software systems
  61. How to Achieve Reliability?
     - Need independence
       - Even small probabilities of coincident errors cause a substantial reduction in reliability
       - Otherwise, reliability is overestimated
     - Crucial systems
       - Aircraft
       - Nuclear reactors
       - Railways
  62. Testing of Critical Software Systems
     - Dual programming:
       - Produce two versions of the software
       - Execute them on a large number of test cases
       - The output is assumed to be correct if both versions agree
       - No manual or independent evaluation of the correct output -- it is expensive to do so
       - Assumption: it is unlikely that two versions contain identical faults across a large number of test cases (see the sketch below)
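A minimal sketch of this dual-programming (back-to-back) style of testing, with two stand-in implementations of a hypothetical specification; any disagreement flags an input worth investigating:

    import random

    # Two independently written stand-ins for the same specification:
    # return the largest element of a non-empty list.
    def version_one(values):
        return max(values)

    def version_two(values):
        largest = values[0]
        for v in values[1:]:
            if v > largest:
                largest = v
        return largest

    # Execute both versions on a large number of generated test cases; the
    # output is assumed correct whenever the two versions agree.
    random.seed(1)
    for case in range(10000):
        data = [random.randint(-1000, 1000) for _ in range(random.randint(1, 50))]
        if version_one(data) != version_two(data):
            print(f"disagreement on case {case}: {data}")
            break
    else:
        print("all cases agree (note: agreement is not an independent proof of correctness)")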
  63. Voting
     - Individual software versions may have low reliability
     - Run multiple versions and vote on the "correct" answer
     - Additional cost: the voting process
  64. Common Assumption
     - Low probability of common-mode failures (identical, incorrect output generated from the same input)
  65. Independence
     - Assumed, not tested
       - Two versions were assumed to be correct if their outputs agreed on the test cases
     - Tested for common errors but not for independence
       - Kelly and Avizienis: 21 related faults and 1 common fault -- nuclear reactor project
       - Taylor: common faults in practical European systems
     - Need evaluation/testing of independence
