Software Security Initiatives
 

OWASP e-gov presentation in Rome November 5th 2009

Speaker Notes

  • The first step is to build the case by pointing to critical factors, such as the costs of remediating vulnerabilities, compliance with standards, and the fact that the majority of sensitive data losses are caused by insecure software, SQL injection being a prime example, not to mention the Gartner experts and other studies (NIST).
  • From the compliance standpoint, one can refer to the PCI-DSS requirements where applicable, for example, or map security standards/policies to requirements for application security controls.
  • There is a lot of data on vulnerabilities but little data correlating vulnerabilities to incidents. One can also refer to data correlating the vulnerabilities exploited by web attacks (around 19% of all web attacks) as the incidence of vulnerabilities directly caused by coding errors.
  • According to the experts there is agreement that the majority of incidents occur at the application level (70-75%) and on the correlation with vulnerabilities (70-90%); there is also agreement on a 75% cost reduction if vulnerabilities are reduced by 50%, and on 83% (for all vulnerabilities) if software vulnerabilities are remediated during the coding phase.
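The 83% saving cited in this note follows directly from the per-bug fix costs quoted later in the deck (NIST: $30,000 to fix a bug in the field vs. $5,000 during coding); a quick arithmetic check:

```python
# Per-bug saving implied by the NIST cost figures quoted in the slides:
# $30,000 to fix a bug in the field vs. $5,000 during coding.
COST_FIELD = 30_000
COST_CODING = 5_000

saving = 1 - COST_CODING / COST_FIELD
print(f"{saving:.0%}")  # prints "83%"
```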
  • This slide illustrates a wrong, yet very common, approach to application security: the lack of a program for the public disclosure of vulnerabilities, which leads to chasing the wrong causes. There are bad guys and good guys, and some are just looking for work; be careful with responsible disclosure, as going public with a zero-day can backfire on you. On the other hand, ignoring the vulnerabilities you are supposed to fix is not a very responsible approach either.
  • In the early stages of software security, the question many executives ask is: from a software security standpoint, all else being equal, how do we stand compared to similar organizations, well or badly? If the question does not arise spontaneously, it can become the basis of the business case for whoever has to sell or implement software security, by highlighting the REPORT CARD. The standard approach of validating software assurance with a maturity model provides exactly that: a score from 1 to 3 for SAMM, and from 1 to 5, to identify the maturity of the processes and the needs. The other important factor is articulating the case with the data available, good or bad as it may be.
  • Metrics in particular support the analysis for the software security case, especially metrics related to defect management and the costs of managing defects.
  • Once the process is repeatable it can also be measured (CMM level 2, repeatable but not yet proactive). Many organizations (including mine) have found a data-driven approach, i.e. one guided by metrics, to be a winning one; it is essential to identify root causes and trends for vulnerabilities both as a global metric (per business) and as a metric relative to the applications.
  • The typical software security plan: you need a road map. Start by assessing the maturity of the people, processes and technology (PPT) in place, then define the processes to build security into the SDLC and the security activities. Then, since you cannot manage what you do not measure, risk management especially needs vulnerability metrics, for example. Finally, you can make the case with your data and set objectives moving forward.
  • It is important to refer to standard models for maturity assessment. An important factor is assessing capabilities and giving management visibility into software engineering activities at every level. The CMM model shown here is a model used in security and suited to systems security, such as SSE-CMM. The levels can be mapped according to the type of organization. Answering specific questions about your organization's software engineering and risk management processes is one way to perform the assessment; for processes in general this can be done with formal methods for assessing how your processes are currently managed and how they can be implemented and improved. This means tying the attainment of certain goals to maturity levels: the Capability Maturity Model (CMM) provides a useful framework to assess the level of maturity in processes and what activities need to be performed to reach a higher maturity level in the organization's processes, people and tools. By applying software security through the CMM you can tie the adoption of software security activities to a certain level of software security assurance. For example, for most organizations moving from a traditional network-centric and application-centric approach to a software security approach means adopting a software security framework. This adoption cannot happen overnight; to restate a famous phrase: as Rome was not built in a day, neither will secure software be. The main reason is that you need time to mature your processes, training and tools. From the tools perspective, for example, it means starting from a proof of concept and then seeking adoption throughout your organization; from the process perspective it means standardizing the software development and risk management processes; from the people perspective it means providing training for the organization as a whole instead of ad-hoc training for some departments as required.
  • An example of reaching CMM security levels depending on the software security activities and the time frames, from low maturity to medium maturity in the short term and from low maturity to high maturity in the long term, is included herein:
    • Low Maturity (CMM 0-1): no formal security requirements; issues identified through penetration testing and security incidents; issues identified and corrected late in the lifecycle; a penetrate-and-patch, reactive approach.
    • Medium Maturity (CMM 2-3): applications are reviewed for security design flaws with threat modeling; source code is reviewed for security with source code analysis; penetration tests validate issues dealt with earlier in the SDLC.
    • Advanced Maturity (CMM 4-5): threat analysis in each phase of the SDLC; risk metrics and measurements are used for improving security engineering and risk management processes.
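One way to picture this mapping is a lookup from observed activities to maturity bands; the activity names below are invented shorthand for the bullets above, not official CMM or BSIMM terms:

```python
# Hypothetical sketch: map observed software security activities to the
# CMM maturity bands described above (activity names are illustrative).
MATURITY_BANDS = {
    "low (CMM 0-1)": {"penetration_testing", "incident_response"},
    "medium (CMM 2-3)": {"threat_modeling", "source_code_analysis",
                         "pen_test_validation"},
    "advanced (CMM 4-5)": {"per_phase_threat_analysis", "risk_metrics"},
}

def assess_maturity(observed: set) -> str:
    """Return the highest band whose listed activities are all observed."""
    result = "low (CMM 0-1)"
    for band in ["low (CMM 0-1)", "medium (CMM 2-3)", "advanced (CMM 4-5)"]:
        if MATURITY_BANDS[band] <= observed:
            result = band
    return result

print(assess_maturity({"penetration_testing", "incident_response",
                       "threat_modeling", "source_code_analysis",
                       "pen_test_validation"}))  # prints "medium (CMM 2-3)"
```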
  • Moving from a tactical to a strategic approach is the most important decision point, and marks a gain in maturity in approaching the problem of insecure software. It means moving from an initial ad-hoc approach to a strategic approach of process definition, management and optimization.
  • Besides the traditional CMM, the recent SAMM and BSIMM models are specific models that can be used to assess the maturity of software security practices. SAMM is organized into 4 essential critical business functions, three practices for each function, and maturity levels from 1 to 3 for each practice. BSIMM defines 12 domains called "practices" and 110 activities organized into three maturity levels. Each activity has defined objectives and associated activities.
  • A practical example of using BSIMM is identifying the maturity level of the activities and the roadmap. Confessions of a Software Security Alchemist: On March 4th we released the Building Security In Maturity Model (BSIMM) under a Creative Commons license (and slightly ahead of schedule). Those of you who follow this column know that we built the BSIMM by gathering real data from nine large-scale software security initiatives. Seven of the nine companies we studied graciously agreed to let us identify them: Adobe, The Depository Trust & Clearing Corporation (DTCC), EMC, Google, Microsoft, QUALCOMM, and Wells Fargo. We could not have done this empirical work without the cooperation of the nine, including the "two who cannot be named." The BSIMM project adhered to one hard and fast scientific rule: only activities observed in the field could be added to the model. We started with a software security framework and a blank slate. As a result, BSIMM is the world's first software security yardstick based entirely on real world data and observed activities. Whether you run a software security initiative today or are charged with starting one tomorrow, you are likely to find the BSIMM incredibly useful. Empiricism Over Alchemy: A handful of millennia ago — say, around 400 BCE — a number of particularly inquisitive souls spent much of their time working on alchemy. Some historians of science argue that alchemy evolved into chemistry. (The term "evolved" might be a bit of an overstatement. McGraw's dad was a chemist, and he claimed to "make potions" for a living.) Though the elusive lead-into-gold recipe remains out of reach, empirically-grounded chemistry has served the modern world well. The time has come for software security to make the same shift away from alchemy towards empiricism. Early work in software security, including our own, concerned itself with advocacy and evangelism. We needed to convince the world that we had a serious problem. We succeeded.
In 2006, software security found itself embodied in three major methodologies: Microsoft SDL, Cigital Touchpoints, and OWASP CLASP. Not surprisingly, these three methodologies are very much like religions — charismatic leaders, articles of faith, and some basic best practices. If you stand back and squint, the three software security religions look basically the same. Both early phases of software security made use of any sort of argument or "evidence" to bolster the software security message, and that was fine given the starting point. We had lots of examples, plenty of good intuition, and the best of intentions. But now the time has come to put away the bug parade boogeyman, the top 25 tea leaves, black box web app goat sacrifice, and the occult reading of pen testing entrails. The time for science is upon us. We are aware of at least 35 software security initiatives underway. We chose to study nine. Our model is based on empirical evidence and data observation. Each of the 110 activities we identified was observed in the field. The example for each activity in BSIMM is a real example. The 110 activities are not "best practices;" they are actual practices. Science. BSIMM Basics: To give you a taste of how the BSIMM is constructed, we'll dive down through the model from the highest level. We introduce the Software Security Framework (SSF) in an informIT article where you can read more. Incidentally, judicious use of the BSIMM requires careful use of the SSF to structure the data-gathering interview. Simply waltzing down the list of 110 activities and asking, "do you do such and so?" is not an appropriate data-gathering mode. In our early work applying the BSIMM, we have already noticed a "Soviet revisionist history" problem with self-reporting that we will need to account for as the model evolves. "Hey, that's a good idea!" becomes "Didn't we try that once?" becomes "We've totally been doing that forever."
One way to avoid hyperbolic reporting issues is to structure the interview properly by discussing all of an organization's SSG activities in context, and then measuring the knowledge gained with the BSIMM yardstick. Another is to have the interview carried out by an experienced BSIMM interviewer. Here is the SSF: Figure 1. Each of the twelve practices in the SSF has an associated set of activities described in the BSIMM. There are 110 activities. The BSIMM document provides a description of each activity, including real examples of each activity as observed among the nine. Special note: all 110 activities are actually at work somewhere (even though we don't expect all 110 to be employed in any single organization)! The BSIMM website also has an interactive SSF. You can explore the practices and the activities by clicking around. To make this especially clear, we did NOT observe all 110 activities in all of the nine. In fact, only ten activities were observed in all nine initiatives. However, we DID observe each of the 110 activities in at least one of the nine (and in most cases, more than one). In that sense, even though we stuck to our data-driven guns, the BSIMM is an inclusive model. For each of the twelve practices, we have constructed a "skeleton" view of the activities divided into three levels. As an example, the skeleton for the Training practice is shown below: Figure 2. As you can see, there are eleven activities under the Training practice divided into three levels. Levels roughly correspond to maturity in that practice. Regarding levels, it is not necessary to carry out all activities in level 1 before moving on to level 2 or level 3 activities; however, the BSIMM levels correspond to a logical progression from "straightforward" to "advanced." In the body of the BSIMM (and on the website) is a paragraph describing each activity. Here is an example of the description for Activity T1.3: [T1.3] Establish SSG office hours.
The SSG offers help to any and all comers during an advertised lab period or regularly scheduled office hours. By acting as an informal resource for people who want to solve security problems, the SSG leverages teachable moments and emphasizes the carrot over the stick. Office hours might be held one afternoon per week in the office of a senior SSG member. On page 49 of the BSIMM model, we report the number of times each of the 110 activities was observed in the field among the nine. By referring to that chart, we can note that five of the organizations we studied perform this activity. Software Security 2.0 — An Empirical Yardstick: The most obvious way to use the BSIMM is as a yardstick for measuring your own software security initiative. You can do this by noting which of the activities you routinely carry out and which you don't, and then thinking about the level of maturity attained by your organization as evidenced by the level of the activities you carry out. We are collecting a set of data that is growing over time with a 110-bit activity vector for each organization. Using these data, it is possible to determine where you stand and how what you are doing compares to others. In the BSIMM we released two analyses that are useful for this comparison. The first is the average over the nine, which shows a "high water mark" maturity level for each of the twelve practices averaged over the nine. That is, if a level three activity was observed in a practice, that practice is noted as a three regardless of the number of activities in levels one and two. We sliced the data many ways, and the high-water mark turned out to be both useful and easy to calculate. Figure 3. By computing your own high water mark score and laying it over the graph above, you can quickly determine where you may be ahead of the game and where you may be behind.
This information becomes valuable when you switch from yardstick-mode to software security initiative planning-mode, adopting some of the BSIMM activities based on your local goals, your assessment of software security risks, and your organization's culture. A deeper analysis is also possible and is shown here (special note: the client data are FAKE): Figure 4. The table above is organized to roughly mirror the SSF. The SUM column shows how many of the nine performed each activity (from BSIMM page 49). Those activities where the organization carries out one of the activities that everybody does are shown as green blocks. Those activities where the organization does not do one of the things that everybody does are shown as red blocks. Those practices where the data show that the organization under review is "behind" the average are shown with the blue swath over the activities in the practice. In the fake client data above, the pretend organization should think hard about the red blocks in compliance and policy, training, architecture analysis, security testing, and software environment. Red doesn't mean you're negligent, but it does make you an outlier. Like your mother always told you, just because everyone else is doing it doesn't mean you have to do it too, but it is prudent at least to know why you're not doing it. Blue shift practices for this pretend organization include: strategy and metrics, training, and security requirements. Those "popular" activities in blue shift practices (where more than five of the nine carry out the activity) are shown as blue blocks. The blue shift activities might be a way to accelerate a program quickly within a given practice. Everyone is a Special Snowflake (Not): One of the surprises of the BSIMM work was that industry vertical has less impact than we thought it would on a software security initiative. We observed data from 4 ISVs (Microsoft, EMC, Google, and Adobe).
We also observed data from four financial services firms (Wells Fargo, DTCC, and two that remain un-named). Our intuition was that we would need to build two models, one for ISVs and one for Financial Services firms. The data show that intuition is wrong. Here is a spider chart showing the ISV high water mark average over the Financial Services high water mark average (these data are real): Figure 5. As you can see, the most apparent feature is the broad overlap between verticals. There is one difference worth noting: the ISVs studied have a tendency to emphasize testing (both security testing and pen testing) and configuration management/vulnerability management over the financial services firms. The financial services firms have a slightly greater emphasis on compliance and policy, security features and design, and training. This makes perfect sense if you think about regulatory compliance versus market perception issues affecting the two verticals. The data show that the BSIMM is useful as a yardstick for software security initiatives even from diverse verticals. As we gather more data, both from larger, established programs and from smaller, newer programs, we will determine just how robust this feature of the model is in practice. This is an important result, because it cuts through one of the software security myths: the myth of the special snowflake. Jim Routh, one of the study participants and the CISO of DTCC, puts it this way: "What we discovered is that a mature software security program for a financial services firm is very similar to one for an ISV like [Microsoft]. In fact they can and do use the same methods, techniques and tools for secure software development. The maturity for both entities can now be measured the same way with the same framework. This helps our firm when it communicates to ISVs and requests artifacts from the software security program. We always assumed that the ISV world of software security was different."
Put concisely, the BSIMM is an extremely useful yardstick for software security programs that can provide an important measuring tool. The Metrics/Culture Problem: We spoke to each of the nine about metrics, and previously reported some of our findings regarding the misuse of metrics. All of the nine have robust metrics systems in place and note the importance of good metrics to their success (and decry the distracting impact of bad metrics). Despite the common ground seen in the various SSG activities, we observed no common metrics shared among all of the nine. Our hypothesis is that metrics are valuable in a particular organizational culture. Furthermore, transferring a metrics program from one culture to another seems much like organ transplant — chances of rejection by the host are high. This is discouraging, because we all would like some common metrics that work for any organization, especially metrics that we can use to show that software security is actually improving. Meanwhile, we will settle for measuring activities under the assumption that such activities result in more secure software. The participants firmly believe that their SSG activities are a fundamental reason for their software security improvements, and we have no reason to doubt them.
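The "high water mark" scoring described above can be sketched in a few lines: a practice is scored at the highest level of any activity observed in it, regardless of lower-level coverage. The activity IDs and practice names below are illustrative placeholders (only T1.3 is taken from the article):

```python
# Sketch of BSIMM-style "high water mark" scoring: a practice scores the
# highest level among its observed activities.
def high_water_marks(observed, activity_levels):
    """observed: set of activity ids; activity_levels: id -> (practice, level)."""
    marks = {}
    for act in observed:
        practice, level = activity_levels[act]
        marks[practice] = max(marks.get(practice, 0), level)
    return marks

# Placeholder catalog; T1.3 ("establish SSG office hours") is from the article,
# the other IDs are made up for illustration.
activity_levels = {
    "T1.3": ("Training", 1),
    "T2.1": ("Training", 2),
    "SM1.1": ("Strategy & Metrics", 1),
    "SM3.2": ("Strategy & Metrics", 3),
}

print(sorted(high_water_marks({"T1.3", "T2.1", "SM1.1"}, activity_levels).items()))
```

Laying such per-practice marks over the published averages is exactly the "yardstick" comparison the article describes.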
  • An important aspect is the mapping of activities to the various levels of knowledge; in particular SAMM, for example, gives a mapping and essential activities depending on the type of company, e.g. a government or financial organization. Again with reference to the financial industry case, the mapping here is limited to software security assessments such as penetration testing and source code analysis, metrics and risk management. The aspect of the time it takes, as well as the cost, represented by the area under the curve, is important and not to be underestimated.
  • In my experience there are requirements for the software security initiative, namely: 1) people with software security skills that are usually not found among typical information security professionals, such as: ethical hackers who know how to break into applications but cannot tell you how to build a secure one; security engineering professionals who know how to run security assessment tools but do not have grass roots (i.e. experience) in software engineering, designing applications and coding; and information security professionals with little or no experience in checking security compliance in web applications.
  • The two sides of the same coin, application security and software security: many organizations start with pen tests and implement secure code assessments as the next step. The most common approach to finding vulnerabilities is to analyze the running application. The two techniques are "vulnerability scanning" (using tools and signature databases) and "penetration testing" (custom testing by experts). However, for many types of problems, analyzing the running application is very time-consuming and inaccurate. SQL injection, for example, is very difficult to find and diagnose in a running application, but can be quickly found by analyzing the source code. The other approach is to analyze the source code. Like pentesting, this can be done manually (source code review) or with tools (static analysis). Code-based approaches have a reputation for being expensive and time-consuming, but this reputation is unfounded. For many types of issues, using the code is many times faster and more accurate than penetration testing. The most cost-effective approach to application security is a "combined" or "integrated" approach. The assessor should be encouraged to use the most appropriate tool to find problems in the most cost-effective manner. For example, an assessor may notice a potential vulnerability during a penetration test, automatically scan the code for possible instances of the problem, and then confirm using code review. Note that the purely automated approaches (scanning and static analysis) are especially ineffective for application security (most experts put the effectiveness of pure scanning or static analysis at less than 20%). This is largely due to the custom nature of applications. Because each one is different, there is no database of signatures the automated tools can use.
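As a toy illustration of why SQL injection "can be quickly found by analyzing the source code", a naive check can flag queries built by string concatenation; a real static analysis tool does far more than this pattern match:

```python
import re

# Naive illustration of a static check: flag SQL statements built by string
# concatenation, the classic SQL injection root cause.
SQLI_PATTERN = re.compile(r'(SELECT|INSERT|UPDATE|DELETE).*["\']\s*\+',
                          re.IGNORECASE)

def flag_sqli(source: str):
    """Return the 1-based line numbers of suspicious query constructions."""
    return [n for n, line in enumerate(source.splitlines(), 1)
            if SQLI_PATTERN.search(line)]

code = '''query = "SELECT * FROM users WHERE name = '" + user_input + "'"
safe = cursor.execute("SELECT * FROM users WHERE name = %s", (user_input,))'''
print(flag_sqli(code))  # prints [1]: only the concatenated query is flagged
```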
  • Software security depends in large part on the security of the design; organizations that have implemented application scans and source code analysis move toward the left side of the SDLC to identify issues during the design phase. Threats map to the vulnerabilities they exploit:
    • Phishing: exploits weak authentication, authorization, session management and input validation (XSS, XFS) vulnerabilities
    • Privacy violations: exploit poor input validation, business rule and weak authorization, injection flaw, and information leakage vulnerabilities
    • Identity theft: exploits poor or non-existent cryptographic controls, malicious file execution, authentication, business rule and authorization check vulnerabilities
    • System compromise, data alteration or data destruction: exploit injection flaws and remote file inclusion/upload vulnerabilities
    • Financial loss: exploits unauthorized transactions and CSRF attacks, broken authentication and session management, insecure object reference, and weak authorization/forceful browsing vulnerabilities
    • Reputation loss: depends on any evidence (not necessarily exploitation) of a web application vulnerability
  • At level 3 of the CMM (as we will see later) the processes are defined and proactive. Ultimately a mature software security process blends both information risk management and software engineering processes in a software security framework. For example, threat modeling will identify threats and technical impacts during design that are used as a factor, along with business impact, in the calculation of the overall risk. Ideally, such a mature software security process should integrate software security activities in each phase of the SDLC. This also includes activities that are performed as part of the operational life-cycle, such as patch and incident management, as well as foundational activities such as metrics and measurements and security training and awareness. Ideally, for a software security framework to be useful in practice, it needs to apply to the different software development methodologies used by your organization. This might include evolutionary-iterative software life-cycles such as spiral, RUP, XP and Agile besides the traditional waterfall model. In the case of RUP and Agile, for example, this means that such software security best practices need to be iterated as the software evolves and reviewed at each iteration depending on the available artifacts.
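The idea of integrating security activities into each SDLC phase can be sketched as a simple mapping; the phase names and activity placement below are illustrative, not prescribed by the source:

```python
# Sketch: security activities integrated per SDLC phase (placement is
# illustrative, assumed for this example rather than taken from a standard).
SDLC_SECURITY_ACTIVITIES = {
    "requirements": ["security requirements", "abuse cases"],
    "design": ["threat modeling", "secure architecture review"],
    "coding": ["secure coding standards", "static source code analysis"],
    "testing": ["security testing", "penetration testing"],
    "operations": ["patch management", "incident management"],
}

def activities_for(phases):
    """For iterative life-cycles (RUP, Agile), call this on each iteration
    with the phases whose artifacts are available."""
    return [a for p in phases for a in SDLC_SECURITY_ACTIVITIES[p]]

print(activities_for(["design", "coding"]))
```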
  • And problem #1 is measurement, because for a maturity model to be managed at level 4 it has to be measured! From BSIMM one learns, for example, that each of the 9 organizations uses its own metrics and there is no standard way to do it. So, based on my experience, the essential metrics must define where, what and how. Discussion: security flaws are often sensitive. Should they be treated differently?
  • Objective #1 of the metrics is to provide information on the causes of insecure software, for example design flaws, coding bugs and configuration issues.
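A minimal root-cause metric along these lines could be a tally of findings by cause; the data and example findings below are made up for illustration:

```python
from collections import Counter

# Sketch: a minimal root-cause metric over vulnerability findings, using the
# three categories from the note above (findings are invented examples).
findings = [
    {"id": 1, "cause": "bug"},            # e.g. SQL injection in code
    {"id": 2, "cause": "flaw"},           # e.g. missing authorization in design
    {"id": 3, "cause": "configuration"},  # e.g. verbose error pages
    {"id": 4, "cause": "bug"},
]

by_cause = Counter(f["cause"] for f in findings)
for cause, count in sorted(by_cause.items()):
    print(f"{cause}: {count} ({100 * count / len(findings):.0f}%)")
```

Tracked per application and per business over time, this is the kind of trend data the earlier note calls essential.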
  • Metrics can also serve compliance requirements. Step 3: measure. By assessing the maturity of secure software engineering practices within your organization it is possible to set goals, such as which activities can be performed in the short term, mid term and long term. It is important to measure and to set achievable goals. Adopting software security metrics is a critical factor in measuring the effectiveness of secure software activities within the organization. For example, setting a baseline for the security posture of existing applications allows comparing results after the security activities have been implemented. Security metrics can be tied to specific objectives and goals when they are "SMART", that is Specific, Measurable, Attainable, Realistic, Traceable and Appropriate. An example of a SMART metrics goal can be reducing the overall number of vulnerabilities by 30% by fixing all low-hanging fruit with source code analysis during construction. Having software security metrics in hand is the best way to make the case for software security to the software security initiative stakeholders. If the data show what you are doing best and when, as well as what your gaps are, it is much easier to make the case for software security budget and resources, since you can prove how effective these can be for your organization. One of the most critical aspects when enacting a software security program is gaining support from different levels of management within your organization: this might require fighting some misconceptions, such as that security impacts performance, security impacts costs and security impacts development. This "fighting" might involve different battles depending on your role within your organization: as a developer lead you have to make the case to developers who are tired of rebuilding their application because a security auditor changed his mind on requirements. As an engineering director you have to make the case to project managers who worry about missing deadlines and how these could affect the budget, the costs and the performance. As an information security officer you need to make the case to the CIOs who worry about putting money into a process or a tool/technology and not being able to show the return to the company executives. In all cases this means communicating the case for software security effectively, showing where the problem is by documenting it with data, but also by providing potential solutions and trade-off options to each project stakeholder involved in the software security initiative. To developers, software security metrics can show that software can be built more securely and efficiently when source code analysis tools are used, secure coding standards are followed and software security training is available. To project managers, software security metrics can show that projects are on schedule, moving on target for delivery dates, and getting better during tests. Such metrics also build the business case if the initiative comes from information security officers. For example, they need to provide evidence that security checkpoints and assessments that are part of the SDLC do not impact project delivery but rather reduce the resources and the workload needed to address vulnerabilities later in production, and allow delivery on time. They show that projects do not get stuck during the pre-production phase fixing high risk vulnerabilities, for example. To compliance auditors, software security metrics provide a level of software security assurance and confidence that security standard compliance is addressed through the security review processes within the organization. Ultimately the software security metrics need to be tied to the business case and support business decision making. For a CIO and CISOs who need to make decisions on budgeting for the right resources and technologies, this could mean showing the Return On Security Investment (ROSI) metrics, for example. Return on Investment (ROI), over-simplified, means that if I spend $100K on something, I want to know that in a certain period of time the money I spent is going to return something to me. A good metric for software security should support both cost and risk management decision making and answer some critical questions, such as: Is the investment in the security solution (e.g. SSDLC activity, tool etc.) financially justified? Is the cost of the security solution less than the cost due to the risk of security exposure? What are the most cost-effective security solutions? Which investment in security activities is most effective in reducing costs in fixing security defects?
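The ROSI questions at the end of this note can be made concrete with a small numeric sketch; all figures below are invented for illustration:

```python
# Illustrative ROSI (Return On Security Investment) sketch with made-up numbers.
# A common simplified form: ROSI = (risk exposure mitigated - cost) / cost.
def rosi(annual_loss_expectancy, mitigation_ratio, solution_cost):
    savings = annual_loss_expectancy * mitigation_ratio
    return (savings - solution_cost) / solution_cost

# E.g. $500K expected annual loss, a solution mitigating 40% of it for $100K:
print(f"{rosi(500_000, 0.40, 100_000):.0%}")  # prints "100%"
```

A positive ROSI answers "is the investment financially justified?"; comparing ROSI across candidate solutions answers "which is most cost-effective?".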
  • Some reference examples for metrics that must measure whether the process is 1) adopted correctly and 2) managed toward its objectives.
  • In my opinion the fundamental domains are not the 4 x 3 of Pravir Chandra (compliance, construction, validation, deployment) or the 4 x 3 of Gary McGraw (governance, intelligence, SDLC touchpoints, deployment), but 3: people, process and technology.
  • And in the end one must not forget the effect of support (or cheering) at all levels: project management, development directors, and ISOs; the SSI must be a win-win.
  • Most software is tested after being built.

Software Security Initiatives – Presentation Transcript

  • How to start a software security initiative within your organization: a maturity-based and metrics-driven approach. Marco Morana, OWASP Lead; TISO, Citigroup. Application Security for E-Government
  • Building the Rationale For Software Security
  • Building Awareness Cases Around Secure Software: analysts, incidents, compliance, and security costs all point to the same target
  • What does PCI-DSS compliance say regarding application security?
    • [PCI-DSS] 6 Develop and Maintain Secure Systems and Applications
      • All vulnerabilities must be corrected.
      • The application must be re-evaluated after the corrections.
      • The application firewall must detect and prevent web based attacks such as cross site scripting and SQL injection.
    • [PCI-DSS] 11 Regularly Test Security Systems and Processes
    • [PCI-DSS] 11.3.2 External application layer penetration test.
      • For web applications, the tests should include, at a minimum, testing for OWASP T10 vulnerabilities
  • Which Vulnerabilities Are Mostly Exploited For Attacks ? SOURCE: Breach Security The WHID 2009, August 2009
  • What do the “experts” say about application security?
    • “75% of security breaches happen at the application” – Gartner
    • “Over 70 percent of security vulnerabilities exist at the application layer, not the network layer” – Gartner
    • 92% of reported vulnerabilities are in applications, not in networks – NIST
    • “If only 50 percent of software vulnerabilities were removed prior to production … costs would be reduced by 75 percent” – Gartner
    • The cost of fixing a bug in the field is $30,000 vs. $5,000 during coding – NIST
    The majority of incidents, about 70-75%, occur at the application layer; fixing vulnerabilities in source code costs about 70-80% less than fixing them during field tests
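Combining the per-bug fix costs quoted above ($5,000 during coding vs. $30,000 in the field) gives a rough way to estimate the savings from catching defects earlier. The bug counts and the `remediation_cost` helper below are hypothetical; the resulting percentage depends entirely on the cost ratio assumed:

```python
# Rough cost comparison using the per-bug figures quoted on this slide:
# $5,000 to fix a bug during coding vs. $30,000 in the field.
# Bug counts are hypothetical.

COST_CODING = 5_000
COST_FIELD = 30_000

def remediation_cost(total_bugs, fraction_fixed_in_coding):
    """Blend the two fix costs by where bugs are caught."""
    in_coding = total_bugs * fraction_fixed_in_coding
    in_field = total_bugs - in_coding
    return in_coding * COST_CODING + in_field * COST_FIELD

baseline = remediation_cost(100, 0.0)   # everything fixed in the field
shifted = remediation_cost(100, 0.5)    # half the bugs caught during coding
savings = 1 - shifted / baseline
print(f"Savings from shifting 50% left: {savings:.0%}")  # prints 42%
```

With this particular 6:1 cost ratio, shifting half the bugs left saves about 42%; a steeper ratio, such as the 10x quoted later for UAT vs. unit tests, brings the saving closer to the 75% figure Gartner cites.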
  • What is your company approach toward application security ?
  • Software Security Maturity And Metrics Driven Standard Approaches
  • Software Security Maturity and Metrics
    • Maturity makes executives aware of how the software security effort compares to everyone else’s
      • Assess capabilities and make them visible
      • Point to goals and the activities needed to reach them
      • Provide the context for software security activities
    • Measurements make it possible to articulate a case for software security backed with data
      • Analyze vulnerability assessment processes and data
      • Point to software security root causes
      • Identify historical vulnerability gaps and trends
      • Prepare a plan for software security improvements
  • Defect Management/Cost Metrics
    • Process Metrics
    • Is code validated against secure coding standards?
    • Is design reviewed against security requirements?
    • % of developers trained in organizational security best practices (technology, architecture and processes)
    • Management Metrics
    • % of applications rated “business-critical” that have been security tested
    • % of projects that were developed with the SDL
    • % of security issues identified by lifecycle phase
    • % of issues whose risk has been accepted
    • % of security issues being fixed
    • Average time to correct vulnerabilities
    • Business impact of critical security incidents.
    Most of my vulnerabilities are coding and design issues, but they are mostly found during penetration tests in UAT; the cost of fixing them in UAT is 10x the cost of fixing them during coding (unit tests)
  • Vulnerability Management Metrics
    • Process Metrics
    • Is code validated against secure coding standards?
    • Is design reviewed against security requirements?
    • % of developers trained in organizational security best practices (technology, architecture and processes)
  • Essential Plan For Software Security Initiatives
    • Assess the maturity level of the software security of the organization
    • Define the standards, software security process and training
    • Implement software security engineering activities as part of the SDLC
      • Security Requirements
      • Secure Design and Threat Modeling
      • Secure Coding Guidelines and Security Code Review
      • Security Testing
      • Secure Deployment
    • Measure and manage risks during the SDLC
    • Optimize and improve
  • Capability Maturity Models (CMM)
  • Software Security Activities Mapped to CMM
    • Initial to Repeatable: From CMM Level 1 to Level 2
      • Penetrate and patch ad-hoc approach
      • Existing high risk applications are pen tested
      • Incidents lead to in-depth vulnerability tests
    • Repeatable to Defined: From CMM Level 2 to Level 3
      • Vulnerabilities are tracked and managed per application
      • All development projects are pen tested and source code analyzed
    • Managed to Optimizing: From CMM Level 4 to Level 5
      • Metrics are correlated and analyzed at each phase of the SDLC and risks are proactively managed
      • Risk measurements are used for improving security engineering and risk management processes
  • Moving From Tactical to Strategic Planning. From: reactive security, pen tests, catch and patch. To: metrics, risk management, holistic security
  • Standard Software Security Maturity Models: SAMM, BSIMM
  • Code Review Activities and Capability Levels: BSIMM (“I am here”)
  • Software Security Maturity Curve Example
  • Prerequisite #1 For Software Security Initiative: Acquire People with the Right Skills
  • From Application Security to Software Security: vulnerability assessments, automated vulnerability scanning, automated static code analysis, manual penetration testing, manual code review
  • From Source Code Analysis to Application Threat Modeling (diagram): maps vulnerabilities (injection flaws; XSS, XFS, SQL injection; CSRF; weak session management; weak business rule and authorization checks; malicious file execution; insecure direct object references; weak authentication and authorization flaws; forceful browsing; information disclosure via errors and files; broken authentication; database connection passwords in the clear; lack of session synchronization; insecure transit; URL parameter tampering; insecure storage with poor or non-existent cryptographic controls) to their business impacts:
    • Phishing
    • Privacy violations
    • Financial loss
    • Identity theft
    • System compromise, data alteration, destruction
    • Reputation loss
  • Define Software Security Engineering Processes
  • Essential Software Security Metrics
    • Define where :
      • Tracking security defects throughout the SDLC
    • Define what qualitatively:
      • Root causes: requirements, design, code, application
      • Type of the issues (e.g. bugs vs. flaws vs. configuration)
      • Severity (Critical, High, Medium, Low)
      • SDLC stage where most flaws originate
    • Define how quantitatively:
      • % of Critical, High, Medium, Lows for application
      • % of vulnerabilities closed/open
      • Vulnerability density (security bugs/LOC)
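The quantitative metrics listed above can be computed directly from a defect-tracking export. The findings list and line-of-code figure below are hypothetical examples:

```python
# Sketch of the quantitative metrics listed on this slide, computed
# from a hypothetical list of findings for one application.

findings = [
    {"severity": "High", "status": "open"},
    {"severity": "High", "status": "closed"},
    {"severity": "Medium", "status": "closed"},
    {"severity": "Low", "status": "open"},
]
lines_of_code = 20_000  # hypothetical application size

total = len(findings)
closed = sum(1 for f in findings if f["status"] == "closed")
by_severity = {}
for f in findings:
    by_severity[f["severity"]] = by_severity.get(f["severity"], 0) + 1

print(f"% closed: {closed / total:.0%}")                            # 50%
print({s: f"{n / total:.0%}" for s, n in by_severity.items()})
print(f"Density: {total / (lines_of_code / 1000):.2f} bugs/KLOC")   # 0.20
```

Vulnerability density is reported per KLOC here so that applications of different sizes can be compared on the same scale.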
  • Defect Taxonomy in Support of Root Cause Analysis and Defect Containment Objectives
    • Analysis to support focused remediation, risk prioritization and tracking:
      • Security Design Flaws
        • Introduced because of errors in design
        • Can be identified with threat modeling and manual code reviews
      • Security Coding Bugs
        • Coding errors that result in vulnerabilities
        • Can be identified with secure code reviews and/or tools
      • Security Configuration Issues
        • Introduced after tests because of a change in the secure configuration of the application, the server, or the infrastructure components
        • Can be identified by testing the application in a staging environment close to production
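A minimal sketch of how this taxonomy might be represented for root-cause tracking; the class and field names are illustrative, not part of any standard:

```python
# Minimal sketch of the defect taxonomy above, as it might be stored
# in a defect-tracking database. All names are illustrative.

from dataclasses import dataclass
from enum import Enum

class RootCause(Enum):
    DESIGN_FLAW = "design flaw"           # found via threat modeling / manual reviews
    CODING_BUG = "coding bug"             # found via secure code reviews / tools
    CONFIG_ISSUE = "configuration issue"  # found via staging-environment tests

@dataclass
class SecurityDefect:
    title: str
    cause: RootCause
    severity: str  # Critical / High / Medium / Low

defects = [
    SecurityDefect("SQL injection in login form", RootCause.CODING_BUG, "Critical"),
    SecurityDefect("Missing authorization check", RootCause.DESIGN_FLAW, "High"),
]

# Defect-containment report: how many defects fall under each root cause.
report = {}
for d in defects:
    report[d.cause.value] = report.get(d.cause.value, 0) + 1
print(report)  # {'coding bug': 1, 'design flaw': 1}
```

Tagging every defect with a root cause is what makes the containment objective measurable: remediation can then be focused on the phase (design, coding, or configuration) producing the most issues.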
  • Specific Compliance Driven Metrics: PCI-DSS and OWASP T10 Example Security Metrics – The Latest From Metric 2.0, Korelogic: http://www.issa-centralva.org/presentations/SecurityMetrics09122007.pdf
  • Examples of Software Security Metrics
    • Process Metrics
    • Evidence that security checkpoints are enforced
      • Secure code analysis
      • Vulnerability assessments
    • Evidence that source code is validated against security standards (e.g. OWASP ASVS)
    • Evidence of security oversight by security officers, SME:
      • Security officers signing off design documents
      • SMEs participate in secure code reviews
      • Security officers complete risk assessments
    • Training coverage on software security
    • Management Metrics
    • % of security issues identified by lifecycle phase
    • % of issues whose risk has been accepted vs. % of security issues being fixed
    • % of issues per project over time (quarter over quarter)
    • % of type of issues per project over time
    • Average time required to fix/close vulnerabilities during design, coding and testing
    • Average time to fix issues by issue type
    • Average time to fix issue by application size/code complexity
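For example, the "average time to fix by lifecycle phase" metric can be computed from per-defect fix records. The phases and day counts below are hypothetical:

```python
# Sketch computing "average time to fix by lifecycle phase" from
# hypothetical (phase, days_to_fix) records.

from collections import defaultdict

fix_records = [
    ("design", 2), ("design", 4),
    ("coding", 1), ("coding", 3), ("coding", 2),
    ("testing", 8), ("testing", 12),
]

totals = defaultdict(list)
for phase, days in fix_records:
    totals[phase].append(days)

for phase, days in totals.items():
    print(f"{phase}: {sum(days) / len(days):.1f} days on average")
# design: 3.0 days on average
# coding: 2.0 days on average
# testing: 10.0 days on average
```

The same grouping works for the other breakdowns on this slide (by issue type, or by application size/code complexity) by swapping the grouping key.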
  • Essential Elements for the Adoption and Assimilation of Software Security Initiatives
    • People, Commitment, Awareness and Training
    • Software Security Standards
    • Software Security Frameworks and Risk Management Processes
    • Software Security Tools
    • Security = Commitment x (Tools + Processes^2)
  • Finally: Ensure Continuous Support To The Software Security Initiative
      • Development directors: show that developers are getting better at coding defensively
      • Project managers: show that projects are on schedule and on target, and that vulnerability testing cycles are shorter, translating into cost savings
      • Information security officers: show that the application security posture is improving over time by proactively managing risks in compliance with information security standards and processes
  • Questions & Answers
  • Thanks for listening, further references
    • Applied Software Measurement: Assuring Productivity and Quality
      • http://www.amazon.com/Applied-Software-Measurement-Assuring-Productivity/dp/0070328269
    • PCI-Data Security Standard (PCI DSS)
      • https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml
    • A CISO’s Guide to Application Security
      • http://www.nysforum.org/committees/security/051409_pdfs/A%20CISO'S%20Guide%20to%20Application%20Security.pdf
    • The Web Hacking Incidents Database (WHID) 2009, Breach Security
      • http://www.breach.com/resources/whitepapers/downloads/WP_TheWebHackingIncidents-2009.pdf
    • Return on Security Investment (ROSI)
      • http://www.infosecwriters.com/text_resources/pdf/ROSI-Practical_Model.pdf
      • https://buildsecurityin.us-cert.gov/daisy/bsi/articles/knowledge/business/677-BSI.html
    • The Balanced Scorecard
      • http://www.balancedscorecard.org/BSCResources/AbouttheBalancedScorecard/tabid/55/Default.aspx
    • Gartner study on phishing
      • http://www.gartner.com/it/page.jsp?id=565125
    • UC Berkeley Center for Law and Technology on identity theft
      • http://repositories.cdlib.org/cgi/viewcontent.cgi?article=1045&context=bclt
  • Further references (cont.)
    • Gartner 2004 Press Release
      • http://www.gartner.com/press_releases/asset_106327_11.html
    • Software Assurance Maturity Model
      • http://www.opensamm.org/
    • The Software Security Framework (SSF)
      • http://www.bsi-mm.com/ssf/
    • SEI Capability Maturity Model Integration CMMi
      • http://www.sei.cmu.edu/cmmi/
    • The Microsoft Security Development LifeCycle
      • http://msdn.microsoft.com/en-us/security/cc448177.aspx
  • Further references (cont.)
    • The Seven Touchpoints of Software Security
      • http://www.buildsecurityin.com/concepts/touchpoints/
    • OWASP CLASP
      • http://www.owasp.org/index.php/Category:OWASP_CLASP_Project
    • ITARC Software Security Assurance
      • http://iac.dtic.mil/iatac/download/security.pdf
    • Internet Crime Complaint Center
      • http://www.ic3.gov/default.aspx
  • Further references (cont.)
    • OWASP Education Module Embed within SDLC
      • http://www.owasp.org/index.php/Education_Module_Embed_within_SDLC
    • Producing Secure Software With Software Security Enhanced Processes
      • http://www.net-security.org/dl/insecure/INSECURE-Mag-16.pdf
    • Security Flaws Identification and Technical Risk Analysis Through Threat Modeling
      • http://www.net-security.org/dl/insecure/INSECURE-Mag-17.pdf