Read this OWASP presentation on how companies measure risk in their Web applications. Presented at the Bay Area OWASP event (January 2010) by Cenzic CTO, Lars Ewe.
To familiarize ourselves with vulnerability databases, their terminology, standards, and procedures for sharing vulnerability data. To understand how these data sources are integrated into commercial security software tools that help organizations manage their vulnerabilities. These software tools are generally grouped under the term “vulnerability scanners” (or similar terms). To examine a few “classic” vulnerabilities in depth to get a sense of just how vulnerabilities expose systems to exploitation.
This presentation discusses the importance of threat modeling and different ways to perform it. Threat modeling should be done during the design phase of application development. Its main aim is to identify the important assets and functionalities of the application and to protect them. Threat modeling cuts down the cost of application development because it identifies issues during the design phase. The presentation also covers the basics of mobile threat modeling and concentrates mainly on STRIDE and DREAD.
Vulnerability scanning report by Tareq Hanaysha
In this executive summary, we walk visually through the vulnerability scan we performed using Nessus and Nsauditor, providing screenshots to clarify the scan and make our procedures easier for readers to follow. We then introduce our work, summarize our findings, vulnerabilities, risks, and threats, and offer solutions and recommendations for these security problems in our conclusion.
Alex Sidorenko (www.risk-academy.ru) masterclass at the Risk Zone 2014 in Munich on quantitative risk analysis in project management and cognitive biases
In this presentation we look at approaches to analyzing risk, going into the details of qualitative and quantitative risk analysis. This presentation will help professionals who are preparing for the PMP certification exam.
Presentation by Charl van der Walt and Francesco Geremla at the ITWeb Security Summit in 2009.
This presentation is about the methodology behind version 2 of Sensepost's threat modeling tool, the corporate threat modeller.
The 2023 Vulnerability Stats report as delivered to the IISF.
Covering: PTaaS, pentesting, vulnerability management, EPSS, CISA KEV, risk, and attack surface management. It is based on delivering thousands of PTaaS and RBVM assessments throughout 2022, and examines why tools and traditional pentesting have failed.
Vulnerability Management Nirvana - Seattle Agora - 18 Mar 2016, by Kymberlee Price
Vulnerability Management Nirvana: A Study in Predicting Exploitability
When everything is a priority, nothing is. 15% of vulnerabilities, roughly 10,000, have a CVSS score of 10. Vendors and practitioners alike use CVSS or their own threat intelligence models to predict which vulnerabilities will be exploited next. We review current options, present a predictive data-driven prioritization model, and show how attendees can get started using our approach in their vulnerability management program.
In January 2024, we decided to evaluate the most-used network vulnerability scanners - Nessus Professional, Qualys, Rapid7 Nexpose, Nuclei, OpenVAS, and Nmap vulnerability scripts - including our own, in a benchmark that industry peers can validate independently.
Here’s why we did it, what results we got, and how you can verify them (there’s a white paper you can download with access to all the results behind this benchmark).
Black Hat 2014: Don’t be a Target: Everything You Know About Vulnerability Pr... - Skybox Security
Presented at Black Hat 2014.
Heartbleed. Target. Adobe … businesses are under siege by cybercriminals looking for financial gain and political actors looking for trade secrets. It’s a wildly uneven match where a motivated attacker can find exploitable attack vectors in minutes and maintain unabated access for months, while the security team continues to rely on time-honored methodology to fix vulnerabilities in order of severity.
But severity-based vulnerability management misses the mark completely, as it overlooks the fact that risk exposure is the real concern. This workshop will focus on identifying critical vulnerabilities so they can be fixed as quickly as possible, ensuring a reduction in risk and a shrinking attack surface over time.
In this deep dive session on vulnerability analysis and prioritization, we’ll cover:
- Calculating risk exposure: Risk = Impact * Likelihood * Time
- The data you need to be collecting about assets and vulnerabilities
- Prioritizing vulnerabilities using simple two-factor relationships
- Asset-to-vulnerability correlation to augment the accuracy and freshness of active scan data
- Techniques to drive down the risk exposure time
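The risk-exposure formula in the bullets above can be sketched in code. This is a minimal illustration only: the 0..1 factor scales and the `risk_exposure` helper are assumptions for demonstration, not Skybox's actual model.

```python
# Minimal sketch of Risk = Impact * Likelihood * Time (illustrative scales).
# Impact and likelihood are normalized to 0..1; time is days of exposure,
# so risk grows the longer a critical vulnerability stays unfixed.

def risk_exposure(impact: float, likelihood: float, days_exposed: float) -> float:
    """Return a relative risk-exposure score for one vulnerability."""
    if not (0.0 <= impact <= 1.0 and 0.0 <= likelihood <= 1.0):
        raise ValueError("impact and likelihood must be in [0, 1]")
    return impact * likelihood * days_exposed

# Two findings with equal severity but different exposure windows:
fresh = risk_exposure(impact=0.9, likelihood=0.8, days_exposed=2)
stale = risk_exposure(impact=0.9, likelihood=0.8, days_exposed=90)
assert stale > fresh  # driving down exposure time reduces risk
```

The point of the time factor is exactly the last bullet: two identical vulnerabilities differ in risk purely by how long they stay exposed.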
The 2018 Vulnerability Stats report, covering a full-stack review of cyber security across thousands of web applications, endpoints, and cloud-based systems globally.
Explain in Hindi: https://www.youtube.com/watch?v=6xqkDB3NHN0
Discovering vulnerabilities is important, but being able to estimate the associated risk to the business is just as important. Early in the life cycle, one may identify security concerns in the architecture or design by using threat modeling. Later, one may find security issues using code review or penetration testing. Or problems may not be discovered until the application is in production and is actually compromised.
Reference: https://owasp.org/www-community/OWASP_Risk_Rating_Methodology
https://www.owasp-risk-rating.com/
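The referenced OWASP methodology rates likelihood and impact factors on 0-9 scales, averages each group, buckets them as LOW/MEDIUM/HIGH, and reads overall severity from a matrix. A minimal sketch of that calculation follows; the factor scores in the example are invented, and the function names are mine, not OWASP's.

```python
# Sketch of the OWASP Risk Rating Methodology: average the 0-9 likelihood
# factors and the 0-9 impact factors, bucket each as LOW/MEDIUM/HIGH,
# then read overall severity from the likelihood x impact matrix.

def bucket(score: float) -> str:
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

def overall_severity(likelihood_factors, impact_factors) -> str:
    likelihood = sum(likelihood_factors) / len(likelihood_factors)
    impact = sum(impact_factors) / len(impact_factors)
    matrix = {  # (likelihood, impact) -> overall risk severity
        ("LOW", "LOW"): "Note",    ("LOW", "MEDIUM"): "Low",     ("LOW", "HIGH"): "Medium",
        ("MEDIUM", "LOW"): "Low",  ("MEDIUM", "MEDIUM"): "Medium", ("MEDIUM", "HIGH"): "High",
        ("HIGH", "LOW"): "Medium", ("HIGH", "MEDIUM"): "High",   ("HIGH", "HIGH"): "Critical",
    }
    return matrix[(bucket(likelihood), bucket(impact))]

# Invented factor scores for one finding (e.g. skill, motive, opportunity, size):
print(overall_severity([5, 7, 6, 8], [4, 5, 3, 6]))  # High
```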
Inspired by my work on understanding the effects of the EU Cyber Resilience Act, I made this presentation on vulnerability handling: SBOM, VEX, CVE, CVSS, CWE, and more.
Vulnerability Management In An Application Security World: AppSecDC - Denim Group
Identifying application-level vulnerabilities via penetration tests and code reviews is only the first step in actually addressing the underlying risk. Managing vulnerabilities for applications is more challenging than dealing with traditional infrastructure-level vulnerabilities because they typically require the coordination of security teams with application development teams and require security managers to secure time from developers during already-cramped development and release schedules. In addition, fixes require changes to custom application code and application-specific business logic rather than the patches and configuration changes that are often sufficient to address infrastructure-level vulnerabilities.
This presentation details many of the pitfalls organizations encounter while trying to manage application-level vulnerabilities as well as outlines strategies security teams can use for communicating with development teams. Similarities and differences between security teams’ practice of vulnerability management and development teams’ practice of defect management will be addressed in order to facilitate healthy communication between these groups.
Risk Management Insight
FAIR
(FACTOR ANALYSIS OF INFORMATION RISK)
Basic Risk Assessment Guide
FAIR™ Basic Risk Assessment Guide
All Content Copyright Risk Management Insight, LLC
NOTE: Before using this assessment guide…
Using this guide effectively requires a solid understanding of FAIR concepts
‣ As with any high-level analysis method, results can depend upon variables that may not be accounted for at this level of abstraction
‣ The loss magnitude scale described in this section is adjusted for a specific organizational size and risk capacity. Labels used in the scale (e.g., “Severe”, “Low”, etc.) may need to be adjusted when analyzing organizations of different sizes
‣ This process is a simplified, introductory version that may not be appropriate for some analyses
Basic FAIR analysis comprises ten steps in four stages:
Stage 1 – Identify scenario components
1. Identify the asset at risk
2. Identify the threat community under consideration
Stage 2 – Evaluate Loss Event Frequency (LEF)
3. Estimate the probable Threat Event Frequency (TEF)
4. Estimate the Threat Capability (TCap)
5. Estimate Control strength (CS)
6. Derive Vulnerability (Vuln)
7. Derive Loss Event Frequency (LEF)
Stage 3 – Evaluate Probable Loss Magnitude (PLM)
8. Estimate worst-case loss
9. Estimate probable loss
Stage 4 – Derive and articulate Risk
10. Derive and articulate Risk
[FAIR factor taxonomy diagram] Risk is derived from Loss Event Frequency and Probable Loss Magnitude. Loss Event Frequency is driven by Threat Event Frequency (contact and action) and Vulnerability (Control Strength versus Threat Capability). Probable Loss Magnitude is driven by Primary Loss Factors (Asset Loss Factors and Threat Loss Factors) and Secondary Loss Factors (Organizational Loss Factors and External Loss Factors).
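The derivation steps of the basic analysis (steps 6, 7, and 10) can be sketched as lookups, in the spirit of the guide's matrices. The tables below are simplified stand-ins I invented for illustration, not the official FAIR lookup tables.

```python
# Illustrative sketch of FAIR's derivation steps: Vulnerability from
# Threat Capability vs Control Strength (step 6), Loss Event Frequency
# from TEF x Vulnerability (step 7), and Risk from LEF x Probable Loss
# Magnitude (step 10). Matrices are simplified stand-ins.

LEVELS = ["VL", "L", "M", "H", "VH"]  # Very Low .. Very High

def derive_vulnerability(tcap: str, cs: str) -> str:
    """Vulnerability rises as threat capability outpaces control strength."""
    gap = LEVELS.index(tcap) - LEVELS.index(cs)
    return "H" if gap > 0 else "M" if gap == 0 else "L"

def combine(a: str, b: str) -> str:
    """Crude matrix stand-in: take the midpoint of the two levels."""
    return LEVELS[(LEVELS.index(a) + LEVELS.index(b)) // 2]

tef, tcap, cs, plm = "M", "H", "M", "H"   # estimates from steps 3-5 and 9
vuln = derive_vulnerability(tcap, cs)     # step 6
lef = combine(tef, vuln)                  # step 7
risk = combine(lef, plm)                  # step 10
print(f"Vuln={vuln} LEF={lef} Risk={risk}")  # Vuln=H LEF=M Risk=M
```

In a real FAIR analysis each of these lookups is a published two-dimensional matrix rather than a midpoint rule; the shape of the derivation is what this sketch shows.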
Stage 1 – Identify Scenario Components
Step 1 – Identify the Asset(s) at risk
In order to estimate the control and value characteristics within a risk analysis, the analyst must first identify the asset (object) under evaluation. If a multilevel analysis is being performed, the analyst will need to identify and evaluate the primary asset (object) at risk and all meta-objects that exist between the primary asset and the threat community. This guide is intended for use in simple, single-level risk analysis, and does not describe the additional steps required for a multilevel analysis.
Asset(s) at risk: ______________________________________________________
Step 2 – Identify the Threat Community
In order to estimate Threat Event Frequency (TEF) and Threat Capability (TCap), a specific threat community must first be identified. At minimum, when evaluating the risk associated with malicious acts, the analyst has to decide whether the threat community is human or malware, and internal or external. In most circumstances, it’s appropriate to define the threat community more specifically (e.g., network engineers, cleaning crew, etc.) and characterize the e.
2. Agenda
Risk Analysis for Web Applications
Common Scoring Systems
Cenzic HARM
(Hailstorm Application Risk Metric)
Q & A
OWASP 2
3. Risk Analysis for Web Applications
Why a quantitative risk metric? To help IT management manage risk and prioritize vulnerabilities and remediate those that pose the greatest risk.
Common risk metrics:
What’s impacted? How big is the impact?
What kind of damage can be done? What kind of data can potentially be compromised? Etc.
How easy is the exploit? What are the required prerequisites / circumstances?
Remediation complexity
…
4. Common Scoring Systems
Low-Medium-High qualitative system: probably the most common risk metric in use, but it lacks granularity, doesn’t scale well, and is not quantitative.
5. Common Scoring Systems – contd.
CVSS (Common Vulnerability Scoring System) consists of three metric groups (each consisting of a set of metrics):
Base – Represents the intrinsic qualities of a vulnerability
Temporal – Reflects the characteristics of a vulnerability that change over time
Environmental – Represents the characteristics of a vulnerability that are unique to any user’s environment
Each group produces a numeric score (0 to 10).
For scoring guidelines and equations, see the CVSS guide.
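As a concrete illustration of the equations the slide defers to, the CVSS v2 base score can be computed from the six base metrics like this (coefficients taken from the CVSS v2 specification):

```python
# CVSS v2 base score from the six base metrics (coefficients per the
# CVSS v2 specification). Example: a remote, no-auth, low-complexity
# vulnerability with complete C/I/A impact scores 10.0.

AV = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
AC = {"high": 0.35, "medium": 0.61, "low": 0.71}
AU = {"multiple": 0.45, "single": 0.56, "none": 0.704}
IMP = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def cvss2_base(av: str, ac: str, au: str, c: str, i: str, a: str) -> float:
    impact = 10.41 * (1 - (1 - IMP[c]) * (1 - IMP[i]) * (1 - IMP[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

print(cvss2_base("network", "low", "none", "complete", "complete", "complete"))  # 10.0
print(cvss2_base("local", "high", "multiple", "none", "none", "none"))           # 0.0
```

This is the v2 equation, matching the v2 metric names and values listed on the following slides; CVSS v3 and later use different formulas.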
6. A Brief Look At CVSS Metrics
Base – Represents the intrinsic qualities of a vulnerability
Access Vector (local, adjacent network, network) – Reflects how the vulnerability is exploited
Access Complexity (high, medium, low) – Measures the complexity of the attack required to exploit the vulnerability
Authentication (multiple, single, none) – Measures the number of times an attacker must authenticate to a target in order to exploit a vulnerability
Confidentiality Impact (none, partial, complete) – Measures the impact on confidentiality of a successfully exploited vulnerability
Integrity Impact (none, partial, complete) – Measures the impact on integrity of a successfully exploited vulnerability
Availability Impact (none, partial, complete) – Measures the impact on availability of a successfully exploited vulnerability
7. A Brief Look At CVSS Metrics
Temporal – Reflects the characteristics of a vulnerability that change over time
Exploitability (unproven, proof-of-concept, functional, high, not defined) – Measures the current state of exploit techniques or code availability
Remediation Level (official fix, temporary fix, workaround, unavailable, not defined) – Describes the level of available remediation
Report Confidence (unconfirmed, uncorroborated, confirmed, not defined) – Measures the degree of confidence in the existence of the vulnerability and the credibility of the known technical details
8. A Brief Look At CVSS Metrics
Environmental – Represents the characteristics of a vulnerability that are unique to any user’s environment
Collateral Damage Potential (none, low, low-medium, medium-high, high, not defined) – Measures the potential for loss of life or physical assets through damage or theft of property or equipment
Target Distribution (none, low, medium, high, not defined) – Measures the proportion of vulnerable systems
Security Requirements (low, medium, high, not defined) – Allows for customization of the CVSS score depending on the importance of the affected IT asset to a user’s organization, measured in terms of confidentiality, integrity, and availability
9. Cenzic HARM (Hailstorm Application Risk Metric)
Quantitative risk metric
The HARM score is built with inherent flexibility
HARM has a modifier that we call a weight: the “application weight” or “asset value”.
With the HARM score, more is bad: 500 is worse than 50
HARM score example:
10. Cenzic HARM – contd.
HARM takes 4 distinct impact areas into consideration:
Browser
Session
Application
Infrastructure (server environment)
Default HARM scores per vulnerability type represent Cenzic’s analysis of the risk inherent in the vulnerabilities, but can be modified by users
Visualize these four impact areas as a ringed target. Each quadrant of the target (“impact area”) is divided into 5 rings, ring 5 being the centermost ring, or the “bull’s eye”. The lowest level of application risk would hit ring 1
11. Cenzic HARM – Impact Areas
Each application risk level (ring) is named as follows:
1. Low
2. Moderate
3. Serious
4. Severe
5. Critical
12. Cenzic HARM – contd.
Mathematically, our Base Risk Equation is 2 raised to the power of the impact area value, times 10: Base Risk = 2^level × 10
Thus a vulnerability that is a critical security issue for the server environment (level 5) would score 320 (2^5 × 10)
13. Cenzic HARM – contd.
So for each impact area we can create a graph that shows the score at each risk level from 1 to 5 using the base risk equation:
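The graph follows directly from the base risk equation, 2^level × 10; a minimal sketch that tabulates all five rings:

```python
# HARM Base Risk Equation: score = 2**level * 10
# Ring names 1-5 as given on the previous slide.
RING_NAMES = {1: "Low", 2: "Moderate", 3: "Serious", 4: "Severe", 5: "Critical"}

def base_risk(level):
    """Base risk score for one impact area at the given ring (1-5)."""
    return 2 ** level * 10

for level in range(1, 6):
    print(f"{level} ({RING_NAMES[level]}): {base_risk(level)}")
# prints 20, 40, 80, 160, 320 for rings 1 (Low) through 5 (Critical)
```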
14. Cenzic HARM – contd.
Any vulnerability can impact a Web application in up to 4 different ways (4 impact areas). Within those 4 areas, the degree of risk ranges from 1 (“Low”) to 5 (“Critical”). The worst possible vulnerability would hit the “bull’s eye” in all 4 areas:
15. Cenzic HARM – contd.
What are the placement criteria Cenzic uses to determine the application risk level (ring) for a vulnerability? Answer: security values. Each security value also has 5 degrees of risk. Examples of security values and associated risk degrees:
A buffer overflow may give instant control of a system and is rated “Access 5”
A flat file containing 10,000 credit card numbers that may be exposed to the internet in the Web server root is rated “Confidentiality 5”
Both are worst-case scenarios, each scoring 320
16. Cenzic HARM – contd.
In summary, scoring a vulnerability is a matter of:
How the application cluster is hit (which impact areas are affected)
How hard (degree of effect within each impact area)
In what way (security values), and an estimate of the probability of success
Vulnerability risk is the sum of the risk scores from each of the four impact areas. Vulnerability Risk Equation (using α, β, σ, ε for the 4 different impact areas):
Vulnerability Risk = (2^α + 2^β + 2^σ + 2^ε) × 10
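Summing the per-area base risk (2^level × 10) over the four impact areas can be sketched as follows; I assume areas a vulnerability does not impact are simply omitted from the sum:

```python
def vulnerability_risk(levels):
    """Sum of per-area base risk scores (2**level * 10).

    levels: mapping of affected impact areas (browser, session,
    application, infrastructure) to their degree of risk (1-5).
    """
    return sum(2 ** level * 10 for level in levels.values())

# Worst possible vulnerability: "bull's eye" (level 5) in all four areas
print(vulnerability_risk(
    {"browser": 5, "session": 5, "application": 5, "infrastructure": 5}))  # 1280
```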
17. Cenzic HARM – contd.
There are some additional risk weights HARM considers:
Attack Complexity (χ). Examples:
Multi-staged XSS attack: “Complexity 3”, with a Risk Weight of 0.8
Simple SQL Injection (' or 1=1 --): “Complexity 5”, with a Risk Weight of 2
Detection Precision (δ). Examples:
Fuzzing and trapping error signatures, like buffer overflow: “Category 1 or 2”, with a Precision Weight < 1
In the case of XSS, we inject a watermarked script into the application and monitor the Web browser for the presence of an event that matches our watermark. This allows us to detect XSS with less than 1% false positives: “Category 5”, with a Precision Weight of 1
Asset Value (ω)
Assigned by user (default: 1)
18. Cenzic HARM – contd.
We can now compute the Adjusted Vulnerability Risk (using the additional risk weights) as follows:
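The exact equation is not shown in the extracted text; since the slides describe Attack Complexity (χ), Detection Precision (δ), and Asset Value (ω) as risk *weights*, a plausible sketch is simply multiplying the vulnerability risk by each weight. Treat the function below as an assumption, not Cenzic's published formula.

```python
def adjusted_vulnerability_risk(vuln_risk, chi=1.0, delta=1.0, omega=1.0):
    """Apply attack-complexity (chi), detection-precision (delta), and
    asset-value (omega) weights to a vulnerability risk score.
    Assumed multiplicative; all weights default to the neutral value 1."""
    return vuln_risk * chi * delta * omega

# Simple SQL injection ("Complexity 5" -> weight 2), precise detection
# (weight 1), default asset value (1), on a vulnerability risk of 320:
print(adjusted_vulnerability_risk(320, chi=2, delta=1, omega=1))  # 640
```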