The document compares dynamic and traditional probabilistic risk assessment methodologies. Traditional methodologies like fault trees, event sequence diagrams, and FMECA require analysts to assess possible system failures. Dynamic methodologies like Monte Carlo simulation use executable models to simulate system behavior probabilistically over time and automatically generate event sequences. Dynamic methods can address limitations of traditional approaches that rely heavily on analyst judgment.
Dynamic vs. Traditional Probabilistic Risk Assessment Methodologies - by Huairui Guo
1. Dynamic vs. Traditional Probabilistic Risk Assessment Methodologies
Huairui Guo
2. ASQ Reliability Division Chinese Webinar Series
One of the monthly webinars on topics of interest to reliability engineers.
To view upcoming or recorded webinars visit us today at www.asqrd.org
4. 4
• Determine potential undesirable consequences associated with use of systems and processes.
• Identify ways that such consequences could materialize.
• Estimate the likelihood (e.g., probability) of such events.
• Provide input to decision makers on optimal strategies to reduce the levels of risk.
Introduction to Risk Analysis
5. 5
• Risk is usually associated with the uncertainty and undesirability of a potential situation or event.
• In order to have a risk situation, both elements must be present.
Risk = Uncertainty and Undesirability
Risk = Likelihood and Severity
Definition of Risk
6. 6
Key metrics of risk are embedded in its definition. Risk can be measured in terms of:
the frequency or likelihood of occurrence of events, and
the degree or magnitude of their direct and indirect consequences.
Levels of risk need to be measured and compared with an acceptance or tolerance criterion.
Risk Metrics
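The likelihood-and-severity view above is often operationalized as a qualitative risk matrix. The sketch below is a hypothetical 3×3 scheme; the category boundaries and rankings are illustrative assumptions, not taken from the slides:

```python
# Hypothetical 3x3 risk ranking matrix: risk level as a function of
# likelihood and severity categories (L = low, M = medium, H = high).
RISK_MATRIX = {
    ("L", "L"): "Low",    ("L", "M"): "Low",    ("L", "H"): "Medium",
    ("M", "L"): "Low",    ("M", "M"): "Medium", ("M", "H"): "High",
    ("H", "L"): "Medium", ("H", "M"): "High",   ("H", "H"): "High",
}

def rank_risk(likelihood: str, severity: str) -> str:
    """Look up the qualitative risk level for a (likelihood, severity) pair."""
    return RISK_MATRIX[(likelihood, severity)]

# A frequent event with severe consequences is ranked highest.
print(rank_risk("H", "H"))  # High
```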
7. 7
• Risk assessment is the process of providing answers to four basic questions:
1. What can go wrong?
2. What are the consequences?
3. How frequently might they happen?
4. How confident are we about our answers to the above questions?
• Answering these questions could be simple or require a significant amount of analysis and modeling.
Risk Assessment
8. 8
Managing risk requires answers to the following questions:
1. What can be done:
- to prevent/avoid risk?
- to mitigate risk?
- to detect/notify of risk?
2. How much will it cost?
3. How efficient is it?
Risk Management
9. 9
[Figure: anatomy of a risk scenario. A timeline over the mission time begins with a perturbation (initiating event), passes through branch points (pivotal events), and terminates in end states: success of mission, or risk scenarios such as loss of mission, abort, or degraded mission.]
A path from the initiating event to an end state is called a scenario.
Anatomy of a Risk Scenario
12. 12
• Traditional Methodologies
– Fault Tree
– Event Sequence Diagram
– FMECA
– etc.
• Dynamic Methodologies
– Monte Carlo Simulation
Risk Assessment Methodologies
13. 13
• Traditional methodologies are methods for identifying and assessing the probability of situations leading to an undesired state of a system.
• Traditional methodologies require the analyst to enumerate and assess possible system failures.
• The quality of a PRA using traditional methodologies is therefore analyst-dependent.
Traditional Methodologies
14. 14
• Inductive Method: Induction involves reasoning from individual cases to a general conclusion.
– Event Sequence Diagram
– FMECA
– Reliability Block Diagram
– etc.
• Deductive Method: Deduction constitutes reasoning from the general to the specific. In a deductive system analysis, it is postulated that the system itself has failed in a certain way, and an attempt is made to find out what modes of system or subsystem (component) behavior contribute to this failure.
– Fault Tree
Traditional Methodologies
16. 16
• The protection system is designed to operate in the following manner. If a runaway reaction takes place, the pressure and temperature sensors will detect the increase in pressure and temperature above a threshold setting. The provision of sensors for both temperature and pressure provides redundancy in the shut-down system design, as only one of these sensors needs to indicate that the threshold is exceeded in order to send a signal to the alarm unit and valve controller. The function of the valve controller is to signal both electrical valves to close. Both input streams must be shut down to ensure the runaway reaction is halted. The alarm unit indicates to the operator that a runaway reaction is taking place. If either of the two electrical valves fails, the operator may shut valves MV1 and MV2 manually. Both electrical valves are powered from the grid.
• If the input stream valves do not close, one of two possible hazardous events will occur. If the pressure relief valve NRV opens successfully, the runaway reaction will be halted with a minor release of toxic chemicals. If the pressure relief valve NRV is stuck closed, the reactor vessel will rupture with a major release of toxic chemicals.
Examples
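The protection-system narrative can be quantified as a small event tree. The sketch below is a minimal illustration: all branch probabilities are invented placeholders, and only the structure (sensor detection, valve closure with operator backup, then the NRV branch) follows the description above:

```python
# Illustrative event-tree quantification for the runaway-reaction example.
# Branch probabilities are made-up placeholders for demonstration only.
p_detect = 0.999       # at least one of the two sensors trips
p_valves_close = 0.98  # both electrical valves close on command
p_operator = 0.90      # operator closes MV1/MV2 manually if a valve fails
p_nrv_opens = 0.95     # pressure relief valve NRV opens on demand

# Shutdown succeeds if detection works and either the valves close
# or the operator intervenes successfully.
p_shutdown = p_detect * (p_valves_close + (1 - p_valves_close) * p_operator)
p_no_shutdown = 1 - p_shutdown

# If shutdown fails, the NRV determines minor vs. major release.
p_minor_release = p_no_shutdown * p_nrv_opens
p_major_release = p_no_shutdown * (1 - p_nrv_opens)

print(f"P(safe shutdown) = {p_shutdown:.6f}")
print(f"P(minor release) = {p_minor_release:.6f}")
print(f"P(major release) = {p_major_release:.6f}")
```

Note that `p_no_shutdown` lumps detection failure together with valve-and-operator failure; a fuller event tree would keep those sequences distinct.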
17. 17
• Identify the objective
• Define the Initiator/Top Event.
• Define the scope.
• Define the resolution.
• Define ground rules.
• Construct the Model.
• Evaluate the Model.
• Interpret and present the results.
Procedures
22. 22
– Build Model
• Common Cause Failure
– Quantify Basic Events
• Hardware Failures
• Software/Human Failures
– Results
• Accident Probability
• Cut Set / Importance Measure
• Uncertainty
Key Elements
23. 23
– Demand Based Models: Events which occur at the specific time (absolute mission time or time relative to the occurrence of a previous event) that an item is called upon (demanded) to function.
– Time Distributed Models: Events which occur over an interval of time, for which the probability of failure over the length of the interval is expressed as a point estimate and an uncertainty distribution.
Failure Types
24. 24
• Models specify a distribution over the probability of occurrence of an event
• The distribution consists of a parametric distribution model, e.g., lognormal, Beta
• Point estimate values are approximated using parametric distributions (e.g., uniform) with small standard deviations
Demand Based Models
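A demand-based model as described above can be sketched in a few lines: draw the probability of failure on demand from its uncertainty distribution (here a Beta, with assumed parameters), then draw the demand outcome:

```python
import random

# Demand-based failure model: the probability of failure on demand (PFD)
# is uncertain and described by a Beta(alpha, beta) distribution.
# The parameter values are assumptions for illustration only.
ALPHA, BETA = 2.0, 198.0   # mean PFD = alpha / (alpha + beta) = 0.01

def sample_demand_failure(rng: random.Random) -> bool:
    """Draw a PFD from its uncertainty distribution, then draw the demand."""
    pfd = rng.betavariate(ALPHA, BETA)
    return rng.random() < pfd

rng = random.Random(42)
trials = 100_000
failures = sum(sample_demand_failure(rng) for _ in range(trials))
print(f"estimated PFD ≈ {failures / trials:.4f}")  # close to the mean, 0.01
```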
26. 26
• Models specify a distribution over the time-to-failure distribution model
– Example: the failure rate for an Exponential model
• In addition, the models specify a time interval
• Distributions consist of a parametric distribution model, e.g., lognormal
Time Based Model
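A time-distributed model can be sketched the same way: the failure rate of an exponential time-to-failure model is itself uncertain (here lognormal, with assumed parameters), and the failure probability over the interval is averaged over that uncertainty:

```python
import math
import random

# Time-distributed failure model: exponential time-to-failure whose rate
# (lambda) is itself uncertain and lognormally distributed. All parameter
# values are assumptions for illustration.
MU, SIGMA = math.log(1e-3), 0.5   # median rate 1e-3 per hour
MISSION_TIME = 100.0              # hours

def prob_failure_in_interval(rng: random.Random, n_samples: int = 50_000) -> float:
    """Monte Carlo estimate of P(failure before MISSION_TIME)."""
    total = 0.0
    for _ in range(n_samples):
        lam = rng.lognormvariate(MU, SIGMA)       # sample an uncertain rate
        total += 1.0 - math.exp(-lam * MISSION_TIME)
    return total / n_samples

rng = random.Random(0)
print(f"P(fail in {MISSION_TIME:.0f} h) ≈ {prob_failure_in_interval(rng):.4f}")
```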
27. 27
Human / Software Failures
[Figure: master logic diagram linking root causes to risk metrics. An event tree (initiating event, SYSTEM 1, human action, SYSTEM 2) maps each path to success (S) or failure (F); fault trees decompose the SYSTEM 1 and SYSTEM 2 failures into subsystems and basic events; human actions are traced back to the organization (maintenance, operation) and its physical, socio-economic, and regulatory environments. Risk metrics include likelihood and severity (e.g., a likelihood-severity matrix with L/M/H levels), hazard ranking, and related measures.]
29. 29
• The risk associated with a system is computed as the sum of many different combinations of events that would bring the system into an undesirable state.
• Component failures leading to top events and risk scenarios can be thought of as contributors to the overall risk of the system.
• The following questions are examples:
• Which components or risk scenarios contribute most to the overall system risk?
• To changes in the reliability of which components is the total risk most sensitive?
Results
30. 30
• A risk scenario is defined as a combination of events anticipated to bring the system into an undesirable state.
• Scenarios can be described in different forms:
• Paths through an Event Tree
• Event sequences in an Event Sequence Diagram
• Cut-sets
• Scenarios can be ranked for significance by sorting them according to their probabilities
Results - Risk Scenario
31. 31
• Cut-set: a set of events whose occurrence causes the system failure to occur
• A cut-set is minimal if, after removal of any event from the set, the set is no longer a cut-set
– All events are required
[Figure: example fault tree with top event = A OR (B AND C).]
Minimal Cut-Sets: {A}, {B, C}
Results - Cut Set
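Cut-set generation by top-down expansion of the gates (in the spirit of MOCUS) can be sketched as follows; the tree encoding and function names are illustrative, and the example reproduces minimal cut-sets {A} and {B, C}:

```python
from itertools import product

# Fault tree as nested gates: ("OR", ...) / ("AND", ...) over basic-event
# names. Cut sets are found by expanding gates top-down and then removing
# non-minimal sets.
def cut_sets(node):
    if isinstance(node, str):                      # basic event
        return [frozenset([node])]
    gate, *children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                               # union of children's cut sets
        return [cs for sets in child_sets for cs in sets]
    if gate == "AND":                              # cross-product of cut sets
        return [frozenset().union(*combo) for combo in product(*child_sets)]
    raise ValueError(f"unknown gate {gate!r}")

def minimal(sets):
    """Keep only cut sets with no proper subset also present."""
    return [s for s in sets if not any(t < s for t in sets)]

# The slide's example: TOP = A OR (B AND C)
tree = ("OR", "A", ("AND", "B", "C"))
mcs = minimal(cut_sets(tree))
print(sorted(sorted(s) for s in mcs))  # [['A'], ['B', 'C']]
```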
32. 32
• Ranking scenarios provides limited insight regarding the contribution of individual components
• Many occurrences of a component in low-probability scenarios may be as significant as a few occurrences in high-probability scenarios
• Risk importance measures provide perspective on dominant contributions by individual components
• Quantitative measures indicating contribution to risk or sensitivity of risk
• A function of the component's reliability and its role in the system
• Common importance measures:
– Birnbaum
– Fussell-Vesely
– Risk Reduction Worth
– Risk Achievement Worth
Results – Importance Measure
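For a system with top event A OR (B AND C), the Birnbaum and Fussell-Vesely measures can be computed directly from the top-event probability function. The component probabilities below are illustrative assumptions:

```python
# Birnbaum and Fussell-Vesely importance for a system whose top event is
# TOP = A OR (B AND C), with independent components. The component failure
# probabilities are illustrative assumptions.
p = {"A": 0.01, "B": 0.05, "C": 0.05}

def top_prob(q):
    """Exact top-event probability for TOP = A OR (B AND C)."""
    return 1 - (1 - q["A"]) * (1 - q["B"] * q["C"])

def birnbaum(comp):
    """P(top | comp failed) - P(top | comp working)."""
    return top_prob(dict(p, **{comp: 1.0})) - top_prob(dict(p, **{comp: 0.0}))

def fussell_vesely(comp):
    """Fraction of top-event probability removed if comp never fails."""
    base = top_prob(p)
    return (base - top_prob(dict(p, **{comp: 0.0}))) / base

for c in p:
    print(f"{c}: Birnbaum={birnbaum(c):.4f}  FV={fussell_vesely(c):.4f}")
```

Here component A dominates both measures, matching the intuition that a single-event minimal cut-set is the strongest contributor.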
34. 34
• A dynamic methodology is a set of methods and techniques in which executable models that represent the behavior of the elements of a system are exercised in order to identify risks and vulnerabilities of the system
• The essence of this approach is the probabilistic simulation of the dynamic behavior of the system, using the models of the system elements and rules for their internal and external interactions
– A formal representation of the system behavior needs to be constructed for the hardware, software, and human components
– A set of rules needs to be prescribed to systematically decompose the system
– The executable model is used to simulate the behavior of the system, and the physical processes taking place in it, as a function of time
– The event sequences are generated automatically by controlling the stochastic events in the model
Dynamic Methodologies
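The core loop of a dynamic methodology — drawing stochastic event times from component models and letting the event sequence emerge from the simulation — can be sketched for a toy two-component system (the components, rates, and mission time are illustrative assumptions):

```python
import random

# Toy dynamic-PRA sketch: simulate a two-component system over mission time,
# drawing stochastic failure times and recording the event sequence that
# emerges automatically from the simulation.
RATES = {"pump": 1e-3, "valve": 5e-4}   # failure rates per hour (assumed)
MISSION = 1000.0                        # mission time in hours (assumed)

def simulate(rng: random.Random):
    """Return the chronologically ordered event sequence for one history."""
    events = []
    for comp, lam in RATES.items():
        t = rng.expovariate(lam)        # exponential time to failure
        if t <= MISSION:
            events.append((t, f"{comp} fails"))
    events.sort()                       # order events as they occur in time
    end = "mission loss" if len(events) == 2 else "mission success"
    return [name for _, name in events] + [end]

rng = random.Random(1)
sequences = [tuple(simulate(rng)) for _ in range(10_000)]
p_loss = sum(s[-1] == "mission loss" for s in sequences) / len(sequences)
print(f"P(mission loss) ≈ {p_loss:.3f}")
```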
35. 35
• Dynamic Probabilistic Risk Assessment
– Discrete Dynamic Event Tree: systematically explores all scenarios
– Continuous Event Tree Simulation: randomly selects system states and the timing of events
Dynamic Methodologies
37. 37
Continuous Event Tree Simulation
[Figure: trajectories through the system state space over time, from an initial point (x0, r0) to an end point (xt, rt), where x is the physical state and r the component/configuration state; branches are marked as high-, medium-, or low-probability.]
38. 38
• Approach to solve the state explosion issue
– Reduce the number of risk scenarios
• Combine system and operator states that lead to similar end states
– Bias the system and operator states toward interesting or risk-significant events and end states
• Reduces the computational effort expended on less important scenarios
• Provides results for desired event sequences using less simulation effort
State Explosion
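Biasing events toward risk-significant outcomes is, at bottom, importance sampling: sample the rare event more often than nature would, and reweight each history by the likelihood ratio. A minimal sketch with invented probabilities:

```python
import random

# Importance-sampling sketch of event biasing: a rare failure with true
# probability P_TRUE is sampled with an inflated probability P_BIAS, and
# each failed history is reweighted by the likelihood ratio P_TRUE / P_BIAS.
# Both numbers are illustrative assumptions.
P_TRUE, P_BIAS = 1e-4, 0.1

def biased_estimate(rng: random.Random, n: int) -> float:
    total = 0.0
    for _ in range(n):
        failed = rng.random() < P_BIAS      # sample from the biased model
        if failed:
            total += P_TRUE / P_BIAS        # reweight to stay unbiased
    return total / n

rng = random.Random(7)
est = biased_estimate(rng, 100_000)
print(f"estimated failure probability ≈ {est:.6f}")
```

With direct sampling, 100,000 histories would contain roughly ten failures; the biased run sees about 10,000, giving a far lower-variance estimate of the same quantity.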
41. 41
• The scheduler manages the exploration process
– Saves the system states and restarts the simulation
• Guides the simulation toward the plan generated by the planner
– Maintains sufficient coverage of important scenarios
– Guides the simulation toward areas where it is expected to gain more insight into the system's vulnerabilities
– Continuously adjusts priorities based on simulated results
– The simulation should be able to cover the entire event-sequence space
Scheduling
42. 42
• Scheduling rules constitute a dynamic adjustment of event biasing factors, with the objective of favoring simulation of high-importance scenarios
– The learning value changes when a scenario is simulated
– There is no absolute control over how often a scenario is simulated
• The frequency at which a particular scenario is simulated depends, among other factors, on:
– The total number of planned scenarios
– The complexity of the scenario
Scheduling
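A scheduler that continuously adjusts scenario priorities can be sketched with a priority queue: simulate the highest-priority scenario, then decay its priority so other scenarios still get coverage. The scenario names, priorities, and decay factor are all illustrative assumptions:

```python
import heapq

# Sketch of an exploration scheduler: scenarios sit in a priority queue and
# are re-prioritized after each simulation, so high-importance scenarios are
# revisited often without starving the rest.
def schedule(scenarios, rounds):
    # heap entries are (-priority, name): heapq is a min-heap, so negating
    # the priority makes the largest priority pop first
    heap = [(-prio, name) for name, prio in scenarios.items()]
    heapq.heapify(heap)
    history = []
    for _ in range(rounds):
        neg_prio, name = heapq.heappop(heap)
        history.append(name)                     # "simulate" this scenario
        heapq.heappush(heap, (neg_prio * 0.5, name))  # decay its priority
    return history

runs = schedule({"loss-of-cooling": 8.0, "valve-stuck": 4.0, "sensor-drift": 1.0}, 5)
print(runs)
```

A real scheduler would raise, not just decay, priorities as simulation results reveal where the system is most vulnerable.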
43. 43
[Figure: example life-support system model. Pump control software monitors temperature and pressure over time; the physical process can be simulated at a low level (detailed equations) or a high level (lookup table); a scheduler performs level adjustment between the two; end states are labeled Danger, Safe, and Sensitive.]