Fault Tree Analysis for the EnergyGrid


Project Module Software and Systems Engineering
Fault Tree Analysis for the EnergyGrid
Eugen Petrosean (Matriculation no. 1071092), WS 2012/2013
Supervisor: Jan-Philipp Steghöfer
November 2012
Contents

1 Introduction
2 Fault Tree Analysis for the EnergyGrid – Basic Concepts
  2.1 Autonomous Virtual Power Plants
  2.2 Safety – A New Metric for Structuring of Autonomous Virtual Power Plants
  2.3 Fault Tree Analysis – Procedure Steps
3 Fault Tree Construction
  3.1 Mutually Independent Basic Events
  3.2 Common Cause Failures
    3.2.1 Beta Factor Method
    3.2.2 Basic Parameter Method
    3.2.3 Multiple Greek Letter Method
    3.2.4 Alpha Factor Method
4 Algorithms for Qualitative Analysis of Fault Trees
  4.1 MOCUS Algorithm
  4.2 FATRAM Algorithm
  4.3 Reduction of Comparisons in the MOCUS and FATRAM Algorithms
  4.4 Binary Decision Diagrams
    4.4.1 Shannon's Decomposition
    4.4.2 Directed Acyclic Graph
    4.4.3 Priority Ordering Method of Basic Events
    4.4.4 BDD Algorithms in Fault Tree Analysis
    4.4.5 Algorithmic Complexity
5 Importance Measures for Quantitative Analysis of Fault Trees
  5.1 Risk Achievement Worth
  5.2 Risk Reduction Worth
  5.3 Fussell-Vesely Importance
  5.4 Birnbaum Importance
  5.5 Criticality Importance
6 Conclusions
References
1 Introduction

The EnergyGrid is a simulation platform for power systems to deal with the upcoming challenges of a decentralised, heterogeneous power supply and an increasing number of power plants, especially weather-dependent power plants whose energy outputs are highly volatile. This simulation platform is being developed within the scope of the research unit "OC-Trust" of the German Research Foundation and provides important approaches to how a future energy grid might work and how current systems and infrastructure could be evolved towards a more flexible and robust system. As the preliminary results of the EnergyGrid are very promising, the simulation environment and algorithms are being constantly refined in close cooperation with utilities in Southern Germany.

The fundamental concept of the EnergyGrid is that of self-organising Autonomous Virtual Power Plants (AVPPs), which structure themselves on the basis of a real power plant landscape with different types of power plants. These structures (AVPPs) then autonomously plan the power supply based on predictions made by the power plants and by the consumers. Depending on the process of structuring and controlling, different structures of a power plant landscape can be obtained, which in turn can be classified into two categories according to whether they can meet the demand for power delivery or not.

In order to give accurate evidence during the structuring process within a power plant landscape whether an AVPP will be able to meet the demand for power delivery or not, an additional mechanism is required that provides a precise assessment of the ability of an AVPP to meet the power demand. As a result, in this paper we introduce a new metric which should improve the structuring process of a power plant landscape and at the same time provide a reliable measure for the quality of forming a specific AVPP. This metric will be realised by means of fault tree analysis, which comprises a range of procedure steps. These steps form the main subject areas of this paper. Thus, our focus is to explain the specific fault tree procedure steps and to elaborate a concept of computational techniques in terms of how they should be applied and implemented in the EnergyGrid.

The remainder of this paper is organised as follows. Chapter 2 highlights the idea of the partition safety metric with the goal of computing it by means of fault tree analysis. Chapter 3 describes how the fault tree for the EnergyGrid can be constructed considering common cause failures for weather-dependent power plants. Chapter 4 explains how the qualitative evaluation of a fault tree can be optimally performed within the scope of computational procedures. Chapter 5 shows how the partition safety metric can be obtained from importance measures defined for the quantitative evaluation of a fault tree. Chapter 6 summarises the results obtained in the previous chapters and draws conclusions about the set of approaches which should be realised in the EnergyGrid.
2 Fault Tree Analysis for the EnergyGrid – Basic Concepts

In this chapter we explain the main idea of the EnergyGrid and the principles underlying the concept of Autonomous Virtual Power Plants (AVPPs), and we introduce a new metric (the safety metric) that makes the process of structuring and controlling distributed power sources through the usage of AVPPs within a power plant landscape more reliable and precise. This metric relies on one of the most important bases of reliability assessment for technical systems, fault tree analysis. During the partitioning of a power plant landscape into AVPPs and their subsequent adaptation to changing circumstances and environmental conditions, it indicates whether the inclusion or removal of a specific power plant into or from an AVPP will increase or decrease the probability that the specific AVPP meets the demand for power delivery and balances power supply and load.

2.1 Autonomous Virtual Power Plants

Nowadays, one can observe dynamic changes (e.g. a continuous increase in the number of power plants) and especially a rapid deregulation of the energy market in terms of utilising new types of power plants, such as photovoltaics, wind turbines and domestic combined heat and power units, which cannot be fully controlled and which at the same time depend on continuously changing environmental conditions (e.g. weather conditions). These factors make a central control for balancing power supply and load more complicated and often inefficient [Ase10]. Therefore, the idea of Autonomous Virtual Power Plants was developed to solve the problems described above on the one hand and, on the other hand, to provide a mechanism for autonomous, decentralised control which is able to make decisions at runtime based on predictions made both by power plants about their future energy production and by the consumers about their consumption. According to the ideas of Organic Computing, an AVPP can be defined as follows:

Definition (Autonomous Virtual Power Plant). An Autonomous Virtual Power Plant (AVPP) is a self-organising ensemble of power plants which self-adapt and self-optimise in order to be able to deal with an increasingly decentralised, heterogeneous power supply.

The AVPP concept rests on two fundamental notions:
  (1) Self-organisation – the capability of a system to find suitable organisational structures autonomously which support the system's goal, i.e., to find and maintain a suitable structure for the power plant landscape.
  (2) Trust – the capability of a system to deal with uncertainties, i.e., to react to problems of not feeding the predicted amount of energy into the power grid due to technical problems of a power plant or to external, uncontrollable circumstances which primarily affect the output of weather-dependent power plants. It comprises different facets such as:
     1. Credibility – a power plant adheres to its predictions.
     2. Reliability – a power plant is online and feeds power into the grid (when scheduled to do so).
     3. Safety – the power plants act in such a way that no harm comes to the user, the system, or its environment. Thus, the system is able to operate in a stable fashion.

These concepts make sure that AVPPs do not consist of a predefined, static set of power plants, but change their structure (see Figure 2.1) to meet the power demand and to adapt to the current environmental conditions and circumstances. As long as the power plants do not change their behaviour fundamentally and the power plant landscape stays the same, the structure should become stable.

Figure 2.1: The structuring process of power plants into AVPPs

The structuring process of AVPPs during runtime and their subsequent adaptation is driven by a number of crucial aspects:
  (1) Energy mix – an AVPP has to show an efficient composition of power plants in order to deal with load peaks or unexpected weather conditions, i.e., the energy mix should comprise at least some controllable power plants and ideally one or more storage power plants to avoid the dependence on weather conditions and the time of day that results from utilising, for example, only photovoltaics.
  (2) Location – an AVPP should be located near its supply areas to avoid transmission losses of electricity, as transmission losses increase with the transmission distance.
  (3) Trustworthiness – an AVPP has to ensure that it has enough reserves to compensate for the failure of unreliable or non-credible power plants which depend on the consumers' behaviour (e.g. combined heat and power units) or on weather conditions (e.g. photovoltaics). This aspect thus comprises two facets described above: credibility and reliability.
  (4) Demand and potential supply – an AVPP must be able to deal with the load curves of its supply areas, i.e., there has to be enough capacity to meet their demand for power delivery at any time.

2.2 Safety – A New Metric for Structuring of Autonomous Virtual Power Plants

The main purpose of introducing a new metric is to improve the structuring algorithm which arranges a set of power plants into several AVPPs. Whereas the aspects described in Section 2.1 – energy mix, location, trustworthiness – have already been implemented in the EnergyGrid, the fourth aspect – demand and potential supply – is the subject of this section.

The structuring algorithm is a decentralised multi-agent set partitioning algorithm in which an AVPP is considered as a partition and a power plant as an agent. Agents are partitioned according to four aspects:
  (1) Partition size¹ – an agent prefers larger partitions.
  (2) Partition coherence – an agent prefers partitions which are close to its geographical location.
  (3) Energy mix – an agent switches partitions if this improves the energy mix of the new partition and of the old one.
  (4) Former partition memberships² – an agent prefers partitions in which it has been a member for many time steps. The more time steps have passed since it was a member of a partition, the less this membership is considered.

¹ An additional aspect to avoid small partitions.
² An additional aspect to avoid thrashing.

Each agent can be a member of exactly one partition and each partition is led by exactly one agent, the leader. In each step, the partition head selects neighbours which are close to the partition and which would improve the quality of the partition's energy mix (see Figure 2.2). The decision whether to join the specific partition is finally made by the agents, which take into account the four factors described above.

The safety metric introduced in this section can be considered as an additional aspect affecting the forming process of a partition. Whereas the whole structuring process of agents into partitions and their subsequent adaptation to external impacts is a dynamic process, the evaluation of a specific partition in terms of fault tree analysis can be considered as a static process, because the structure of a partition remains unchanged at a particular point in time and can therefore be analysed by means of fault tree analysis. This type of metric can be defined as follows:

Definition (Partition Safety). An agent prefers to join a new partition and to leave the old one if the safety of the new partition will increase and improve the system stability due to the inclusion of the specific agent into the new partition, whereas the safety of the old partition must not decrease significantly and impair the system stability due to the removal of the specific agent from the old partition. Therefore, both partitions, the new and the old one, remain capable of meeting the demand for power supply, thus ensuring the system stability.

Moreover, the partition safety metric can be computed by means of reliability assessment, by analysing the impact on the capability of an AVPP to meet the desired demand for power delivery when the specific agent is included in or removed from the specific AVPP.

Figure 2.2: The partition head of an AVPP selects a neighbour

This mechanism will be studied within this paper with the subsequent purpose of implementing and integrating the findings into the decentralised multi-agent set partitioning algorithm of the EnergyGrid. Moreover, we will answer the question of how the qualitative and quantitative fault tree evaluation³ can be efficiently performed and how environmental conditions (e.g. weather conditions) can be included in the fault tree construction⁴ to achieve more realistic results in terms of the partition safety metric.

³ See the MOCUS algorithm, the FATRAM algorithm and binary decision diagrams.
⁴ See common cause failures.

2.3 Fault Tree Analysis – Procedure Steps

Fault tree analysis is a standard method for the assessment and improvement of reliability and safety in complex technical systems [Cep11].

Figure 2.3: Fault tree procedure steps

Fault tree analysis is an analytical technique in which an undesired state of the system is specified and the system is then analysed in the context of its environment and operation to find all realistic ways in which the undesired event can occur. The undesired state of the system, which is identified at the beginning of the fault tree analysis, is a state which is critical from a safety standpoint and is identified as the top event. The top event is therefore an undesired event which is further analysed with the fault tree analysis.

This analytical technique will be used to compute the partition safety metric introduced in Section 2.2. It comprises a number of steps (see Figure 2.3); three of them are crucial for an efficient and accurate computation of the partition safety metric and will be discussed in detail in the following chapters.

Step (4) – Fault Tree Construction (see Chapter 3)
After selecting the undesired event – the AVPP cannot meet the power demand⁵ – as the top event and having analysed the system so that we know all the causing⁶ effects, we can construct the fault tree by using AND and OR gates, which define the major characteristics of the fault tree.

Step (5) – Qualitative Fault Tree Evaluation (see Chapter 4)
After the fault tree has been constructed for the undesired event, a qualitative analysis can be performed to provide valuable information about the failure combinations of power plants which can cause the undesired top event.

Step (7) – Quantitative Fault Tree Evaluation (see Chapter 5)
As the EnergyGrid has the infrastructure necessary to provide probabilities for the basic events of the fault tree, namely the reliability and credibility metrics of the power plants⁷, the quantitative fault tree evaluation can be performed to obtain the probability that the undesired top event will occur and to determine how the criticality of the basic events influences this probability. The latter aspect is important for identifying power plants which can impair the system stability by reducing the ability of an AVPP to meet the demand for power delivery.

⁵ The undesired event results from the definition of the partition safety metric described in Section 2.2.
⁶ The fault tree analysis is based on the assumption that all causes which lead to the undesired state of the system – the undesired top event – are known.
⁷ The reliability and credibility metrics are provided by the EnergyGrid for each power plant from the corresponding power plant landscape.
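As an illustration of what such a fault tree looks like as a data structure before the construction and evaluation steps are discussed in detail, the following is a minimal sketch in Python. It is an assumption made for this paper's discussion only: the class names, event names and the representation itself are not part of the EnergyGrid implementation.

    from dataclasses import dataclass, field
    from typing import List, Union


    @dataclass
    class BasicEvent:
        """A failure mode of a power plant as a whole, e.g. 'PV plant 7 is not online'."""
        name: str
        probability: float = 0.0   # later supplied from the reliability/credibility metrics


    @dataclass
    class Gate:
        """An AND or OR gate combining basic events and/or other gates."""
        name: str
        kind: str                  # "AND" or "OR"
        inputs: List[Union["Gate", BasicEvent]] = field(default_factory=list)


    # Step (4): the top event "AVPP cannot meet the power demand" over two
    # intermediate events of the kind introduced in Section 3.1 (hypothetical plants).
    not_online = Gate("Power plant is not online", "OR",
                      [BasicEvent("Biogas plant 3: technical problem"),
                       BasicEvent("Hydropower station 1: technical problem")])
    not_credible = Gate("Power plant does not adhere to its predictions", "OR",
                        [BasicEvent("PV plant 7: weather conditions"),
                         BasicEvent("Wind turbine 2: weather conditions")])
    top = Gate("AVPP cannot meet the power demand", "OR", [not_online, not_credible])

Chapter 3 discusses how such a tree is actually constructed for the EnergyGrid, and Chapters 4 and 5 how it is evaluated qualitatively and quantitatively.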
3 Fault Tree Construction

In fault tree construction the fault tree model is developed graphically or with Boolean equations which represent the fault tree. It is important to distinguish between dependent and independent basic events. If the component fault states of the system exist only as mutually independent basic events without any shared root cause, the fault tree construction proceeds according to the standard procedure steps. If component fault states are a direct result of a shared root cause, an additional approach is necessary to model such dependencies in the fault tree. These fundamental questions are described in the following sections, because weather conditions, a typical shared root cause, have a direct impact on the operation of photovoltaics, wind turbines and, to a lesser extent, hydropower stations modelled in the EnergyGrid.

3.1 Mutually Independent Basic Events

The idea of fault tree analysis within the EnergyGrid is to construct as many fault trees as AVPPs were formed during the structuring process of a power plant landscape. Each fault tree of the EnergyGrid thus provides the basis for a safety assessment of the specific AVPP with its power plants. The fault trees of the different AVPPs share the same structure; they differ mainly in which power plants are included. Each fault tree is built according to the EnergyGrid metrics (reliability and credibility) to model that an AVPP cannot meet the demand for power delivery (see Figures 3.1, 3.2, 3.3 and 3.4). To specify why an AVPP cannot meet the demand for power delivery, and taking into consideration the two metrics mentioned above, we define two corresponding intermediate fault tree events: "Power plant is not online" (the negation of the reliability metric definition – see Figure 3.1) and "Power plant does not adhere to its predictions" (the negation of the credibility metric definition – see Figure 3.2). If we refine these two intermediate events further, a power plant can be offline due to technical problems (see Figure 3.1), and it can fail to adhere to its predictions due to insufficient fuel⁸ (see Figure 3.2), unscheduled maintenance work (see Figure 3.3) or weather conditions (see Figure 3.4), depending on the specific type of power plant. The resolution of the fault tree within the EnergyGrid terminates with the modelling of concrete power plants, which are represented by the basic events of the fault tree. The basic events are thus related to the failure modes of power plants as a whole and not to the much larger number of components of power plants and their failure modes.

⁸ The insufficient fuel problem can be considered a technical problem only for controllable power plants.

The fault tree modelled for the EnergyGrid includes a large number of basic events related to the operability of the specific power plants. But not all of the basic events in the fault tree can be considered mutually independent. As mentioned above, similar weather-dependent power plants (see Figure 3.4) depend on weather conditions (e.g. cloudiness, dead calm, drought) to the same extent. Thus, we can assume that when the operability of a power plant drops due to weather conditions, all similar power plants (power plants of the same type) located in direct proximity must show a similar drop in power supply. Therefore, this type of shared root cause must be treated precisely in order to provide reliable and accurate results during the quantitative evaluation of the fault tree modelled for the EnergyGrid. Although the methods for evaluating common cause failures will be presented in the next section (see Section 3.2), it is important to emphasise that the fault tree for the EnergyGrid comprises more than two components sharing the same root cause, which means that we are interested in methods considering common cause failures for more than two components.

3.2 Common Cause Failures

Common cause failure (CCF) events [Cep11, Kep11] are a subset of dependent events in which two or more component fault states exist at the same time and are a direct result of a shared root cause. The definition of common cause failure is closely related to the general definition of dependent failure. The events A and B are dependent if

    P(A ∩ B) ≠ P(A) · P(B)

where P(x) is the probability of event x. If the dependency exists between parallel events, the probability of system failure is larger than the product of the failure probabilities of all parallel events:

    P(A ∩ B) > P(A) · P(B)

Common cause failure results from the coexistence of two main factors:
  (1) A susceptibility of components to fail or become unavailable because of a particular root cause of the failure.
  (2) A coupling mechanism that creates the condition for multiple components to be affected by the same cause.

An example of common cause failures in the EnergyGrid is the case where several photovoltaic plants fail to operate at the same time as a result of changing weather conditions (e.g. clouds). The methods for the evaluation of common cause failures include the following approaches:
  (1) Beta factor method
  (2) Basic parameter method
  (3) Multiple Greek letter method
  (4) Alpha factor method

The full effect of the differences between the methods is observed for systems with more than two or three parallel branches, portions or redundant lines. If there are only two parallel components in the system, the differences between the methods are not notable. The simplest method for the evaluation of common cause failures is the beta factor method, which provides a good understanding of how common cause failures can be modelled in a fault tree.

3.2.1 Beta Factor Method

The beta factor method⁹ [Kep11] is a method in which the likelihood of the common cause failure is evaluated in relation to the random failure probability of the components susceptible to a common cause failure.

⁹ The beta factor method is a special case of the multiple Greek letter method described in Section 3.2.3. It is an approach which can be applied if only two components with the same shared root cause have to be modelled in a fault tree.

If the failures of components are not completely independent of each other, the original random failure probability of the failure mode of the specific component can be divided into two parts: an independent failure probability and a common cause failure probability (see Figure 3.5).

Figure 3.5: The FT from the explicit modelling of common cause failures

The fault tree shown in Figure 3.5 can be simplified through the equivalent Boolean representation of the top event (see Figure 3.6):

    Top = (K1_ind ∨ CCF_12) ∧ (K2_ind ∨ CCF_12)
        = (K1_ind ∧ K2_ind) ∨ (CCF_12 ∧ K2_ind) ∨ (K1_ind ∧ CCF_12) ∨ (CCF_12 ∧ CCF_12)
        = (K1_ind ∧ K2_ind) ∨ CCF_12

Figure 3.6: The simplified FT of the fault tree in Figure 3.5

The beta factor method is a one-parameter method in which the factor β determines the share of the common cause failure probability in the original random failure probability. The β factor is obtained from historical data by determining the percentage of all component failures in which multiple similar components failed, versus single component failures. If the β factor is not known, a general value of 0.1 is sometimes used.

The general expression for the contribution P_k of failures of k out of m parallel components, given the total failure probability P_t of a component, is:

    P_k = (1 − β) · P_t    for k = 1
    P_k = 0                for 1 < k < m
    P_k = β · P_t          for k = m

This expression shows that only the independent part of the component failure and the common cause failure contribution in which all components fail at the same time are considered in the beta factor method.

3.2.2 Basic Parameter Method

The basic parameter model [Iea92, Pil89] refers to the straightforward definition of the probabilities of the basic events P_k^(m). The total failure probability P_t of a component in a common cause group of m components is

    P_t = Σ_{k=1}^{m} C(m−1, k−1) · P_k^(m)

where the binomial term

    C(m−1, k−1) = (m−1)! / [(k−1)! · (m−k)!]

represents the number of different ways in which a specified component can fail with k−1 other components in a group of m similar components, and the events P_k^(m) and P_j^(m) are mutually exclusive for all k ≠ j.

3.2.3 Multiple Greek Letter Method

The multiple Greek letter method [Pil89] is used for a more accurate analysis of systems with higher levels of redundancy. It uses additional parameters besides the beta factor to distinguish among common cause events affecting different numbers of components in a higher-order redundant system. The multiple Greek letter parameters consist of the total failure probability, which includes the effects of all independent and common cause contributions to the component failure, and a set of failure fractions which quantify the conditional probabilities of all the possible ways a common cause failure of a component can be shared with other components in the same group, given that a component failure has occurred. For a group of three components, three different parameters are defined:

    P_1 = (1 − β) · P_t
    P_2 = (1/2) · β · (1 − χ) · P_t
    P_3 = β · χ · P_t

The general expression for the multiple Greek letter method is

    P_k = (1 / C(m−1, k−1)) · (∏_{i=1}^{k} ρ_i) · (1 − ρ_{k+1}) · P_t,   with ρ_1 = 1, ρ_2 = β, ρ_3 = χ, ..., ρ_{m+1} = 0

The beta factor method is a special case of the multiple Greek letter method: if the χ of the multiple Greek letter method equals one, it becomes the beta factor method.

3.2.4 Alpha Factor Method

The alpha factor method [Iea92, Pil89, War10] is a multi-parameter method that can handle any redundancy level. It develops common cause failure probabilities from a set of failure ratios and the total component failure probabilities. The parameters of the model are:

    P_t – the total failure probability of each component due to all independent and common cause events
    α_k – the fraction of the total failure probability of events that occur in the system and involve the failure of k components due to a common cause

The general expression for the alpha factor method is

    P_k = (k / C(m−1, k−1)) · (α_k / α_t) · P_t,   where α_t = Σ_{k=1}^{m} k · α_k and k = 1, 2, ..., m
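The parametric models above can be transcribed almost directly into code. The following is a minimal sketch in Python; the function names and list layout are assumptions of this sketch and it simply evaluates the expressions for P_k given above, it is not taken from the EnergyGrid.

    from math import comb


    def beta_factor(k: int, m: int, beta: float, p_total: float) -> float:
        """P_k for the beta factor method: only single failures and the
        simultaneous failure of all m components carry probability."""
        if k == 1:
            return (1.0 - beta) * p_total
        if k == m:
            return beta * p_total
        return 0.0


    def mgl(k: int, m: int, rho: list, p_total: float) -> float:
        """P_k for the multiple Greek letter method.
        rho = [rho_1, ..., rho_{m+1}] with rho_1 = 1, rho_2 = beta,
        rho_3 = chi, ..., rho_{m+1} = 0 (stored 0-based)."""
        prod = 1.0
        for i in range(1, k + 1):
            prod *= rho[i - 1]                  # product of rho_1 ... rho_k
        return prod * (1.0 - rho[k]) * p_total / comb(m - 1, k - 1)


    def alpha_factor(k: int, m: int, alpha: list, p_total: float) -> float:
        """P_k for the alpha factor method; alpha = [alpha_1, ..., alpha_m]."""
        alpha_t = sum(j * alpha[j - 1] for j in range(1, m + 1))
        return k * alpha[k - 1] * p_total / (comb(m - 1, k - 1) * alpha_t)


    # With chi = 1 the MGL model collapses to the beta factor method, as stated above.
    assert abs(mgl(3, 3, [1.0, 0.1, 1.0, 0.0], 0.01) - beta_factor(3, 3, 0.1, 0.01)) < 1e-12

The assertion at the end illustrates the remark that the beta factor method is the special case χ = 1 of the multiple Greek letter method.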
Figure 3.1: The FT for the EnergyGrid – Part 1

Figure 3.2: The FT for the EnergyGrid – Part 2

Figure 3.3: The FT for the EnergyGrid – Part 3

Figure 3.4: The FT for the EnergyGrid – Part 4
4 Algorithms for Qualitative Analysis of Fault Trees

As explained in the previous sections, the undesired state of the system which is identified at the beginning of the fault tree analysis is a state which is critical from a safety standpoint and is identified as the top event. By means of the qualitative fault tree evaluation, we try to find the minimal combinations of basic events (minimal cut sets) which, if they occur, cause the occurrence of the undesired top event. On the basis of the minimal cut sets obtained during the qualitative analysis, we can subsequently compute how likely the occurrence of the undesired state of the system is. Under the assumption that the basic events are mutually independent, the probability of the occurrence of the undesired top event can be computed as follows:

    P(TOP) = Σ_{i=1}^{n} P(MCS_i) − Σ_{i<j} P(MCS_i ∩ MCS_j) + Σ_{i<j<k} P(MCS_i ∩ MCS_j ∩ MCS_k) − ⋯ + (−1)^{n−1} · P(MCS_1 ∩ MCS_2 ∩ ⋯ ∩ MCS_n)

    P(MCS_i) = ∏_{j=1}^{m} P(B_j)

where P(B_j) is the probability of the basic event B_j representing the failure of the corresponding component; the B_j are mutually independent basic events; P(MCS_i) is the probability of the occurrence of the minimal cut set i (MCS_i); m is the number of basic events in the minimal cut set i; and n is the number of minimal cut sets.

If the fault tree is written in the form of Boolean equations, these need to be combined into one equation by applying the rules of Boolean algebra to obtain the equation for the top event, which is a disjunction of conjunctive clauses (DNF¹⁰).

¹⁰ Disjunctive normal form.

If the fault tree is developed in its graphical form, the Boolean equations need to be written first based on the logic of the gates and their inputs. Then the rules of Boolean algebra are applied to obtain the equation for the top event, which consists of a disjunction of conjunctive clauses. When this disjunction of conjunctive clauses of basic events is expressed by the Boolean equations derived from the corresponding fault tree, each element of the disjunction is a conjunctive clause of a certain number of basic events. These basic events together represent a minimal cut set.

This approach provides a method of how minimal cut sets can be obtained during the qualitative analysis of fault trees. However, it cannot be used directly as a computational technique due to its complexity and inefficiency. Therefore, in the next sections we explain existing algorithms such as the MOCUS algorithm (see Section 4.1), the FATRAM algorithm (see Section 4.2), an additional optimisation algorithm for MOCUS/FATRAM (see Section 4.3) and binary decision diagrams (see Section 4.4), which provide an efficient implementation of the computation of minimal cut sets.

4.1 MOCUS Algorithm

One of the most common FT algorithms for generating CSs (cut sets) is the MOCUS¹¹ (method for obtaining cut sets) algorithm [Fuv72], developed by J. Fussell and W. Vesely. It is an effective top-down gate substitution methodology for generating CSs from a FT. MOCUS is based on the observation that AND gates increase the number of elements in a CS and that OR gates increase the number of CSs. The basic steps of the MOCUS algorithm are as follows:

¹¹ The MOCUS algorithm is the oldest and the most widely used one in FT analysis.

  (1) Name or number all gates and events.
  (2) Place the uppermost gate name in the first row of a matrix.
  (3) Replace the top gate with its inputs, using the following notation:
      (a) Replace an AND gate with its inputs, each input separated by a comma.
      (b) Replace an OR gate by a vertical arrangement, creating a new line for each input.
  (4) Iteratively substitute and replace each gate with its inputs, moving down the FT.
  (5) When only basic inputs remain in the matrix, the substitution process is complete and the list of all CSs has been established.
  (6) Remove all non-minimal CSs and duplicate CSs from the list using the laws of Boolean algebra.
  (7) The final list contains the minimal CSs.

Example of applying the MOCUS algorithm to an FT

Consider the fault tree given in Figure 4.1. The results obtained by applying the MOCUS algorithm to the FT are as follows:

(a) The TOP gate is placed in the first row and column: G0. Since G0 is an OR gate, it is replaced by its input events in separate rows as follows: {A} {B} {G1}.
(b) G1 is also an OR gate. Replacing it by its input events in separate rows yields {A} {B} {G2} {G3}.
(c) G2 is an AND gate. It is replaced by its input events in separate columns as follows: {A} {B} {G4, G5} {G3}.
(d) Replacing G3 by its input events leads to {A} {B} {G4, G5} {C} {G6}.

Continuing in this fashion, the total CSs are obtained as {A}, {B}, {D, F, G}, {E, F, G}, {C}, {E, F} and are given in column 6 of Table 4.1. The individual steps for obtaining the CSs are shown in Table 4.1. It may be seen that the set {E, F, G} includes {E, F}. It is a superset and should be removed, since it is not a minimal cut set. Therefore, the minimal cut sets are: {A}, {B}, {D, F, G}, {C}, {E, F}.
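The gate substitution lends itself to a compact implementation. The sketch below is a Python illustration, with the gate structure of Figure 4.1 reconstructed from the substitution steps above and from Table 4.1; the dictionary representation and function name are my own. It reproduces the cut sets of the example, including the removal of the superset {E, F, G}, which corresponds to step (6) of the procedure.

    # Gate structure of Figure 4.1 as reconstructed from the example (assumption).
    GATES = {
        "G0": ("OR",  ["A", "B", "G1"]),
        "G1": ("OR",  ["G2", "G3"]),
        "G2": ("AND", ["G4", "G5"]),
        "G3": ("OR",  ["C", "G6"]),
        "G4": ("OR",  ["D", "E"]),
        "G5": ("AND", ["F", "G"]),
        "G6": ("AND", ["E", "F"]),
    }


    def mocus(top: str) -> list:
        """Expand gates top-down until only basic events remain, then remove
        duplicate and non-minimal cut sets (supersets)."""
        rows = [[top]]
        while any(e in GATES for row in rows for e in row):
            new_rows = []
            for row in rows:
                gate = next((e for e in row if e in GATES), None)
                if gate is None:
                    new_rows.append(row)
                    continue
                kind, inputs = GATES[gate]
                rest = [e for e in row if e != gate]
                if kind == "AND":                       # AND enlarges the cut set
                    new_rows.append(rest + inputs)
                else:                                   # OR creates new cut sets
                    new_rows.extend(rest + [i] for i in inputs)
            rows = new_rows
        cut_sets = {frozenset(row) for row in rows}
        minimal = [cs for cs in cut_sets if not any(other < cs for other in cut_sets)]
        return sorted(sorted(cs) for cs in minimal)


    print(mocus("G0"))   # [['A'], ['B'], ['C'], ['D', 'F', 'G'], ['E', 'F']]
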
Figure 4.1: The FT to which the MOCUS algorithm is applied in the example

    Step 1   Step 2   Step 3     Step 4     Step 5    Step 6
    A        A        A          A          A         A
    B        B        B          B          B         B
    G1       G2       G4, G5     G4, G5     D, G5     D, F, G
             G3       G3         C          E, G5     E, F, G
                                 G6         C         C
                                            E, F      E, F

Table 4.1: The steps for obtaining the cut sets for the fault tree given in Figure 4.1

The MOCUS algorithm, as mentioned above, is the oldest deterministic algorithm developed for obtaining minimal cut sets. It provides a good basis for determining minimal cut sets by computational techniques. However, its execution speed does not represent the upper limit for this kind of deterministic approach, due to its known weakness in treating repeated events [Ram78], and can thus be improved. This means that MOCUS can be optimised for fault trees containing a large number of repeated events. As our constructed fault tree (see Figures 3.1, 3.2, 3.3 and 3.4) contains many different repeated basic events (power plants) in order to reproduce the technical specificity of the EnergyGrid, it is important to take a look at the FATRAM algorithm (see Section 4.2) and its subsequent optimisation (see Section 4.3), which deal with this question. This insight will help us to get an overview of how they can be applied to our proposed fault tree.

4.2 FATRAM Algorithm

Besides the MOCUS algorithm, several other algorithms [Ben76, Ram78] have been developed for obtaining the minimal cut sets of FTs. The majority of these algorithms have tried to improve upon the MOCUS algorithm by taking repeated events into consideration. Among the most important of these algorithms is FATRAM [Ram78].

The FATRAM (fault tree reduction algorithm) algorithm is a top-down algorithm similar to MOCUS. Gates to be resolved are selected in such a manner that computer core requirements are minimised. The steps of the algorithm are:
  (1) Resolution begins with the TOP event. If the TOP event is an AND gate, all inputs are listed as one set; if it is an OR gate, the inputs are listed as separate sets.
  (2) Iterate until all OR gates with gate inputs and all AND gates are resolved. OR gates with only basic event inputs are not resolved at this time.
  (3) Remove any supersets¹² that still exist.
  (4) Process any repeated basic events remaining in the unresolved OR gates. For each repeated event do the following:
      (a) The repeated event replaces all unresolved gates of which it is an input, to form new sets.
      (b) These new sets are added to the collection.
      (c) The event is removed as an input from the appropriate gates.
      (d) Supersets are removed.
  (5) Resolve the remaining OR gates. All sets are now minimal cut sets.

¹² A cut set that is not minimal. (In the intermediate steps of the analysis it may contain gates.)

The FATRAM algorithm is thus a more compact, newer version of the original MOCUS algorithm. FATRAM can bring an improvement over the MOCUS algorithm only in the case of an FT having primarily OR operators; otherwise, it does not offer any improvement in terms of execution time.

Example 1 of applying the FATRAM algorithm to an FT

The minimal cut sets for the fault tree in Figure 4.2 are determined in this example. The FT contains two repeated events, B and C; thus, all steps of the algorithm are illustrated.
Figure 4.2: The FT for applying the FATRAM algorithm

(a) The TOP gate is an AND gate; thus we obtain {G1, G2}.
(b) Gate G1 is an AND gate; thus, by Rule 2, it is resolved first, yielding {A, G3, G2}.
(c) Both G3 and G2 are OR gates, but G3 has only basic event inputs. Therefore, G2 is resolved next (Rule 2) to yield {A, G3, B}, {A, G3, E}, {A, G3, G4}.
(d) G4 is an AND gate and is the next gate to be resolved (Rule 2); we obtain {A, G3, B}, {A, G3, E}, {A, G3, D, G5}.
(e) The gates that remain, G3 and G5, are both OR gates with only basic event inputs. No supersets (Rule 3) exist in step (d), so repeated events (Rule 4) are handled next. Consider basic event B, which is an input to gates G2 and G3. G2 has already been resolved but G3 has not. Everywhere G3 occurs in the sets it is replaced by B, creating additional sets. This gives {A, G3, B}, {A, G3, E}, {A, G3, D, G5}, {A, B, B}, {A, B, E}, {A, B, D, G5}. Within the set {A, B, B} the redundant B is removed to obtain {A, B}, giving {A, G3, B}, {A, G3, E}, {A, G3, D, G5}, {A, B}, {A, B, E}, {A, B, D, G5}. Gate G3 (Rule 4c) is altered by removing B as an input. Hence, G3 is now an OR gate with two basic event inputs, C and H.
(f) Supersets are deleted here (Rule 4d), which leaves {A, G3, E}, {A, G3, D, G5}, {A, B}.
(g) Basic event C is also a repeated event; it is an input to G3 and G5. By Rule 4a, G3 and G5 are replaced in the sets by C, creating additional sets, to obtain {A, G3, E}, {A, G3, D, G5}, {A, B}, {A, C, E}, {A, C, D, C}. The set {A, C, D, C} is reduced to {A, C, D} because of the redundant basic event C. The definitions of the gates into which C is an input are altered: G3 now has only the input H, and G5 has the inputs F and G.
(h) Supersets are removed at this point (Rule 4d). Since none exists and all repeated events have been handled, we proceed to Rule 5. This results in obtaining all of the minimal cut sets: {A, H, E}, {A, H, D, F}, {A, H, D, G}, {A, B}, {A, C, E}, {A, C, D}.

Example 2 of applying the FATRAM algorithm to an FT

Consider the fault tree given in Figure 4.1. The FT contains two repeated events, E and F; thus, all steps of the algorithm are illustrated in the following.

(a) The TOP gate is an OR gate; thus we obtain {A}, {G1}, {B}.
(b) Gate G1 is an OR gate; thus, by Rule 2, it is resolved first, yielding {A}, {G2}, {G3}, {B}.
(c) Gate G2 is an AND gate and gate G3 is an OR gate; thus, by Rule 2, we obtain {A}, {G4, G5}, {C}, {G6}, {B}.
(d) Gate G4 is an OR gate and G5 is an AND gate, but G4 has only basic event inputs. Therefore, G5 is resolved next (Rule 2) to yield {A}, {G4, F, G}, {C}, {G6}, {B}.
(e) Gate G6 is an AND gate and is the next gate to be resolved (Rule 2); we obtain {A}, {G4, F, G}, {C}, {E, F}, {B}.
(f) The gate that remains, G4, is an OR gate with only basic event inputs. No supersets (Rule 3) exist in step (e), so repeated events (Rule 4) are handled next. Consider basic event E, which is an input to gates G4 and G6. G6 has already been resolved but G4 has not. Everywhere G4 occurs in the sets it is replaced by E, creating additional sets. This gives {A}, {G4, F, G}, {C}, {E, F}, {B}, {E, F, G}. Gate G4 (Rule 4c) is altered by removing E as an input. Hence, G4 is now an OR gate with one basic input, D.
(g) Supersets are deleted here (Rule 4d), which leaves {A}, {G4, F, G}, {C}, {E, F}, {B}. Thus, G4 has only the input D.
(h) Since all repeated events have been handled, we proceed to Rule 5. This results in obtaining all of the minimal cut sets: {A}, {D, F, G}, {C}, {E, F}, {B}.

4.3 Reduction of Comparisons in the MOCUS and FATRAM Algorithms

This section describes a new algorithm [Liz86] for the reduction step when repeated events appear in the fault tree. It improves the conventional MOCUS top-down algorithm to obtain all minimal cut sets faster. The improvement is based on reducing the number of set comparisons required to find the minimal cut sets.

The reduction of the obtained cut sets is the most time-consuming task. For a given number n of cut sets, at most n! / [2 · (n−2)!] comparison operations have to be performed, and this number grows considerably when the number of cut sets is large. The MOCUS algorithm applied to an FT not containing any repeated event directly yields the minimal cut sets without reduction. In [Liz86] it has been shown that cut sets not containing any repeated event are minimal. The reduction is therefore limited solely to the cut sets containing repeated events. The steps of the algorithm are as follows:
  (1) Obtain the cut sets K (e.g. with MOCUS). If the fault tree contains no repeated events, K_Min = K; go to step (5).
  (2) Partition K into two subsets K_1 and K_2 by examining all the cut sets. Those containing at least one repeated event constitute K_1, the others K_2.
  (3) Reduce the cut sets in K_1 to obtain K_1^Min.
  (4) Let K_Min = K_1^Min ∪ K_2.
  (5) Each element of K_Min is a minimal cut set.

Example 1 of applying the Reduction algorithm to an FT

Consider the FT in Figure 4.3. The MOCUS algorithm generates the following 9 cut sets: {A}, {B}, {C}, {F}, {H}, {D, F}, {D, G}, {E, G}, {E, F}.

Figure 4.3: The FT for applying the Reduction algorithm

For reducing these cut sets, 36 comparisons would have to be made. By restricting oneself solely to the cut sets containing the repeated event F, that is, the cut sets {F}, {D, F} and {E, F} (the other cut sets already being minimal), we only have to carry out 3 comparisons. The minimal cut sets are: {A}, {B}, {C}, {F}, {H}, {D, G}, {E, G}.

For a better understanding, we can now compare how many reductions are required when applying the FATRAM algorithm to the fault tree of Figure 4.3:
  1. {A, B, G3.G4, C, G6}: the reduction requires a maximal number of comparisons equal to 10.
  2. {A, B, G3.G4, G6, F, G5, F}: the reduction requires a maximal number of comparisons equal to 21 (or 6 if we proceed group by group).
  3. {A, B, D.G, E.G, H, C, F} are the minimal cut sets.

Example 2 of applying the Reduction algorithm to an FT

The Reduction algorithm can also be combined with the FATRAM algorithm. In this example we show how the combination with FATRAM can be performed. Consider the fault tree given in Figure 4.2. By FATRAM we obtain:
  1. {A.G3.B, A.G3.E, A.G3.D.G5}: reduction on 3 cut sets; with the Reduction algorithm, no reduction.
  2. {A.G3.B, A.G3.E, A.G3.D.G5, A.B, A.B.E, A.B.D.G5}: reduction on 6 cut sets; with the Reduction algorithm, reduction on 4 cut sets.
  3. {A.G3.E, A.G3.D.G5, A.C.E, A.C.D}: reduction on 5 cut sets; with the Reduction algorithm, reduction on 3 cut sets.

In conclusion, the FATRAM algorithm obtains all the minimal cut sets after a maximal number of comparisons equal to 28, whereas FATRAM combined with the Reduction algorithm obtains all the minimal cut sets after a maximal number of comparisons equal to 9.

4.4 Binary Decision Diagrams

In recent years, new algorithms concerning fault trees have been proposed, designated under the term binary decision diagrams (BDD). The algorithms based on BDDs are exceptionally fast. They have made it possible to treat difficult test cases in a very short time (seconds) instead of the very long time periods (hours, even days) required even by the most efficient traditional algorithms (e.g. the MOCUS algorithm).

Nevertheless, it has to be noted that these algorithms have not altered the nature of the problem, which continues to be NP-complete. As a result, there can be cases that are difficult to solve even with these algorithms.

A binary decision diagram [Rau93] is a graph encoding Shannon's decomposition of a formula. BDDs have two important features: (1) the graphs are compacted by sharing equivalent subgraphs, and (2) the results of operations performed on BDDs are memorised, and thus a job is never performed twice. These two features make BDDs one of the most efficient methods for the management of Boolean formulae.

This concept, however, comprises a larger number of algorithms (see Figure 4.4) which
are essential for the computation of minimal cut sets by means of BDDs:

Figure 4.4: BDD procedure steps

Step (1) – Algorithm for ordering basic events (see Section 4.4.3)
This algorithm is used to find the best possible sequence of basic events in order to construct a compact BDD with a minimum number of nodes. The result of this algorithm is the obtained sequence of basic events.

Step (2) – Algorithm of a binary operation between two BDDs (see Section 4.4.4)
This algorithm is used to combine two BDDs representing two Boolean formulae by means of a binary connective (e.g. and, or). The result of this algorithm is a new, composed BDD.

Step (3) – Algorithm for the construction of the BDD from the FT (see Section 4.4.4)
This algorithm is used to transform a FT into a BDD. The result of this algorithm is a BDD obtained from the corresponding FT.

Step (4) – Algorithm for implementing the "without" operator (see Section 4.4.4)
This algorithm is used to realise the "without" operator between two BDDs, so that all the paths included in a path of the second BDD are removed from the first one. The result of this algorithm is a new BDD without the removed paths.

Step (5) – Algorithm for computing the BDD encoding the minimal solutions (see Section 4.4.4)
This algorithm is used to compute the BDD encoding the minimal solutions according to the theorem of the characterisation of minimal solutions. The result of this algorithm is a new BDD containing only the set of paths from the root to leaf 1 that encode minimal solutions.

Step (6) – Algorithm for computing solutions defined by a BDD (see Section 4.4.4)
This algorithm is used to compute the solutions defined by paths from the root of a BDD to leaf 1. The result of this algorithm is a set of solutions such that each solution is in turn a set of nodes defining a path from the root of the BDD to leaf 1.

Step (7) – Algorithm for computing the probability of the root event (see Section 4.4.4)
This algorithm is used to compute the probability of the root event if the probabilities of the terminal events are known. The result of this algorithm is the probability of the root event.
4.4.1 Shannon's Decomposition

Definition. Let X = {x_1, ..., x_n} be a set of Boolean variables. An assignment of X is a mapping from X into {0, 1}.

Theorem (Shannon's decomposition). Let f be a Boolean function on X and let x be a variable of X. Then f = (x ∧ f_{x=1}) ∨ (¬x ∧ f_{x=0}), where f evaluated at x = v is denoted by f_{x=v}.

In order to make the notation correspond with the intuitive notion of the binary tree induced by the Shannon decomposition of a function, we introduce the ite (If-Then-Else) connective:

Definition. ite(F, G, H) = (F ∧ G) ∨ (¬F ∧ H)

Each node in the BDD can be written in ite format, which represents an ordered triple with a variable, a pointer to the one-branch and a pointer to the zero-branch. Thus, ite(A, f_1, f_2) can be interpreted as: if A, then consider function f_1, else consider function f_2.

The usual operations between Boolean functions can be carried out with the help of this connective. For example:

    F = ite(F, 1, 0)
    G = ite(F, G, G)
    F ∧ G = ite(F, G, 0)
    F ∨ G = ite(F, 1, G)

Definition. Let F be a Boolean formula. F is said to be in Shannon's form if either it is a constant or it is of the form ite(x, G, H), where x is a variable and G and H are formulae in Shannon's form in which x does not occur.

Property. For any formula F there exists a formula G in Shannon's form equivalent to F.

Example 1 of applying Shannon's form

    (a ∨ c) ∧ (b ∨ c)
    = (a ∧ ((1 ∨ c) ∧ (b ∨ c))) ∨ (¬a ∧ ((0 ∨ c) ∧ (b ∨ c)))
    = (a ∧ (1 ∧ (b ∨ c))) ∨ (¬a ∧ (c ∧ (b ∨ c)))
    = (a ∧ (b ∨ c)) ∨ (¬a ∧ c)
    = (a ∧ ((b ∧ (1 ∨ c)) ∨ (¬b ∧ (0 ∨ c)))) ∨ (¬a ∧ c)
    = (a ∧ ((b ∧ 1) ∨ (¬b ∧ c))) ∨ (¬a ∧ c)
    = (a ∧ (b ∨ (¬b ∧ c))) ∨ (¬a ∧ c)
    = (a ∧ (b ∨ (¬b ∧ ((c ∧ 1) ∨ (¬c ∧ 0))))) ∨ (¬a ∧ ((c ∧ 1) ∨ (¬c ∧ 0)))
    = ite(a, ite(b, 1, ite(c, 1, 0)), ite(c, 1, 0))

The BDD associated with the formula (a ∨ c) ∧ (b ∨ c) = ite(a, ite(b, 1, ite(c, 1, 0)), ite(c, 1, 0)) is shown in Figure 4.5.

Figure 4.5: The BDD associated with (a ∨ c) ∧ (b ∨ c)

4.4.2 Directed Acyclic Graph

Definition. Let < be a total order on the variables. A binary decision diagram (BDD) is a directed acyclic graph (DAG). A BDD has two leaves, 0 and 1, encoding the two corresponding constant functions. Each internal node encodes an ite connective, i.e., it is labelled with a variable x and has two out-edges, called the 0-edge and the 1-edge. An internal node that is a descendant of a node labelled by a variable x is labelled by a variable y such that x < y.

A BDD is illustrated in Figure 4.6. The diagram is entered at the root vertex. Each vertex or node represents a basic event from the fault tree and has two exit branches below it. If the event occurs, the node is left on the 1-branch; for the non-occurrence of the basic event, the node is left on the 0-branch.
When the set of component conditions is such that the top event is determined, a terminal-one node or a terminal-zero node is reached. A terminal-one vertex indicates top event occurrence and a terminal-zero vertex top event non-occurrence. This encodes the structure function of the fault tree. The entire BDD in Figure 4.6 can be expressed using the following notation (see Shannon's decomposition): ite(A, 1, ite(B, ite(C, 1, 0), 0)).

Figure 4.6: Binary decision diagram structure

Cut sets are combinations of component failures which cause the top event. On the BDD these can be tracked as component failure events which lead to a terminal-one vertex. The BDD in Figure 4.6 therefore has the CSs {A} and {B, C}. In this case the CSs are minimal.

However, the significant advantage of transforming the FT into the BDD form is gained when quantifying the top event probability and failure intensity. These can be obtained directly from the BDD without the need to produce a list of the minimal cut sets as an intermediate stage or to resort to approximations.

4.4.3 Priority Ordering Method of Basic Events

Before converting the fault tree into a BDD, the ordering sequence of the basic events in the fault tree must be determined. The sequence of the basic events can directly influence the size of a BDD: with different ordering schemes, the same fault tree can be converted into different BDDs. In general, the smaller the BDD, the less computation time is required.

In practical systems the relationships between the basic events in a fault tree are quite complex. Hence, when ordering basic events, it is necessary to consider multiple factors to obtain an optimal result. Specifically, factors such as the number of layers, repeated events, neighbouring events and the number of basic events are considered and prioritised by the priority ordering method [Hul11]:

  (1) Priority 1: number of layers. The higher the layer in which the basic event resides, the greater the effect it has on the top event. Therefore, when ordering basic events, the number of layers can be treated as the primary factor, and the subtree with the smaller number of layers has a higher priority.
  (2) Priority 2: repeated events. The same basic event can appear in many different subtrees and may have an effect on these subtrees and on other basic events. When calculating the influence of the basic events on the top event, repeated events can have a cumulative effect and may have a greater effect than the non-repeated basic events on the same layer of a fault tree. However, repeated events are only considered from the basic event aspect and treated as more than one basic event, whereas the number of layers is considered from the subtree aspect. Therefore, a repeated event is less important than the number of layers and is considered as the secondary factor when ordering the basic events. The event with the larger number of repetitions is assigned a higher priority.
  (3) Priority 3: neighbouring events. When a basic event or gate event fails, its neighbourhood is influenced first. The neighbouring event of an ordered event has a great influence only on the basic events, whereas its influence on the top event may be much lower. Therefore, it is treated as the third factor, and the neighbouring event of an ordered event has a higher priority.
  (4) Priority 4: number of basic events. The number of basic events does not refer to the basic events of the whole subtree, but to the basic events under the same gate at the same layer. If there are fewer basic events in the subtree, its influence on the top event may be greater than that of subtrees with more basic events. This factor is also considered from the basic event point of view and may have less effect on the top event than the first three factors. As a result, it is considered as the fourth factor, and the subtree with fewer basic events has a higher priority.
  (5) Priority 5: from left to right. If the previously mentioned factors are not applicable, the left-to-right scheme is adopted to order the basic events.

Example 1 of applying the priority ordering method

For a better understanding of this approach, the FT shown in Figure 4.7 is given as an example to illustrate the priority ordering method.

Before the application of the priority ordering method, the fault tree should be simplified, because the child events of G8 contain the basic event C and the brother events of G8 also contain C. Hence, the whole gate event G8 is deleted on the basis of the absorption law of Boolean algebra:

    (F ∨ D ∨ C) ∧ C ∧ G = C ∧ G

G6 is an AND gate, and its child event G9 is also an AND gate. G9 is hence deleted, and its subevents B and I are moved up and become child events of G6 on the basis of the associative law of Boolean algebra:

    D ∧ (B ∧ I) = D ∧ B ∧ I

Figure 4.7: A FT used to illustrate the priority ordering method

Figure 4.8 shows the simplified fault tree model.

Figure 4.8: The simplified FT of the fault tree in Figure 4.7

The specific steps to order the basic events of the simplified fault tree using the priority ordering method are as follows:
  (1) In this FT, the child events of the top event contain only one basic event. We therefore choose this basic event and obtain the first part of the order, φ1 = {A}.
  (2) G1 and G2 are the two brother gate events of A. Comparing their sizes, G2 has only two layers whereas G1 has three. Therefore, G2 is smaller and is ordered first.
  (3) The child events of G2 contain two gate events, G5 and G6. By comparison, one can find that both gate events have only one layer, so they cannot be distinguished according to Priority 1. Through Priority 2, one finds that both gate events contain the repeated event I; hence, this priority does not decide either. Next, Priority 3, the neighbour-first ordering principle, is used to compare G5 and G6. Since neither contains a basic event that has already been ordered, Priority 3 is not applicable either. Then, in accordance with Priority 4, G5 has two child events and G6 has three; G5 is ordered first accordingly. The child event I in G5 is a repeated event and has a higher priority. The sequence φ2 = {I, H} is hence obtained. Then G6 is ordered; the repeated event I is ordered first. The remaining basic events D and B are ordered according to Priority 5, from left to right. This results in the sequence φ3 = {I, D, B}.
  (4) Next, choosing from the remaining brother gate events of G2, we have G1. Its child events contain two gate events, G3 and G4. G3 has two layers and G4 has only one. On the basis of Priority 1, G4 takes priority, and this results in the sequence φ4 = {C, G}. Gate event G3 has one gate event and two basic events. The basic events are chosen and ordered with Priority 5, from left to right, which results in the sequence φ5 = {E, F}. Then the gate event is ordered. Because all the child events of G7 have already been ordered, the ordering is complete.

Finally, the sequence of basic events is as follows (in the formula, "A < I"¹³ means that A is ordered before I):

    A < I < H < D < B < C < G < E < F

¹³ We say that A is smaller than I (A < I) if A comes before I in the variable order, i.e., A is higher up in the BDD.

Then, through Shannon's decomposition, the BDD is generated using this order, as shown in Figure 4.9.
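Before turning to the second ordering example, it is worth illustrating how, once a BDD exists, the cut sets and the top event probability mentioned in Section 4.4.2 and in step (7) of the BDD procedure can be read off it. The sketch below uses the small BDD of Figure 4.6 rather than Figure 4.9, since its structure is fully given in the text; the nested-triple encoding of ite nodes and the function names are assumptions of this sketch.

    def cut_sets(node, path=()):
        """Collect the basic events along every path from the root to the
        terminal-one leaf; only 1-edges contribute events to a cut set."""
        if node == 1:
            return [set(path)]
        if node == 0:
            return []
        var, hi, lo = node
        return cut_sets(hi, path + (var,)) + cut_sets(lo, path)


    def probability(node, p):
        """Top event probability by Shannon's decomposition:
        P(f) = P(x) * P(f_{x=1}) + (1 - P(x)) * P(f_{x=0})."""
        if node in (0, 1):
            return float(node)
        var, hi, lo = node
        return p[var] * probability(hi, p) + (1.0 - p[var]) * probability(lo, p)


    # The BDD of Figure 4.6: ite(A, 1, ite(B, ite(C, 1, 0), 0)).
    bdd = ("A", 1, ("B", ("C", 1, 0), 0))
    print(cut_sets(bdd))                                       # [{'A'}, {'B', 'C'}]
    print(probability(bdd, {"A": 0.01, "B": 0.05, "C": 0.1}))  # hypothetical basic event probabilities

The path enumeration yields exactly the cut sets {A} and {B, C} stated in Section 4.4.2, and the probability recursion is a direct application of Shannon's decomposition to the BDD, without producing the cut set list as an intermediate stage.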
Figure 4.9: The BDD generated using the order obtained from the priority ordering method

Example 2 of applying the priority ordering method
Consider the fault tree given in Figure 4.1. Before the priority ordering method is applied, the FT should be simplified: the gate G2 is an AND gate, and its child event G5 is also an AND gate. G5 is hence deleted, and its subevents F and G are moved up and become child events of G2 on the basis of the associative law of Boolean algebra: (D ∨ E) ∧ (F ∧ G) = (D ∨ E) ∧ F ∧ G.
Figure 4.10 shows the simplified fault tree model.
Figure 4.10: The simplified FT of the tree in Figure 4.1

The specific steps to order the basic events of the simplified fault tree using the priority ordering method are as follows:
(1) In this fault tree, the child events of the top event include two basic events. We therefore choose these basic events and obtain the first order, φ1 = {A, B}.
(2) G1 is the only brother gate event of A and B and is therefore ordered first.
(3) The child events of G1 contain two gate events, G2 and G3. By comparing them, one finds that both gate events have two layers, so they cannot be distinguished according to Priority 1. Through Priority 2, one finds that, unlike G3, G2 contains the repeated event F. G2 is therefore ordered first. The child event F in G2 is a repeated event and has a higher priority. The sequence φ2 = {F, G} is hence obtained. Then the gate event G4 is ordered. The child event E in G4 is a repeated event and has a higher priority. The sequence φ3 = {E, D} is hence obtained.
(4) Next, choosing from the remaining brother gate events of G2, we have G3. G3 contains only one basic event. This results in the sequence φ4 = {C}. Then the gate event G6 is ordered. Because all the child events of G6 have already been ordered, the ordering is complete.
Finally, the sequence of basic events is as follows:
A < B < F < G < E < D < C
Then, through Shannon's decomposition, the BDD should be generated using the order identified above. The concrete algorithm of how the BDD can be generated from the given fault tree (see Figure 4.10), considering the determined order of the basic events, is presented next.

4.4.4 BDD Algorithms in Fault Tree Analysis
In Section 4.4.3 we presented an efficient method for obtaining the ordering sequence of the basic events in a fault tree. This technique is a very important step in transforming a fault tree into a binary decision diagram, because if the ordering of the basic events is not chosen suitably, the size of the final BDD can grow exponentially. After obtaining the order of the basic events, the BDD algorithms can be applied to compute the MCSs of the BDD converted from the corresponding fault tree. These algorithms can be classified as follows:
– Algorithm of a binary operation between two BDDs
– Algorithm for the construction of the BDD from the FT
– Algorithm for implementing the „\“-operator
– Algorithm for computing the BDD encoding the minimal solutions
– Algorithm for computing the solutions defined by a BDD
– Algorithm for computing the probability of the root event if the probabilities of the terminal events are given

Algorithm of a binary operation between two BDDs
Property (Logical operations on BDDs). Let x and y be two variables (x < y), let G1, G2, H1, H2 be four formulae and let ◊ be any binary connective (and, or); then the following equalities hold:
ite(x, G1, G2) ◊ ite(x, H1, H2) = ite(x, G1 ◊ H1, G2 ◊ H2)
ite(x, G1, G2) ◊ ite(y, H1, H2) = ite(x, G1 ◊ ite(y, H1, H2), G2 ◊ ite(y, H1, H2))
Assume that we have built the two BDDs G = ite(a, 1, ite(c, 1, 0)) and H = ite(b, 1, ite(c, 1, 0)) encoding respectively the subformulae (a ∨ c) and (b ∨ c) of the formula (a ∨ c) ∧ (b ∨ c). Now, in order to compute the BDD associated with the formula itself, we need to compute G ∧ H.
G ∧ H = ite(a, 1 ∧ ite(b, 1, ite(c, 1, 0)), ite(c, 1, 0) ∧ ite(b, 1, ite(c, 1, 0)))
      = ite(a, ite(b, 1, ite(c, 1, 0)), ite(c, 1, 0) ∧ ite(b, 1, ite(c, 1, 0)))
      = ite(a, ite(b, 1, ite(c, 1, 0)), ite(b, 1 ∧ ite(c, 1, 0), ite(c, 1, 0) ∧ ite(c, 1, 0)))
      = ite(a, ite(b, 1, ite(c, 1, 0)), ite(b, ite(c, 1, 0), ite(c, 1, 0)))
      = ite(a, ite(b, 1, ite(c, 1, 0)), ite(c, 1, 0))

These two equalities clearly suggest a recursive procedure [Rau93] (see Figure 4.11).

computation(op, F, G) =
  if ((F=0) or (F=1) or (G=0) or (G=1)) return op(F, G) /* call to the truth table of op */
  else if (computation-table has entry {<op, F, G>, R}) return R
  else
    let x be the least14 variable of F and G
    U ← computation(op, F{x=1}, G{x=1})
    V ← computation(op, F{x=0}, G{x=0})
    if (U = V) return U
    else
      R ← find-or-add-ite-table(x, U, V)
      insert-in-computation-table({<op, F, G>, R})
      return R
Figure 4.11: Algorithm of a binary operation between two BDDs

14 At this point, we make use of the ordering sequence obtained according to Section 4.4.3 and select the smallest variable from it.

To compute the BDD associated with G ∧ H programmatically, let us apply the algorithm in Figure 4.11.

computation(and, ite(a, 1, ite(c, 1, 0)), ite(b, 1, ite(c, 1, 0))) =
  /* let a be the least variable of F and G – that means a < b < c */
  U1 = computation(and, ite(1, 1, ite(c, 1, 0)), ite(b, 1, ite(c, 1, 0)))
     = computation(and, 1, ite(b, 1, ite(c, 1, 0))) = ite(b, 1, ite(c, 1, 0))
  V1 = computation(and, ite(0, 1, ite(c, 1, 0)), ite(b, 1, ite(c, 1, 0)))
     = computation(and, ite(c, 1, 0), ite(b, 1, ite(c, 1, 0)))
       /* let b be the least variable of F and G */
       UV1 = computation(and, ite(c, 1, 0), ite(1, 1, ite(c, 1, 0)))
           = computation(and, ite(c, 1, 0), 1) = ite(c, 1, 0)
       VV1 = computation(and, ite(c, 1, 0), ite(0, 1, ite(c, 1, 0)))
           = computation(and, ite(c, 1, 0), ite(c, 1, 0)) =
             /* let c be the least variable of F and G */
             UVV1 = computation(and, ite(1, 1, 0), ite(1, 1, 0)) = computation(and, 1, 1) = 1
             VVV1 = computation(and, ite(0, 1, 0), ite(0, 1, 0)) =
                    computation(and, 0, 0) = 0
       thus, VV1 = ite(c, 1, 0)
  thus, V1 = ite(c, 1, 0)
thus, the result is ite(a, ite(b, 1, ite(c, 1, 0)), ite(c, 1, 0)).

Algorithm for the construction of the BDD from the FT
In this section we present an algorithm for the construction of the BDD from the FT [Lim07].

ft-to-bdd(node) =
  if (node is a basic event) return ite(node, 1, 0)
  else /* node is an operator node/intermediate node */
    op ← the operator associated with node
    j ← first child of node
    R ← ft-to-bdd(j)
    for (all children i of node with i ≠ j)
      F ← ft-to-bdd(i)
      R ← computation(op, R, F)
    end for
    return R
Figure 4.12: Algorithm for the construction of the BDD from the FT

Let us consider the Boolean formula (a ∨ c) ∧ (b ∨ c). To compute the BDD from the FT corresponding to this formula (see Figure 4.13) programmatically, we perform the following steps:

Figure 4.13: The FT for applying the FT-to-BDD Algorithm
ft-to-bdd(Top) =
  OP1 = and
  J1 = G1
  R1 = ft-to-bdd(J1) = ft-to-bdd(G1) =
    OP11 = or
    J11 = A
    R11 = ft-to-bdd(J11) = ft-to-bdd(A) = ite(A, 1, 0)
    /* next node of G1 */
    I11 = C
    F11 = ft-to-bdd(I11) = ft-to-bdd(C) = ite(C, 1, 0)
    R11 = computation(OP11, R11, F11) = computation(or, ite(A, 1, 0), ite(C, 1, 0))
      /* let A be the least variable of formulae F and G */
      U1 = computation(or, ite(1, 1, 0), ite(C, 1, 0)) = computation(or, 1, ite(C, 1, 0)) = 1
      V1 = computation(or, ite(0, 1, 0), ite(C, 1, 0)) = computation(or, 0, ite(C, 1, 0)) = ite(C, 1, 0)
    thus, R11 = ite(A, 1, ite(C, 1, 0))
  thus, R1 = ite(A, 1, ite(C, 1, 0))
  /* next node of Top Event */
  I1 = G2
  F1 = ft-to-bdd(I1) = ft-to-bdd(G2) =
    OP12 = or
    J12 = B
    R12 = ft-to-bdd(J12) = ft-to-bdd(B) = ite(B, 1, 0)
    /* next node of G2 */
    I12 = C
    F12 = ft-to-bdd(I12) = ft-to-bdd(C) = ite(C, 1, 0)
    R12 = computation(OP12, R12, F12) = computation(or, ite(B, 1, 0), ite(C, 1, 0)) =
      /* let B be the least variable of formulae F and G */
      U2 = computation(or, ite(1, 1, 0), ite(C, 1, 0)) = computation(or, 1, ite(C, 1, 0)) = 1
      V2 = computation(or, ite(0, 1, 0), ite(C, 1, 0)) = computation(or, 0, ite(C, 1, 0)) = ite(C, 1, 0)
    thus, R12 = ite(B, 1, ite(C, 1, 0))
  thus, F1 = ite(B, 1, ite(C, 1, 0))
  R1 = computation(OP1, R1, F1) = computation(and, ite(A, 1, ite(C, 1, 0)), ite(B, 1, ite(C, 1, 0)))
    /* compare the result from the example of the computation algorithm */
  thus, R1 = ite(A, ite(B, 1, ite(C, 1, 0)), ite(C, 1, 0))
thus, the result is ite(A, ite(B, 1, ite(C, 1, 0)), ite(C, 1, 0)).

Algorithm for implementing the „\“-operator
Theorem (Characterization of the minimal solutions). Let F = ite(x, G, H) be a monotonic Boolean formula in Shannon's form. Then, the following equality holds:
Solmin(F) = {σ │ σ = δ ∪ {x} ∧ δ ∈ Solmin(G) ∧ δ(H) = 0} ∪ Solmin(H)
By means of the described theorem, the minimal solutions of the BDD
ite(a, ite(b, 1, ite(c, 1, 0)), ite(c, 1, 0)) are: the minimal solutions of the BDD ite(c, 1, 0) – i.e., {c} – and the minimal solutions of the BDD ite(b, 1, ite(c, 1, 0)) – i.e., {b} and {c} – that are not solutions of the BDD ite(c, 1, 0) – i.e., {b} – augmented with a – i.e., {a, b}. Finally, we have: Solmin(ite(a, ite(b, 1, ite(c, 1, 0)), ite(c, 1, 0))) = {{a, b}, {c}}.

This theorem suggests a recursive algorithm to compute the BDD Fmin encoding the minimal solutions of F. It consists in computing Gmin and Hmin encoding the minimal solutions of G and H, then removing from Gmin all the paths included in a path of H, and finally composing the two obtained BDDs with x in order to obtain Fmin. For this purpose, we define a new remove operation, the without operator, denoted by „\“.

Corollary. Let F = ite(x, F1, F2) and G = ite(x, G1, G2) be two BDDs such that G encodes a monotonic function. Then, F \ G = ite(x, F1 \ G1, F2 \ G2).

The following algorithm, presented by [Rau93], realizes the „\“-operator.

without(F, G) =
  if (F = 0) return 0
  else if (G = 1) return 0
  else if (G = 0) return F
  else if (F = 1) return 1
  else if (computation-table has entry {<without, F, G>, R}) return R
  else
    /* F = ite(x, F1, F2) */
    /* G = ite(y, G1, G2) */
    if (x < y)
      U ← without(F1, G)
      V ← without(F2, G)
      R ← find-or-add-ite-table(x, U, V)
      insert-in-computation-table({<without, F, G>, R})
      return R
    else if (x > y)
      return without(F, G2)
    else /* x = y */
      U ← without(F1, G1)
      V ← without(F2, G2)
      R ← find-or-add-ite-table(x, U, V)
      insert-in-computation-table({<without, F, G>, R})
      return R
Figure 4.14: Algorithm for computing F \ G

Algorithm for computing minimal solutions
The theorem about the characterization of the minimal solutions and the subsequent definition of the „\“-operator allow the description of an algorithm that computes the BDD encoding the minimal solutions of a monotonic Boolean function from the BDD encoding this function.

minsol(F) =
  if ((F = 0) or (F = 1)) return F
  else if (computation-table has entry {<minsol, F, _>, R}) return R
  else
    /* F = ite(x, G, H) */
    K ← minsol(G)
    U ← without(K, H) /* H is monotonic */
    V ← minsol(H)
    R ← find-or-add-ite-table(x, U, V)
    insert-in-computation-table({<minsol, F, _>, R})
    return R
Figure 4.15: Algorithm for computing minimal solutions

Algorithm for computing solutions defined by a BDD
Property (Solutions defined by the paths in a BDD). Let ƒ be a Boolean function encoded by the BDD F and let σ be a solution of ƒ; then a path exists from the root of F to the leaf 1 which defines a solution δ of ƒ such that δ is included in σ.
Corollary. If σ is a minimal solution of ƒ, then there exists exactly one path from the root of F to the leaf 1 defining σ.
The algorithm for computing the solutions defined by a BDD can be realized as follows:

solutions(F, σ) =
  if (F = 0) return ∅
  else if (F = 1) return {σ}
  else
    /* F = ite(x, F1, F2) */
    S ← solutions(F1, σ + {x}) /* σ + {x} ≡ σ ∪ {x} */
    T ← solutions(F2, σ)
    return S + T /* S + T ≡ S ∪ T */
Figure 4.16: Algorithm for computing solutions defined by a BDD

Example of applying the algorithm for computing minimal solutions
Let us apply the algorithm to ite(a, ite(b, 1, ite(c, 1, 0)), ite(c, 1, 0)):

minsol(ite(a, ite(b, 1, ite(c, 1, 0)), ite(c, 1, 0))) =
  K1 = minsol(ite(b, 1, ite(c, 1, 0)))
    K11 = minsol(1) = 1
    U11 = without(1, ite(c, 1, 0)) = 1
    V11 = minsol(ite(c, 1, 0)) =
      KV11 = minsol(1) = 1
      UV11 = without(1, 0) = 1
      VV11 = minsol(0) = 0
    thus, V11 = ite(c, 1, 0)
  thus, K1 = ite(b, 1, ite(c, 1, 0))
  U1 = without(K1, ite(c, 1, 0)) = without(ite(b, 1, ite(c, 1, 0)), ite(c, 1, 0)) =
    U12 = without(1, ite(c, 1, 0)) = 1 /* b < c – first case */
    V12 = without(ite(c, 1, 0), ite(c, 1, 0))
      UV12 = without(1, 1) = 0 /* c = c – third case */
      VV12 = without(0, 0) = 0
    thus, V12 = 0
  thus, U1 = ite(b, 1, 0)
  V1 = minsol(ite(c, 1, 0)) = V11 = ite(c, 1, 0) /* memorization of results */
thus, the result is ite(a, ite(b, 1, 0), ite(c, 1, 0)).

It is now easy to verify that the solutions of this BDD obtained by the procedure in Figure 4.16 are {a, b} and {c}, i.e., the minimal solutions of the considered formula:

solutions(ite(a, ite(b, 1, 0), ite(c, 1, 0)), { }) =
  S1 = solutions(ite(b, 1, 0), { } + {a}) =
    SS1 = solutions(1, {a} + {b}) = {{a, b}}
    TS1 = solutions(0, {a}) = ∅
  thus, S1 = SS1 + TS1 = {{a, b}}
  T1 = solutions(ite(c, 1, 0), { }) =
    ST1 = solutions(1, { } + {c}) = {{c}}
    TT1 = solutions(0, { }) = ∅
  thus, T1 = ST1 + TT1 = {{c}}
thus, the result is S1 + T1 = {{a, b}} + {{c}} = {{a, b}, {c}}.

Algorithm for computing p(F)
Theorem (Shannon's decomposition 2). Let ƒ be a function and x a variable occurring in ƒ; then the following equality holds:
p(ƒ) = p(x=1) ∗ p(ƒ{x=1}) + p(x=0) ∗ p(ƒ{x=0})

probability(F) =
  if (F = 0) return 0
  else if (F = 1) return 1
  else if (computation-table has entry {<probability, F, _>, R}) return R
  else
    /* F = ite(x, G, H) */
    R ← p(x) * probability(G) + (1 – p(x)) * probability(H)
    insert-in-computation-table({<probability, F, _>, R})
    return R
Figure 4.17: Algorithm for computing p(F)

4.4.5 Algorithmic Complexity
The complexity of a BDD [Lim07] is measured by its size, that is, by the number of vertices it contains. The size of a Shannon tree representing a Boolean function of n variables is
2^0 + 2^1 + 2^2 + ... + 2^n = 2^(n+1) − 1.
The maximum size of a reduced Shannon tree, i.e., of a BDD, representing a Boolean function of n variables is of the order of 2^n/n; the bound given in [Lia92] involves a constant ε ≃ 3.125 and was obtained by considering only the reductions due to equivalent vertices.

The introduction of BDD-based methods has deeply impacted the framework of fault tree assessment algorithms. In most cases, they outperform the other methods. However, BDDs are also subject to combinatorial explosion. For very large models [Rau03] with
– more than 1300 basic events,
– more than 2500 gate variables,
– more than 850 replicated events15, and
– more than 250 singular input variables16,
the BDD that encodes the model might not be computable within a reasonable amount of time and computer memory. The difficulty is that, in contrast to MOCUS-like algorithms, the BDD technology does not easily support approximations. A BDD-based tool might therefore be unable to provide any result at all, which is unacceptable in practice. Based on experimental results obtained on a benchmark of such large models, Rauzy [Rau03] shows that, after several improvements and an efficient implementation, the MOCUS algorithm represents an efficient alternative to the BDD method precisely because it supports approximations for very large fault trees of the kind mentioned above.

15 A variable is replicated if it occurs more than once in the set of equations.
16 A variable is singular if there is only one path from the variable to the top event.
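To close this section, the algorithms of Section 4.4.4 can be made concrete with a compact Python sketch. The tuple encoding of ite(x, high, low) nodes, the variable ORDER list and the function names below are illustrative assumptions, not the notation of [Rau93] or [Lim07], and the computation table (memoization) is omitted for brevity; applied to the worked example (a ∨ c) ∧ (b ∨ c), the sketch reproduces the minimal solutions {{a, b}, {c}}.

ORDER = ['a', 'b', 'c']                      # variable order from the ordering step
def rank(v): return ORDER.index(v)

def node(x, high, low):
    # find-or-add-ite-table, reduced here to the "U = V" simplification
    return high if high == low else (x, high, low)

def AND(F, G):
    # truth-table rules, used when at least one operand is a terminal (0 or 1)
    if F == 0 or G == 0: return 0
    return G if F == 1 else F

def OR(F, G):
    if F == 1 or G == 1: return 1
    return G if F == 0 else F

def apply_op(op, F, G):
    """'computation' (Figure 4.11), without the computation table."""
    if F in (0, 1) or G in (0, 1):
        return op(F, G)
    x = F[0] if rank(F[0]) <= rank(G[0]) else G[0]   # least variable of F and G
    f1, f0 = (F[1], F[2]) if F[0] == x else (F, F)   # cofactors F{x=1}, F{x=0}
    g1, g0 = (G[1], G[2]) if G[0] == x else (G, G)
    return node(x, apply_op(op, f1, g1), apply_op(op, f0, g0))

def without(F, G):
    """The without operator (Figure 4.14); G must encode a monotonic function."""
    if F == 0 or G == 1: return 0
    if G == 0: return F
    if F == 1: return 1
    x, y = F[0], G[0]
    if rank(x) < rank(y):
        return node(x, without(F[1], G), without(F[2], G))
    if rank(x) > rank(y):
        return without(F, G[2])
    return node(x, without(F[1], G[1]), without(F[2], G[2]))

def minsol(F):
    """BDD encoding the minimal solutions of a monotonic function (Figure 4.15)."""
    if F in (0, 1): return F
    x, G, H = F
    return node(x, without(minsol(G), H), minsol(H))

def solutions(F, sigma=frozenset()):
    """Solutions defined by the paths of F ending in leaf 1 (Figure 4.16)."""
    if F == 0: return set()
    if F == 1: return {sigma}
    x, F1, F2 = F
    return solutions(F1, sigma | {x}) | solutions(F2, sigma)

def probability(F, p):
    """p(F) via Shannon's decomposition (Figure 4.17), given basic-event probabilities p."""
    if F in (0, 1): return F
    x, G, H = F
    return p[x] * probability(G, p) + (1 - p[x]) * probability(H, p)

# Worked example: (a v c) ^ (b v c)
G = apply_op(OR, ('a', 1, 0), ('c', 1, 0))   # ite(a, 1, ite(c, 1, 0))
H = apply_op(OR, ('b', 1, 0), ('c', 1, 0))   # ite(b, 1, ite(c, 1, 0))
F = apply_op(AND, G, H)                      # ite(a, ite(b, 1, ite(c, 1, 0)), ite(c, 1, 0))
mcs = solutions(minsol(F))
print(sorted(sorted(s) for s in mcs))        # [['a', 'b'], ['c']] -- the minimal cut sets
print(probability(F, {'a': 0.1, 'b': 0.2, 'c': 0.05}))   # approx. 0.069 for these assumed values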
